As we share ever more of our lives online, Big Data – the cumulative sum of worldwide use of computers, smartphones, and other Internet-connected devices – paints an increasingly complete picture of who we are, where we go, and how we spend our time. The collection and use of such sensitive data by third-party companies have begun to pose unique problems for both individuals and society as a whole.
Big Data Security and Privacy Concerns
Big Data analysis strategies change daily, and the full implications of collecting and using such data are still unknown. Two primary concerns dominate the discussion: security risks resulting from irresponsible Big Data management and the effect data collection can have on users’ privacy.
- Data consumers fail to monitor access. Storing such a large amount of data, of various types and from various sources, makes it difficult to track who has viewed what. A 2014 survey by Information Week found that 28% of Big Data consumers encrypt only “some” of the sensitive information in their databases – and 18% don’t encrypt at all.
- The data is not stored locally. Sets of Big Data are too large to store on in-house servers, so they are outsourced to cloud services and remote servers. This complicates the audit process and makes it more difficult to determine which parts of the data were used in a particular query. Thus, a breach or misuse may go unnoticed.
Technology Review indicates that outdated security approaches and a lack of data use policies are often also to blame.
Data mining and analysis is a multibillion-dollar industry. More personal information is online than ever before – and data brokers are capitalizing on the supply. Privacy is a concern because of data brokers like Acxiom, a marketing technology company that reported having information on over 700 million consumers worldwide in 2014. Data brokers collect and evaluate Big Data, then sell the consumer insights they unearth to other businesses, which use the information to market and sell their services more effectively.
Meanwhile, China is making headlines for experimenting with Big Data to create a possible Social Credit System, which functions something like a super-detailed credit score to evaluate a person’s overall character. Factors include driving record, work performance reviews, number and quality of friends on social media, buying habits, and more. Like credit history, such a system could be used to recommend (or reject) a person for a job, a loan, social services like welfare – or even a date, if the score were made public, following the lead of popular app Credit Sesame.
If that sounds far-fetched, keep in mind that, post-9/11, Americans’ online activity is monitored just as closely by the National Security Agency. The NSA stated in summer 2015 that 54 terrorist attacks had been disrupted by Big Data monitoring, at a public cost of about $5 billion per year. There’s no talk of a U.S. Social Credit System yet – but the data is out there, and the algorithms for creating a social “score” out of that data already exist.
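To make the idea of a social “score” concrete, here is a purely hypothetical sketch. The factor names, weights, and scale below are invented for illustration and are not taken from any real scoring system; the point is simply that turning assorted personal data into a single number is a trivial computation once the data exists.

```python
# Hypothetical illustration only: these factor names and weights are
# invented, not drawn from any actual credit or social-scoring system.

# Each factor is assumed to be pre-normalized to a 0.0-1.0 scale.
WEIGHTS = {
    "driving_record": 0.25,
    "work_reviews": 0.25,
    "social_connections": 0.20,
    "buying_habits": 0.30,
}

def social_score(factors: dict) -> float:
    """Weighted sum of normalized factors, scaled to a 0-1000 range."""
    raw = sum(WEIGHTS[name] * value for name, value in factors.items())
    return round(raw * 1000, 1)

print(social_score({
    "driving_record": 0.9,      # clean record
    "work_reviews": 0.7,        # mostly positive reviews
    "social_connections": 0.5,  # average social footprint
    "buying_habits": 0.8,       # "approved" purchase patterns
}))  # prints 740.0
```

The simplicity is the point: the hard part of such a system is collecting the data, not scoring it.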
The future of Big Data is still unclear. In the right hands, it is a valuable source of knowledge that can make the world a safer, healthier place to live; in the wrong hands, it can enable identity theft, invasion of privacy, and possibly worse. While minimizing your Big Data footprint on social media is possible, that may not be the answer. The best defense is understanding how your data becomes accessible to others, and being intentional about what – and how much – you share online.