We’ve entered a new age based on technology and huge volumes of data. All these data will help to shape our future world.
What this means for us is we can make smarter decisions and act faster. But it hinges on our ability to read and make sense of it all.
The problem this creates is with so much data, how do we really make sense of it all?
Let’s look at the example of Google Person Finder.
In 2010, one of the largest earthquakes on record hit Haiti: 316,000 dead and over 3 million directly affected by this tragedy. Added to the horrors, thousands of people went missing.
In 2011, the world watched another major disaster. An earthquake hit just off the coast of Japan. It was so powerful that it moved Japan’s main island (Honshu) 8 feet east. The follow-on tsunami killed thousands, with thousands going missing.
Previously, there was no single reference point to find missing people in crises like these. At the time of these disasters, data flooded into networks and databases of different aid agencies. These data came from response teams, individuals, and aid agencies.
There was so much data that it jammed databases. As more data arrived, it became too difficult to filter them all and find missing people. What should have taken hours took days or weeks. A further problem was that agencies were not sharing data with one another.
At this point, Google stepped in and did what government couldn’t. With the software and technology they had at their disposal, they created a single search point to find missing persons. They called it “Google Person Finder.”
This meant people could jump onto Google Person Finder to search for missing loved ones. This was a better outcome for those seeking information. It also took the strain off emergency response teams trying to process all this information.
Now, this wouldn’t have been possible without two crucial parts to the equation:
1. A huge amount of data.
2. The technology to interpret all the data.
A Lifetime of Books Created Every Day

People who can read and interpret data the best and fastest will be the so-called “wizards” of the future.
You’ll be shocked at how much data we create on a daily basis. Someone has to make sense of it all. Humans can’t do it on their own. Computers lack the human creativity that data analysis demands. So entrepreneurial companies realize they must put the two together.
Just how much data am I talking about? IBM estimates we create about 2.5 exabytes of data every single day.
That’s the equivalent of about 625 million DVDs worth of new data, per day. Another way to look at it is that it’s more data than what’s in every book ever written.
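The DVD comparison is easy to check. As a rough sketch (the byte-per-DVD figure is my assumption, not the article’s: the 625 million number works out if you count a DVD as 4 GB, slightly under a single-layer disc’s nominal 4.7 GB):

```python
# Sanity check of the "625 million DVDs per day" figure.
# Assumptions: 1 exabyte = 10**18 bytes; 1 DVD = 4 * 10**9 bytes.
EXABYTE = 10**18
DVD_BYTES = 4 * 10**9  # single-layer DVDs are nominally 4.7 GB

daily_data_bytes = 2.5 * EXABYTE  # IBM's estimate of data created per day
dvds_per_day = daily_data_bytes / DVD_BYTES

print(f"{dvds_per_day / 1e6:.0f} million DVDs per day")  # → 625 million
```

Using the full 4.7 GB capacity instead would give roughly 530 million DVDs, so the order of magnitude holds either way.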
Cisco Systems estimates that by 2016, global Internet Protocol traffic will exceed 110 exabytes per month. On top of that, global mobile data traffic will exceed 11 exabytes per month.
All these data come from devices and sensors. Your phone, GPS, weather stations, CCTV, things you post online, new websites, etc. Data come from everywhere.
The collective term for all these data is very technical… it’s called “Big Data.”
But the Government Might Kick Down My Door!
I recently asked a group of friends the question, “Are you worried that you give away too much information?” The overwhelming answer was yes.
Our basic human nature means we want to keep personal information to ourselves. It’s rooted in a feeling of mistrust. Mistrust of governments and major corporations. And I can understand that.
However, there does seem to be an Orwellian belief that as we activate the GPS function on our smartphone, Big Brother will know exactly which cheek we just scratched.
There are also theories about the humble online search. Look up words such as “terrorism” or “al-Qaida” and the NSA, CIA, and FBI will kick in your front door. Next thing you know, you’ll be strung upside down at an “undisclosed location.”
Let’s think about this. What if it were harder for governments and organizations to know about us if we gave them more data? What if there were so much Big Data that they couldn’t tell the difference between a man and a mouse?
I know it sounds a bit daft, but there’s method to my madness. What if we could overload the systems of organizations by simply creating too much data for them?
This serves a double purpose. Blast a system with too much data and it overloads. It becomes a thick fog of Big Data. In short, it doesn’t have the technology to make sense of it all.
It could mean we have the information available to allow us to interact with our digital environments more efficiently, yet also hide from those that shouldn’t be able to see what we’re up to.
Alvin Toffler, a renowned writer and futurist, anticipated this phenomenon before the Internet even existed. In his book Future Shock, he described it simply as “information overload.”
In addition, it’s private industry, not governments, that have the technology and software to process Big Data and make sense of it. These companies are your typical Silicon Valley startups that have built their businesses around Big Data.
The ability to draw legitimate, meaningful information out of enormous data sets can make a company. When governments don’t have the capabilities to make sense of their data, they turn to those who do.
This Company Does What No Others Have Done Before

One example of this is a company called Palantir Technologies. They are a software provider. And their software helps organizations like the CIA and FBI interpret Big Data.
Palantir does more than just help government agencies. They also work with financial, scientific, and humanitarian organizations to help them make better decisions, helping find answers that are difficult to see in the fog of data.
Palantir is a familiar term for fans of J.R.R. Tolkien. The palantíri are the “seeing stones” from The Lord of the Rings. And that’s what Palantir believes its technology is: the “seeing stones” of Big Data.
A big part of the work Palantir does is rooted in their mission: to make sense of Big Data whilst maintaining civil liberties. As Palantir describes it:
“a core component of (our) mission is protecting our fundamental rights to privacy and civil liberties. Since its inception, Palantir has invested its intellectual and financial capital in engineering technology that can be used to solve the world’s hardest problems while simultaneously protecting individual liberty.”
So what does protecting civil liberties mean in practice? That’s the tricky part of Palantir’s technology. They can tag and screen data at the source. This means they can reveal or hide data based on different authority levels.
A good example would be a medical researcher with a huge database of DNA information. They would overlay Palantir’s software on the database to find connections, links, and patterns.
If the police wanted to use the database to link crimes to particular DNA matches, they could… but they could not use that information against a linked person.
The data get a tag to say they’ve been obtained by the medical institution, not the police. So the linked person’s identity remains undisclosed.
Palantir software is world-leading. It carefully balances the smarter use of data and the protection of privacy and civil liberties. That’s important, as we must trust that our data are used for purposes of good, not evil.
Another example is the 2012 London Olympics. London police used an app and location services to create “heat maps” of crowds around the city. This helped monitor gatherings and control traffic flow.
They did this by sending out alerts and updates to app users. It created smoother flow of foot traffic, and avoided crushes at events and overcrowding at tube stations.
Sadly, the recent bombing in Boston provides us with another example. This is likely to be the largest investigation ever to use “crowd-sourced” data. The collection of images and data from the Internet is on a scale never seen before.
Google launched Person Finder again this week for the Boston crisis. And you can safely say Palantir’s software is running at full steam. No doubt the FBI is using it to help track down the people who committed these terrible crimes. [Ed. Note: This article was written before the eventual death and capture of the two Boston bombing suspects.]
We will continue to create more data whether we like it or not. As we connect our digital lives with our day-to-day living, more data will flow from our activities to databases around the world.
We’re inadvertently creating an information overload. And that’s not a bad thing.
Google and Palantir are just two examples of companies that work to ensure we use information properly. In a way that helps us. Now, that has to be a good thing in times of crisis.
This article originally appeared in The Daily Reckoning.