The Future of Big Data: the Answer is Not “42”

Teradata Employee

“42” is the Answer to the Ultimate Question of Life, the Universe, and Everything.  But for Big Data, the answer is simply “everything.” 

As our gadgets and sensors proliferate, the range of human activity we can monitor and analyze keeps growing.  First we monitored and captured business transactions:  sales, inventory, computer and telco networks, and so on.  With the advent of social media, we began monitoring social interactions online:  tweets, likes, posts, texts, email.  And pictures, with their proverbial 1,000 words, yield a vast amount of data you may never have intended to state directly.

With mobile devices that report location, retailers and insurance providers can sell and price based on where you are.  And shoppers who are “show rooming” and comparison shopping on those devices are not shy about letting organizations know their whereabouts.  Businesses do not even need a gadget to understand your interests:  the eyes are the window to the soul.  And a license plate is sufficient for the insurers.

Now, with the addition of “wearables” (physical devices that report on body location and activity), we can monitor and report on practically all human activity:  physical activity (steps) and the condition of the body (heart, lungs, temperature, blood).  And with advances in noninvasive devices that interface with the mind by scanning the brain’s electrical activity, already in use for games and prosthetics, capturing and analyzing thought patterns is not far in the future.

Remember when capturing and analyzing business transactions was a challenge?  Now consider all this activity.  The pipes required simply to capture the data are huge – petabytes per day huge for some organizations – let alone the infrastructure needed to actually run analytics against the data once it has been filtered and landed.

What is the answer to building an architecture and support organization to handle this data monster?  It is not going to be as simple as “42.”