Before 2010, managing data was a relatively simple chore: Online transaction processing systems supported the enterprise’s business processes, operational data stores accumulated the business transactions to support operational reporting, and enterprise data warehouses accumulated and transformed business transactions to support both operational and strategic decision making.
The typical enterprise now experiences data growth of 40 to 60 percent annually, which increases both financial burden and data management complexity. That situation seems to imply that data are becoming less an asset than a liability for many businesses, a low-value commodity.
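To make that growth rate concrete, here is a small back-of-the-envelope sketch (the 100 TB starting footprint is an assumption for illustration, not a figure from the text): at 40 to 60 percent compound annual growth, a data footprint roughly doubles every one and a half to two years.

```python
# Hypothetical figures: compound a 100 TB footprint at 40-60% per year.
for rate in (0.40, 0.50, 0.60):
    size = 100.0  # starting footprint in terabytes (an assumption)
    for _ in range(5):
        size *= 1 + rate
    print(f"At {rate:.0%} annual growth, 100 TB becomes ~{size:,.0f} TB in 5 years")
```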
Nothing could be further from the truth. More data mean more value, and countless companies have proved that axiom with Big Data analytics. To see that value, one need look no further than how vertical markets are leveraging Big Data analytics to drive disruptive change.
For example, smaller retailers are collecting click-stream data from web site interactions and loyalty card data from traditional retailing operations. Retailers have traditionally used this point-of-sale information for shopping basket analysis and stock replenishment, but many are now going a step further and mining the data for customer buying analysis. Those retailers then share the data (after normalization and identity scrubbing, a step sketched in the example below) with suppliers and warehouses to bring added efficiency to the supply chain.
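Here is a minimal Python sketch of that scrubbing-and-sharing step. The salt, field names, and sample transactions are illustrative assumptions, not any retailer's actual schema, and the pair counts stand in for the far richer shopping basket analysis described above.

```python
import hashlib
from collections import Counter
from itertools import combinations

SALT = "example-salt"  # assumption: a secret value kept by the retailer

def scrub_id(loyalty_id: str) -> str:
    """Replace a loyalty-card ID with a one-way salted hash."""
    return hashlib.sha256((SALT + loyalty_id).encode()).hexdigest()[:12]

# Hypothetical point-of-sale records: (loyalty ID, products in the basket).
transactions = [
    ("CARD-1001", ["bread", "milk", "eggs"]),
    ("CARD-1002", ["bread", "butter"]),
    ("CARD-1001", ["milk", "butter", "eggs"]),
]

# Identity-scrubbed copy: safe to share with suppliers and warehouses.
scrubbed = [(scrub_id(cid), basket) for cid, basket in transactions]

# A toy shopping basket analysis: count how often product pairs co-occur.
pair_counts = Counter()
for _, basket in scrubbed:
    for pair in combinations(sorted(set(basket)), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common(3):
    print(f"{a} + {b}: bought together {n} time(s)")
```

One-way hashing lets downstream partners correlate repeat purchases without ever seeing the original loyalty-card numbers.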
Another example of finding value comes from the world of science, where large-scale experiments create massive amounts of data for analysis. Big science is now paired with Big Data, and that pairing has far-reaching implications: it is helping to redefine how data are stored, mined, and analyzed. Large-scale experiments generate more data than can be held at a single lab's data center (e.g., the Large Hadron Collider at CERN generates over 15 petabytes of data per year), which in turn requires that the data be immediately transferred to other laboratories for processing, a true model of distributed analysis and processing.
Other scientific quests are prime examples of Big Data in action, fueling a disruptive change in how experiments are performed and how data are interpreted. Thanks to Big Data methodologies, continental-scale experiments have become both politically and technologically feasible (e.g., the Ocean Observatories Initiative, the National Ecological Observatory Network, and USArray, a continental-scale seismic observatory).
Much of the disruption is fed by improved instrument and sensor technology; for instance, the Large Synoptic Survey Telescope has a 3.2-gigapixel camera and generates over 6 petabytes of image data per year. It is the Big Data platform that makes such lofty goals attainable.
Advances in science further validate Big Data analytics. One biomedical corporation recently announced that it has reduced the time it takes to sequence a genome from years to days, and it has cut costs to the point where sequencing an individual's genome for $1,000 is feasible, paving the way for improved diagnostics and personalized medicine.
The financial sector has also seen how Big Data and its associated analytics can disrupt business. Financial services firms are handling larger data volumes, driven by smaller trade sizes, increased market volatility, and technological advances in automated and algorithmic trading.
Taken from: Big Data Analytics: Turning Big Data into Big Money