5 THINGS PRIVACY EXPERTS WANT YOU TO KNOW ABOUT WEARABLES


There’s been a lot of news lately in the health and fitness wearables space. Apple just announced it’s releasing an app called “Health,” as well as a cloud-based platform, “HealthKit.” Somewhat related, Nike recently pulled the plug on its activity-tracking FuelBand. The conventional wisdom is that fitness trackers are on the decline, while the wearables market in general (think Google Glass and the upcoming iWatch) is still waiting for its defining moment.

And on the privacy front? There’s been a lot of movement there as well, and the FTC is all over it. It recently hosted a “Consumer Generated and Controlled Health Data” event, and all the speakers (the FTC Commissioner, technologists, attorneys, privacy experts) agreed that the potential of health-based wearables is huge, but that health data is so sensitive it needs special protection.

I’ve distilled their privacy wisdom into five key things privacy experts want you to know about health data, the data your wearables generate, your privacy, and why it’s so hard to create one law that protects it all.

1. Transparency and trust are essential

If health and fitness wearable makers create privacy policies that are ambiguous and don’t require consumer consent for data sharing, they may limit the benefits of these services for many people, especially the privacy conscious. Why upload your health data when there’s no guarantee it will be kept private?

Some experts suggest short, clear-cut notices about the safety and protection of your data—something akin to a data nutrition label.

[Image: a data nutrition label]

2. Your health data gets around

Latanya Sweeney, the FTC Chief Technologist and Professor of Government and Technology at Harvard University, attempted to document and map all the flows of data between patients, hospitals, insurance companies, and so on. She found that it’s often unclear where the data goes and difficult to know all the places it might wind up.

Inspired by Sweeney, I checked whether some healthcare data finds its way outside the medical ecosystem. It does! The recent FTC report on the data broker industry (see Appendix B) shows that brokers collect some sensitive patient data points.

[Image: map of health data flows. Source: DataMap]

3. Discharge data in disarray

Information about your hospital visit is known as discharge data. State law requires that it be sent to whichever entity the law designates to receive it.

What do states do with your discharge data? It turns out that 33 states sell or share it, and only three of those states do so in a HIPAA-compliant way.

[Image: map of state discharge data practices. Source: DataMap]

4. Geolocation is not to be overlooked

One very important privacy matter raised at the FTC event was geolocation. Many health and fitness apps and wearables mine data about your running routes or when you’re at the gym. Some apps may also be able to predict where you’re going to be at a certain time, or when you’re not home.

[Image: geolocation predictions]
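How easy is that kind of prediction? Quite easy: given nothing more than timestamped location points, a few lines of code can guess where someone will be at a given hour. Here’s a minimal sketch (the sample data and the weekday/hour bucketing are mine, purely for illustration):

```python
from collections import Counter, defaultdict

# Timestamped check-ins as (weekday, hour, place); invented sample data.
history = [
    (0, 7, "gym"), (0, 9, "office"), (1, 7, "gym"),
    (1, 9, "office"), (2, 7, "home"), (2, 9, "office"),
]

by_slot = defaultdict(Counter)
for weekday, hour, place in history:
    by_slot[(weekday, hour)][place] += 1  # count visits per time slot

def predict(weekday, hour):
    """Most frequently seen place for this weekday/hour, if any."""
    slot = by_slot.get((weekday, hour))
    return slot.most_common(1)[0][0] if slot else None

print(predict(1, 7))  # 'gym' -- and, by implication, not at home
```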

5. There’s no free lunch

In exchange for a freemium health and fitness app, you are sharing A LOT of data. That’s not unusual in the free-app world, but medical data is not the same as your list of favorite movies.

Some users might trust the maker of their healthcare app or device (say, Nike) but not realize that by using the product they’re consenting to having their health information sold and resold to third parties that may not be as trustworthy.

Jared Ho, an attorney in the FTC’s Mobile Technology Unit, tested 12 health and fitness apps and found that his data was sent to the developers’ websites as well as to 76 third parties, mostly advertising and analytics organizations.

Here’s what he found:

  1. 18 of the 76 third parties collected device identifiers, such as unique device IDs.
  2. 14 of the 76 third parties collected consumer-specific identifiers, such as username, name, and email address.
  3. 22 of the 76 third parties received information about the consumers, such as exercise information, meal and diet information, medical symptoms, zip code, gender, and geolocation.

[Image: how wearable data travels]

No one can predict what will happen in the wearables market, but emerging business practices and technologies will inform and shape consumer privacy regulation, which remains a very hot topic. Questions such as who will, and who should, have access to one’s personal health data will no doubt remain part of the discussion.


10 Reasons Why You Should Care About iBeacon


As you may already know, iBeacon was announced and released by Apple as part of iOS 7, with the primary purpose of enhancing location services and connecting the physical world with the digital one. All 250+ Apple retail stores in the US have rolled out iBeacon and begun sending targeted messages to consumers as they enter and move around the stores (it’s pretty cool; I’ve tried it). Interestingly, iBeacon has been gaining momentum in the news over the past few months as a number of retailers and brands, ranging from fast food chains and sports stadiums to major banks, have begun to market-test the technology, with no hint of it slowing down. As adoption grows, it’s important to understand the technology’s impact and what lies ahead for both consumers and marketers.
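Under the hood, an iBeacon is simply a Bluetooth Low Energy advertisement carrying an Apple-defined payload: a 16-byte proximity UUID identifying the deployment, 2-byte major and minor values identifying the individual beacon, and a calibrated TX-power byte used to estimate distance. As a rough illustration, here’s how that manufacturer data can be decoded in Python (the example frame and UUID below are made up):

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Decode the manufacturer-specific data of an iBeacon advertisement.

    Layout: Apple company ID 0x004C (little-endian), beacon type 0x02,
    payload length 0x15, a 16-byte proximity UUID, big-endian major and
    minor values, and a signed TX-power byte (measured RSSI at 1 meter).
    """
    if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    (tx_power,) = struct.unpack("b", mfg_data[24:25])
    return proximity_uuid, major, minor, tx_power

# Hypothetical advertisement payload, for illustration only.
frame = bytes.fromhex(
    "4c000215"                          # Apple ID + iBeacon type/length
    "f7826da64fa24e988024bc5b71e0893e"  # proximity UUID (made up)
    "0001" "0002" "c5"                  # major=1, minor=2, TX power=-59 dBm
)
print(parse_ibeacon(frame))
```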

Below are 10 reasons why you should care about iBeacon:

  1. Consumers increasingly expect seamless digital-to-physical experiences, and this will only grow: Safeway, MLB stadiums, Starwood Hotels, and even private toll roads (just to name a few) are already rolling out their own iBeacon-supported integrations.
  2. Both Apple and PayPal are exploring payments using BLE and encouraging developers to build apps that leverage the technology (PayPal Beacon and Apple Passbook).
  3. An estimated 110 million+ iPhone 4S/5/5c units, all with the BLE support required for iBeacon, have been sold around the world.
  4. About 20% of Android phones globally run Android 4.3 or later, which adds the BLE support needed for beacon detection.
  5. The beacons themselves use so little energy that they last an average of 3+ years and cost only about $100 for a set of three.
  6. Retail foot traffic across the US is down almost 50% over the past three years and is expected to decline further.
  7. Digital sales are growing at the expense of offline sales: e-commerce is projected to hit $1.2 trillion this year.
  8. 68% of smartphone owners have enabled push notifications for apps they’ve installed.
  9. Roughly 50% of consumers download a brand’s app in order to receive, or expecting to receive, exclusive coupons or sales offers.
  10. Competing and custom beacon alternatives supporting both Android and iPhone are already being introduced by numerous companies, and this industry is projected to grow even further in 2014.

 

Thanks to http://moesheikha.com/2014/04/28/top-10-reasons-why-you-should-care-about-ibeacon-technology/

 

WHAT DOES BIG DATA LOOK LIKE OVER TIME?


Last month at Gigaom Structure Data, there was an interesting panel of big data experts sharing innovative ways they’re using millions, if not billions and even trillions, of data points to build new products and services. It got me thinking about just how huge the volume of data we’re able to collect and store today is. In a single day on the Internet, PayPal processes over $315 million in transactions [1], Facebook takes in about 350 million uploaded images [2], and Twitter blasts out 400 million tweets from its users [3]. Multiply these numbers by 365, account for some data for each interaction, and you’re in the zettabyte range and beyond for total yearly storage. And that’s not even accounting for extra metadata!

Visualizing big data in dollars

When we talk about big data, just how big are the numbers? Let’s try visualizing dollars, a common measure of financial success. It might be relatively easy to imagine hundreds, thousands, and even millions, but what about billions? Or trillions? One trillion one-dollar bills stacked up reaches 68,000 miles into space, roughly a third of the way to the moon. In 2013, we made over 2 trillion searches [4] on Google, so if those searches were one-dollar bills, we would be two-thirds of the way to the moon.

Let’s scale down time from a year to a day…

Last year, when Twitter turned 7 years old, over 200 million active users were sending 400 million tweets per day. If you had 400 million dollars and spent $1,000 every day, it would take you 1,096 years to spend all the money.

($400,000,000 ÷ $1,000 per day) = 400,000 days ÷ 365 days per year ≈ 1,096 years

…to an hour

Facebook users share over 41 million pieces of content every hour [5]. If you had 41 million dollars and spent $1,000 every day, it would take you 112 years to spend all the money.

($41,000,000 ÷ $1,000 per day) = 41,000 days ÷ 365 days per year ≈ 112 years

…to a minute

Google receives over 2 million search queries a minute [6]. If you had 2 million dollars and spent $1,000 every day, it would take you about five and a half years to spend all the money.

($2,000,000 ÷ $1,000 per day) = 2,000 days ÷ 365 days per year ≈ 5.5 years
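Those back-of-the-envelope figures are easy to sanity-check; here’s the same arithmetic in a few lines of Python (the helper name is mine):

```python
DAYS_PER_YEAR = 365

def years_to_spend(total_dollars: float, per_day: float = 1_000) -> float:
    """Years needed to spend `total_dollars` at `per_day` dollars per day."""
    return total_dollars / per_day / DAYS_PER_YEAR

for label, amount in [("Twitter tweets/day", 400_000_000),
                      ("Facebook content/hour", 41_000_000),
                      ("Google searches/minute", 2_000_000)]:
    print(f"{label}: {years_to_spend(amount):,.1f} years")
# Twitter tweets/day: 1,095.9 years
# Facebook content/hour: 112.3 years
# Google searches/minute: 5.5 years
```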
Visualizing big data over time

Now that we know just how big the numbers are when we talk about big data, let’s visualize what big data looks like over a period of time, as it might reveal new trends. I took three organizations (PayPal, Facebook, and Twitter) and charted the dollars processed, photos uploaded, and tweets amassed per year. Here are the results:

[Chart: big data over time]

Big data gets even bigger after 2009

PayPal, founded in 1998, had a steady ten-year rise, but notice the steep line from 2009 to 2011, when dollars processed increased by 60%. Facebook’s steep line, representing the number of photos uploaded from 2011 to 2013, marks a 73% increase. Twitter’s big data growth was even more extreme or, I should say, exponential: in 2009 there were about 10 billion tweets, and the count has more than doubled every year since, reaching the 120 billion mark in 2012. Perhaps this spurt was pushed by Oprah’s blessing when she joined Twitter in 2009, but more likely certain network effects were taking over.
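A quick way to put numbers on those growth curves is the average annual multiplier implied by the standard compound-growth formula; a sketch using the Twitter figures above (the helper name is mine):

```python
def annual_growth_factor(start: float, end: float, years: int) -> float:
    """Average per-year multiplier implied by compound growth."""
    return (end / start) ** (1 / years)

# Twitter: ~10 billion tweets in 2009 -> ~120 billion in 2012 (3 years).
print(annual_growth_factor(10e9, 120e9, 3))  # ~2.29, i.e. more than doubling yearly
```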

Dollars processed, photos uploaded, and tweets are not remotely comparable, but what I’m attempting to demonstrate is that big data got really, really big over the past couple of years, with a pure unstructured-data broadcasting service, Twitter, leading the charge.

In fact, if this trend continues, PayPal, Facebook, and Twitter’s big data numbers will get them over halfway to the moon, just like Google searches. And perhaps we need a new benchmark for total data, e.g., “we’ve reached 10 Twitters of storage”?

So why is big data significant?

Amassing data itself isn’t inherently valuable. What’s significant is the metadata that accompanies big data, enabling Facebook to suggest pages to like or Twitter to recommend people or organizations to follow. These suggestions are powerful in that they provide a positive user experience, yield insights into nearly anything you can imagine, and, most importantly, can form the basis of a revenue-generating business model.

In a future post, we’ll explore the type of metadata these platforms might have and the innovation that is possible as a result. Stay tuned!

Sources:

[1] http://techcrunch.com/2011/09/25/paypal-now-processing-315-million-in-payments-per-day/
[2] http://www.businessinsider.com/facebook-350-million-photos-each-day-2013-9
[3] http://www.washingtonpost.com/business/technology/twitter-turns-7-users-send-over-400-million-tweets-per-day/2013/03/21/2925ef60-9222-11e2-bdea-e32ad90da239_story.html
[4] http://www.statisticbrain.com/google-searches/
[5] http://mashable.com/2012/06/22/data-created-every-minute/
[6] http://mashable.com/2012/06/22/data-created-every-minute/

Chart Sources:

http://en.wikipedia.org/wiki/Paypal

EXPLORE THE CONTENTS OF AN ENTIRE SHOPPING MALL LIKE A CONNECTED CLOSET [VIDEO]


An interesting video about the ‘Inspiration Corridor,’ which helps you choose products by mixing the ease of online shopping with the fun of shopping in person.

Created by DigitasLBi Paris and the real estate investment company Klépierre, the Inspiration Corridor is a large booth that offers users a personalized digital shopping experience. Shoppers step into the booth and undergo a complete body scan, with the Inspiration Corridor taking note of their age, sex, and current ensemble. From there, it brings up digital displays of items located in that mall that they might be interested in and could potentially buy on the spot.

Thanks to http://www.psfk.com/2014/04/inspiration-corridor-online-offline-shopping.html#!FjKrm

Your Big Data Is Worthless if You Don’t Bring It Into the Real World


In a generation, the relationship between the “tech genius” and society has been transformed: from shut-in to savior, from antisocial to society’s best hope. Many now seem convinced that the best way to make sense of our world is by sitting behind a screen analyzing the vast troves of information we call “big data.”

Just look at Google Flu Trends. When it was launched in 2008, many in Silicon Valley touted it as yet another sign that big data would soon make conventional analytics obsolete.

But they were wrong.


Not only did Google Flu Trends largely fail to provide an accurate picture of the spread of influenza, it will never live up to the dreams of the big-data evangelists. Because big data is nothing without “thick data,” the rich and contextualized information you gather only by getting up from the computer and venturing out into the real world. Computer nerds were once ridiculed for their social ineptitude and told to “get out more.” The truth is, if big data’s biggest believers actually want to understand the world they are helping to shape, they really need to do just that.

It Is Not About Fixing the Algorithm

The dream of Google Flu Trends was that by identifying the words people tend to search for during flu season, and then tracking when those same words peaked in real time, Google would be able to alert us to new flu pandemics much faster than the official CDC statistics, which generally lag by about two weeks.
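Mechanically, that amounted to regressing historical flu incidence on the frequencies of flu-correlated search terms, then “nowcasting” current incidence from current search volumes. Here’s a minimal sketch of that kind of model, with synthetic data and made-up variable names (the real system screened millions of candidate queries and used far more careful validation):

```python
import numpy as np

# Toy nowcasting model in the spirit of Google Flu Trends:
# fit historical flu incidence against search-term frequencies,
# then predict this week's incidence from this week's searches.
rng = np.random.default_rng(0)

weeks = 104
search_freqs = rng.random((weeks, 3))   # weekly frequencies of 3 flu-related queries
flu_rate = search_freqs @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.02, weeks)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(weeks), search_freqs])
coef, *_ = np.linalg.lstsq(X, flu_rate, rcond=None)

this_week = np.array([1.0, 0.8, 0.5, 0.2])  # intercept + current query frequencies
print("nowcast flu rate:", this_week @ coef)
```

The pitfall the article goes on to describe falls straight out of this setup: if a query correlates with winter rather than with flu, the regression happily weights it anyway.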


For many, Google Flu Trends became the poster child for the power of big data. In their best-selling book Big Data: A Revolution That Will Transform How We Live, Work and Think, Viktor Mayer-Schönberger and Kenneth Cukier claimed that Google Flu Trends was “a more useful and timely indicator [of flu] than government statistics with their natural reporting lags.” Why even bother checking the actual statistics of people getting sick, when we know what correlates with sickness? “Causality,” they wrote, “won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning.”

But, as an article in Science earlier this month made clear, Google Flu Trends has systematically overestimated the prevalence of flu every single week since August 2011.

And back in 2009, shortly after launch, it completely missed the swine flu pandemic. It turns out many of the words people search for during flu season have nothing to do with flu, and everything to do with the time of year flu season usually falls: winter.

Now, it is easy to argue, as many have done, that the failure of Google Flu Trends simply speaks to the immaturity of big data. But that misses the point. Sure, tweaking the algorithms and improving data collection techniques will likely make the next generation of big data tools more effective. But the real big data hubris is not that we have too much confidence in a set of algorithms and methods that aren’t quite there yet. Rather, the issue is the blind belief that sitting behind a computer screen crunching numbers will ever be enough to understand the full extent of the world around us.

Why Big Data Needs Thick Data

Big data is really just a big collection of what people in the humanities would call thin data. Thin data is the sort of data you get when you look at the traces of our actions and behaviors. We travel this much every day; we search for that on the Internet; we sleep this many hours; we have so many connections; we listen to this type of music, and so forth. It’s the data gathered by the cookies in your browser, the FitBit on your wrist, or the GPS in your phone. These properties of human behavior are undoubtedly important, but they are not the whole story.

To really understand people, we must also understand the qualitative aspects of our experience, what anthropologists refer to as thick data. Thick data captures not just facts but the context of facts. Eighty-six percent of households in America drink more than six quarts of milk per week, for example, but why do they drink milk? And what is it like? A piece of fabric with stars and stripes in three colors is thin data. An American flag blowing proudly in the wind is thick data.


Rather than seeking to understand us simply based on what we do as in the case of big data, thick data seeks to understand us in terms of how we relate to the many different worlds we inhabit. Only by understanding our worlds can anyone really understand “the world” as a whole, which is precisely what companies like Google and Facebook say they want to do.

Knowing the World Through Ones and Zeroes

Consider for a moment the grandiosity of some of the claims being made in Silicon Valley right now. Google’s mission statement is famously to “organize the world’s information and make it universally accessible and useful.” Mark Zuckerberg recently told investors that, along with prioritizing increased connectivity across the globe and emphasizing a knowledge economy, Facebook was committed to a new vision called “understanding the world.” He described what this “understanding” would soon look like: “Every day, people post billions of pieces of content and connections into the graph [Facebook’s algorithmic search mechanism] and in doing this, they’re helping to build the clearest model of everything there is to know in the world.” Even smaller companies share in the pursuit of understanding. Last year, Jeremiah Robison, the VP of Software at Jawbone, explained that the goal of their fitness-tracking device, the Jawbone UP, was “to understand the science of behavior change.”

These goals are as big as the data that is supposed to achieve them. And it is no wonder that businesses yearn for a better understanding of society. After all, information about customer behavior and culture at large is not only essential to making sure you stay relevant as a company, it is also increasingly a currency that in the knowledge economy can be traded for clicks, views, advertising dollars or, simply, power. If, in the process, businesses like Google and Facebook can contribute to growing our collective knowledge about ourselves, all the more power to them. The issue is that by claiming that computers will ever organize all our data, or provide us with a full understanding of the flu, or fitness, or social connections, or anything else for that matter, they radically reduce what data and understanding mean.


If the big data evangelists of Silicon Valley really want to “understand the world,” they need to capture both its (big) quantities and its (thick) qualities. Unfortunately, gathering the latter requires that instead of just ‘seeing the world through Google Glass’ (or, in the case of Facebook, Virtual Reality) they leave the computers behind and experience the world firsthand. There are two key reasons why.

To Understand People, You Need to Understand Their Context

Thin data is most useful when you have a high degree of familiarity with an area, and thus have the ability to fill in the gaps and imagine why people might have behaved or reacted like they did — when you can imagine and reconstruct the context within which the observed behavior makes sense. Without knowing the context, it is impossible to infer any kind of causality and understand why people do what they do.

This is why, in scientific experiments, researchers go to great lengths to control the context of the laboratory environment: to create an artificial place where all influences can be accounted for. But the real world is not a lab. The only way to make sure you understand the context of an unfamiliar world is to be physically present yourself, to observe, internalize, and interpret everything that is going on.

Most of ‘the World’ Is Background Knowledge We Are Not Aware of

If big data excels at measuring actions, it fails at understanding people’s background knowledge of everyday things. How do I know how much toothpaste to use on my toothbrush, or when to merge into a traffic lane, or that a wink means “this is funny” and not “I have something stuck in my eye”? These are the internalized skills, automatic behaviors, and implicit understandings that govern most of what we do. It is a background of knowledge that is invisible to ourselves as well as those around us unless they are actively looking. Yet it has tremendous impact on why individuals behave as they do. It explains how things are relevant and meaningful to us.

The human and social sciences contain a large array of methods for capturing and making sense of people, their context, and their background knowledge, and they all have one thing in common: they require that the researchers immerse themselves in the messy reality of real life.

No single tool is likely to provide a silver bullet to human understanding. Despite the many wonderful innovations developed in Silicon Valley, there are limits to what we should expect from any digital technology. The real lesson of Google Flu Trends is that it simply isn’t enough to ask how ‘big’ the data is: we also need to ask how ‘thick’ it is.

Sometimes, it is just better to be there in real life. Sometimes, we have to leave the computer behind.

Editor: Emily Dreyfuss

Source: http://www.wired.com/2014/04/your-big-data-is-worthless-if-you-dont-bring-it-into-the-real-world/

Infographic: The who, why and how of Big Data



Thanks to http://www.forbes.com/sites/baininsights/2013/10/29/infographic-the-who-why-and-how-of-big-data/

The Future of Wearable Tech: Trends and Major Themes


The Future of Wearable Tech report identifies 10 trends and three major themes that point to the evolving form and function of wearable devices and their influence on the way we live, work and socialise. Enjoy!

The Evolution of Big Data Processing at Google (Video)


Google’s Daniel Sturman speaking at the IEEE Computer Society’s “Rock Stars of Big Data” event, October 29, 2013. A great video, well worth watching.

2014 Big Data Predictions from IDC and IIA – Time for the Crystal Ball


Both IDC and The International Institute of Analytics (IIA) discussed their big data and analytics predictions for 2014 in separate webcasts last week. Here is my summary of their predictions plus a few nuggets from other sources.

IDC predicts that the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall IT market. IDC includes in this figure infrastructure (servers, storage, etc., the largest and fastest-growing segment at 45% of the market), services (29%), and software (24%). IDC commented that the benefits of big data are not always clear today (indeed, BNY Mellon recently asked its 50,000 employees “for ideas about how to harness the power of Big Data”). IIA predicted that companies will want to see demonstrable value in 2014 and will focus on embedding big data analytics in business processes to drive process improvement.

The much-discussed shortage of analytics and data science talent led IIA to make three separate but related predictions. One prediction is that the adoption of analytics-as-a-service will accelerate with “ready-made analytics in the cloud” offering an attractive option for quickly testing big data analytics or scaling up existing programs. Similarly, Capgemini predicts (in an email to me) “smaller, nimble analytics,” as a result of the rise of machine-to-machine data, “making cloud the de facto solution.” And IDC predicts that cloud infrastructure will be the fastest-growing sub-segment of the big data market, with a 2013-2017 CAGR of close to 50%.

Another IIA prediction related to the dearth of talent is the increasing attention companies will pay to organizing the analysts and data scientists they currently have on board into teams, either embedded in the business units or in a center of excellence. The focus will be on making these teams more effective by establishing and sharing best practices and by “operationalizing and managing models,” with the rest of the world getting closer to the proficiency level of the financial industry (in other words, keeping up with the quants? hopefully, also learning from the financial industry’s failures in this regard: see the financial crisis, 2008 edition).

As for the prospects for alleviating the talent shortage, IIA commented that there are now well over 100 programs at universities in the US where analytics and data science “are in focus” (see my list of graduate programs here). IDC, for its part, cautioned that these programs “will bear fruit only in four to five years,” referring obviously to the newly-established data science programs. IDC agrees with IIA that companies providing big data analytics services will fill the gap in the meantime and predicts that the big data professional services market will exceed $4.5 billion in 2014. The number of vendors providing such services will triple over the next three years, according to IDC, and these firms will “aggressively acquire scarce big data talent,” making it scarcer.

A very interesting dimension of the talent shortage raised by IDC is the lack of IT professionals capable of dealing with the new big data requirements. In an IDC and Computerworld survey earlier this year, 33% of respondents cited a “lack of sufficiently skilled big data and analytics IT staff” as one of their big data challenges (“lack of sufficient number of staff with appropriate analytics skills” was selected by 45% of respondents).

Also interesting was IDC’s expansion of the services part of the market to include “value added content providers.” These include “traditional vendors” such as Thomson, LexisNexis, and Experian; “new wave vendors” such as DataSift, Gnip, and LinkedIn; “company and personal information vendors” such as Acxiom, Equifax, and Tarsus; and “search engine/aggregators” such as Yahoo, Google, and Salesforce/Data.com. IDC believes that this market segment will be “challenged by lack of business model clarity and standards.”

A related prediction from IDC is that VC investment will shift to the top layers of the big data software stack, from information management to the “analytics & discovery” and “applications” layers. New types of applications (“use cases”), such as personalized medicine, will emerge out of what IDC predicts will be the blurring of the boundaries between high-performance computing (previously limited to scientific/engineering applications) and “enterprise big data” (i.e., mainstream applications managed by an IT department). IIA sees other new horizons for the application of big data, predicting that companies in a variety of industries will increasingly use analytics on the data they have accumulated to develop new products and services. GE has been the poster boy for this emerging trend, called “Analytics 3.0” by IIA, or “the digitization of everything” by me (you decide).

Another application, security, will become the next big front for big data, IDC predicts, as security infrastructure will increasingly take on big data-like attributes. Big data will be used to correlate log data and identify malicious activity in real time, allowing companies to react quickly, rather than after the event. Gartner begs to differ, however, predicting that “big data technology in security contexts will stay immature, expensive and difficult to manage for most organizations as targeted attacks become more stealthy and complex to identify in progress. … The noise about big data for security has grown deafening in the industry, but the reality lags far, far behind.”
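To make the log-correlation idea concrete, here’s a minimal sliding-window sketch of the kind of rule such a pipeline might evaluate in real time (the event schema, window, and threshold are all invented for illustration):

```python
from collections import defaultdict, deque

WINDOW_SECS = 60   # look-back window (assumed)
THRESHOLD = 5      # failed logins per window that trigger an alert (assumed)

recent_failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def observe(event):
    """Ingest one log event; return True if its source IP looks malicious.

    Expects events like {"ts": 1000, "ip": "203.0.113.7", "action": "login_fail"}
    (a made-up schema for illustration).
    """
    if event["action"] != "login_fail":
        return False
    q = recent_failures[event["ip"]]
    q.append(event["ts"])
    while q and q[0] < event["ts"] - WINDOW_SECS:
        q.popleft()  # drop failures that fell outside the window
    return len(q) >= THRESHOLD

# The same IP failing five times within a minute trips the rule.
for t in range(5):
    alert = observe({"ts": 1000 + t, "ip": "203.0.113.7", "action": "login_fail"})
print("alert:", alert)  # True
```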

In a somewhat far-out prediction, IIA talked about facial recognition and wearable device data being incorporated into predictive analytics. One of the examples given was that “pet stores could use facial recognition to greet dogs as well as customers.” IDC was a bit closer to 2014 (or was it?) when it predicted that the “proliferation of sensor, mobile, wearable, and embedded devices (Internet of Things) will become a significant driver of the big data market,” stressing the need for investment in “Data-in-Motion” and “real-time analysis of geo-dispersed incoming data streams,” primarily in the cloud. (That you don’t need wearables or geo-whatever to satisfy your obsession with quantifying your life was recently demonstrated by the resident data scientist at MarkITx, who crunched his lunches to come up with a happiness-per-gram metric.)

Both IDC and IIA got a bit more into the technologies behind big data analytics, with IDC predicting the co-habitation for the foreseeable future (my words) of traditional database technology (RDBMS) with the newer Hadoop ecosystem and NoSQL databases, concluding that “in the short term,” information management will become more complex for most organizations (see the shortage of qualified IT staff above); and IIA predicting that “the adoption of data visualization will accelerate in both the high and low ends of the complexity spectrum [for analytics].” Humans, however, don’t comprehend things in more than two or, at most, three dimensions, so IIA advised tempering our enthusiasm “a bit” (this came from self-described Tom “Curmudgeon” Davenport, so you may want to consider how much tempering you want to do; as for me, I always opt for being “uber-curmudgeon”).

Last but certainly not least, both IDC and IIA talked about automation in the context of big data. IDC predicts that “decision and automation solutions, utilizing a mix of cognitive computing, rules management, analytics, biometrics, rich media recognition software and commercialized high-performance computing infrastructure [phew!], will proliferate.” Some of these solutions, IDC says (warns?), “will begin to replace or significantly impact knowledge worker roles.” IIA predicts that “we will see a continued move to machine learning and automation to keep pace with speed and volume of data” and that “as they strive to operationalize analytics but encounter challenges with the over-automation of decisions, companies will focus more on the optimal mix between human and machine capability and judgment.” If you take humans too far out of the equation, their decision making will atrophy, warned IIA, asking “If you don’t have experts, who will train the next generation of [machine learning] software?” From the IIA’s lips to the NSA’s ears, I say. (Well, we can assume these words were collected and stored by the omnipresent sleuths the second they were uttered; the question is: do they understand what the words mean?)

One prediction that didn’t make the official list of IIA’s predictions, but Davenport nevertheless managed to include in the webcast, was that “companies will need to hire lawyers to verify that they actually own the data.” Indeed, the nagging issues—that I think will be even more prominent in 2014—of privacy and governance were largely missing from the IDC and IIA discussions (Capgemini, in contrast, contributed this: “A heightened focus on governance will improve analytic results… Governance will need to be a driver in shaping the ROI story for Big Data in 2014”).  Also missing were discussions of “open data” and the increased use of big data by the public sector (outside of the NSA) to name just a few pertinent big data trends not on their list of predictions. But of course, the challenge is to select the nine or ten most important ones and we have lots to chew on with IDC’s and IIA’s lists.

Listeners to the IIA webcast were given the opportunity to vote on which predictions they thought would come true:

[Image: IIA webcast poll results]

Participants in the IIA webcast included Sarah Gates, Tom Davenport, Bob Morison, Bill Franks, Greta Roberts, Omer Sohail and Sanjeev Kumar; IDC’s webcast was delivered by Dan Vesset and Ashish Nadkarni; Capgemini’s predictions were attributed to SVP for Business Information Management Scott Schlesinger.

[Originally published on Forbes.com]