By Lou Washington

About 15 years ago, I was listening to a fellow on the radio spouting off about the end of knowledge; more accurately, the end of new knowledge. He was predicting a new “Dark Age,” and he placed the blame for this impending catastrophe squarely on the internet.

His prediction was that the volume of information made readily available to the masses worldwide would somehow extinguish original research. The suggestion was that the internet would provide enough answers to enough questions that ongoing research would no longer be necessary. The line between real research and search-engine queries was blurring, he argued, and people were often mistaking one for the other.

This opinion was offered in the wake of journalist Pierre Salinger’s assertion that he had obtained “hard evidence” related to the downing of TWA Flight 800 off Long Island. Salinger had conducted some research on the issue and had come upon an internet-based document that seemingly blamed the crash on an accidental friendly-fire shootdown by the U.S. Navy.

The document was soon determined to be bogus. Salinger, a veteran journalist and Washington, DC insider, had been taken in. It is interesting to note that searching on this subject will still yield an incredible amount of grist for the conspiracy crowd.

I can remember using our family copy of the World Book Encyclopedia when I was growing up. I was struck by the fact that the book listed Dwight Eisenhower as the current president of the United States. I knew that was not correct, and it made me wonder: what else in the encyclopedia was no longer valid?

The internet is in many ways like an aging encyclopedia. Documents, once published, are difficult or impossible to suppress entirely. So even truthful documents that have aged into obsolescence sit right alongside the documents containing the current data. The researcher has to figure this out and learn to distinguish new from old, genuine from counterfeit, and truthful from not.

More recently, I was listening to a radio interview with a fellow who was writing about the Watergate investigation. He asserted that today this investigation would never have gotten off the ground. His suggestion was that the techniques Woodward and Bernstein used to gather the incriminating evidence are no longer used by reporters.

Today, reporters rely on internet-based information sources. The shoe-leather journalism of reporters from the seventies and earlier is truly a thing of the past. I’m not making this assertion; this is what the interviewee was saying.

The Watergate fellow seemed almost to fulfill the prediction made by my radio friend of fifteen years past. Could it be? Are we really entering an age of information stasis?

In short, the answer is no! Maybe even, Hell no!

Consider the following quote taken from the IBM web site on Big Data:

“Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.”

That is an amazing statistic: 90% of the world’s data has been created in the past 24 months.
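It is worth pausing on what that figure implies. Here is a quick back-of-the-envelope check; the steady exponential growth rate is my own assumption, not something the IBM quote actually states:

```python
# Back-of-the-envelope check on the IBM figures quoted above.
# Assumption (mine, not IBM's): data volume grows at a steady
# exponential rate.

daily_bytes = 2.5e18                 # 2.5 quintillion bytes per day
yearly_bytes = daily_bytes * 365
print(f"~{yearly_bytes / 1e21:.2f} zettabytes created per year")

# If 90% of all data is under two years old, the total has grown
# tenfold in two years: data(now) = 10 * data(two years ago).
two_year_factor = 1 / (1 - 0.90)          # = 10x over two years
annual_factor = two_year_factor ** 0.5    # steady-growth assumption
print(f"Implied growth: {two_year_factor:.0f}x every two years, "
      f"about {annual_factor:.1f}x per year")
```

That works out to roughly a tenfold increase every two years, a curve far steeper than the familiar doubling-every-two-years cadence of Moore’s law.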

If anything, it would seem that our ability to massage, extract, organize, store, and use data is woefully inadequate. One would think that even the gains described by Moore’s law could never let us catch up. Instead of pointing to price/performance ratios in data storage and maintenance as exemplary, we should be wringing our hands and pleading for more capacity, more speed, more access, more organization, more security, more everything.

What this means is that there will be a need for radically new thinking about how we store, index, and retrieve data, and about how we synthesize information from the data we maintain. Additionally, we need some way to measure the validity of the data we query.
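To make that concrete, here is a minimal sketch, entirely my own illustration and not a description of any existing system, of an index that stores freshness and source-trust metadata next to each document so that queries can be ranked by more than keyword match:

```python
from collections import defaultdict
from datetime import date

# Illustrative toy only; every name here is hypothetical.
class ValidityIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.docs = {}                    # doc id -> metadata

    def add(self, doc_id, text, published, source_trust):
        """source_trust: 0.0 (unknown origin) to 1.0 (well vetted)."""
        self.docs[doc_id] = {"published": published, "trust": source_trust}
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, term, today=None):
        """Rank matches by freshness weighted by source trust."""
        today = today or date.today()
        results = []
        for doc_id in self.postings.get(term.lower(), ()):
            meta = self.docs[doc_id]
            age_years = (today - meta["published"]).days / 365
            freshness = 1 / (1 + age_years)   # decays as documents age
            results.append((freshness * meta["trust"], doc_id))
        return sorted(results, reverse=True)

# The aging-encyclopedia problem from earlier, in miniature:
idx = ValidityIndex()
idx.add("worldbook-1959", "Eisenhower is the current president",
        date(1959, 1, 1), source_trust=0.9)
idx.add("news-2012", "a current look at the presidency",
        date(2012, 5, 1), source_trust=0.6)
print(idx.search("current", today=date(2012, 6, 1)))
# The 1959 entry, however trustworthy its source, scores near zero.
```

The mechanics are trivial; the point is the metadata. An index that carries publication dates and provenance alongside the text can at least flag my World Book problem automatically.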

I find myself going to the snopes.com site on a regular basis. Snopes is fine for the occasional rumor or wild story that pops up now and then. But what about the rest of the stuff we wade through online?

For many people, a search engine and an internet connection are simply not enough. Businesses certainly need to be sure they are informed with accurate data from reliable sources.

We talk about business intelligence as a strategic necessity in the world of corporate data. The volume of data created today drives that same level of urgency in other disciplines as well, even in journalism.

But still, I think Woodward and Bernstein would do just fine today. Rather than knocking on the doors of DC townhouses, they would be browsing around on Facebook. Rather than running down to Miami to speak with a witness, they would locate the guy on LinkedIn. Perhaps they would have simply checked out the president’s Google+ circles: “. . . look Bob, Gordon Liddy and Howard Hunt are both in Bob Haldeman’s Plumber’s Circle!”

Almost certainly, the manual search through thousands of circulation records would not have happened. Today, they could easily see anyone’s favorite books by reviewing their Amazon Reading List. Certainly, the Library of Congress could have simply supplied them with access to online circulation records.

It really is still a matter of not believing everything you read, whether it came from a newspaper, an encyclopedia, or a Google search. The reader must beware; they must be skeptical and seek confirmation of validity.

“. . . and ye shall know the truth, and the truth shall make you free”