A few things occurred to me today.
1) Most of the research into Big Data concerns how to deal with it, but I haven't found much that simply describes it.
2) Given the amount of information some of these data sets represent, folks are sticking to simple tasks: word counts, proximities, relationship mapping. I wonder if there is value in applying the same principles to the wave of derived information, then doing it again, and again. All of that could happen while wave after wave of analysis runs on the original set, informed by the secondary and tertiary results. (A toy sketch of what I mean follows this list.)
3) I wonder if there is a way to prepare data to become Big Data. Not just scanning text into the computer, but running some of this initial parsing up front and keeping the results as metadata. Of course, once the metadata grows big enough, you face the same problems. (A second sketch below shows the idea.)
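To pin down what I mean in (2), here is a toy Python sketch. The names here (corpus, the wave variables) are mine, not any real framework's, and a real system would run this over a cluster rather than a list of strings.

```python
from collections import Counter

# Stand-in for a large corpus; purely illustrative.
corpus = [
    "big data is big",
    "data about data is metadata",
    "metadata can grow big",
]

# Wave 1: the classic simple task -- count every word in the original set.
wave1 = Counter(word for line in corpus for word in line.split())
print("wave 1:", wave1.most_common(3))

# Wave 2: treat the derived counts as a new data set and apply the same
# principle again -- count how often each count occurs.
wave2 = Counter(wave1.values())
print("wave 2 (frequency of frequencies):", dict(wave2))

# Wave 3: and again, on wave 2's output. Each wave is far smaller than
# the last, so later waves can run while wave 1 churns on fresh data.
wave3 = Counter(wave2.values())
print("wave 3:", dict(wave3))
```

In this toy case each wave shrinks fast. Whether that holds on real data, or whether the derived layer just becomes another Big Data problem (as in point 3), is exactly the open question.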
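And a sketch of (3): do the initial parsing at ingest time and carry the results along as metadata, so later analyses can start from the derived fields instead of the raw text. The record shape and field names are purely my invention.

```python
import json

def prepare(record_id, text):
    """Attach a first parsing pass to the raw record as metadata."""
    words = text.lower().split()
    return {
        "id": record_id,
        "raw": text,                       # the scanned text, untouched
        "meta": {                          # first-pass derivations
            "word_count": len(words),
            "vocab": sorted(set(words)),
        },
    }

print(json.dumps(prepare("doc-001", "Big Data about Big Data"), indent=2))
```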
The shape of things is still amorphous and out of reach. I am going to spend some time at the library tomorrow and see if I can get a better grip on things from there.