Will the Data Deluge Make the Scientific Method Obsolete?
Chris Anderson, the editor-in-chief of Wired magazine, wrote an article last week, which you can find at Edge, proclaiming
The End of Theory: The Data Deluge Makes the Scientific Method Obsolete
Anderson claims that our progress in storing and analyzing large amounts of data makes the old-fashioned approach to science – hypothesize, model, test – obsolete. His argument rests on our growing ability to analyze data statistically with increasing efficiency, for example online behavior: “Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.”
Source: http://www.edge.org/3rd_culture/anderson08/anderson08_index.html
===============
THE END OF THEORY
Will the Data Deluge Make the Scientific Method Obsolete? [6.30.08]
By Chris Anderson
Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.
The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.
Introduction
According to Chris Anderson, we are at "the end of science" – that is, science as we know it: "The quest for knowledge used to begin with grand theories. Now it begins with massive amounts of data. Welcome to the Petabyte Age."
"At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later."
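Anderson's "dimensionally agnostic statistics" can be made concrete with a small, hypothetical sketch (not from the essay): instead of starting from a model of *why* columns in a behavior log should relate, we simply rank all pairwise correlations and let the strongest one surface. The column names and synthetic data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "behavior log": three independent columns, plus one column
# with a hidden linear dependence on clicks.
clicks = rng.poisson(5, n).astype(float)
time_on_page = rng.exponential(30, n)
scroll_depth = rng.uniform(0, 1, n)
purchases = 0.8 * clicks + rng.normal(0, 0.5, n)  # hidden dependence

data = np.column_stack([clicks, time_on_page, scroll_depth, purchases])
names = ["clicks", "time_on_page", "scroll_depth", "purchases"]

# Model-free step: compute the full correlation matrix and scan the
# upper triangle for the strongest pairwise association.
corr = np.corrcoef(data, rowvar=False)
pairs = [(abs(corr[i, j]), names[i], names[j])
         for i in range(len(names)) for j in range(i + 1, len(names))]
strongest = max(pairs)
print(f"strongest correlation: {strongest[1]} ~ {strongest[2]} "
      f"(|r| = {strongest[0]:.2f})")
```

No hypothesis was needed to find the clicks–purchases link, which is Anderson's point; the standard objection, of course, is that correlation mining alone cannot distinguish a causal link from a confounded one.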
In response to Anderson's essay, Stewart Brand notes that:
Digital humanity apparently crossed from one watershed to another over the last few years. Now we are noticing. Noticing usually helps. We'll converge on one or two names for the new watershed and watch what induction tells us about how it works and what it's good for.
The "crossing" that Anderson names in his essay has been developing in science for several years, and in the Edge community in particular.
For example, during the TED Conference in 2005, before, and during, the annual Edge Dinner, there were illuminating informal conversations involving Craig Venter (who pioneered the use of high volume genome sequencing using vast amounts of computational power), Danny Hillis (designer of the "Connection Machine", the massively parallel supercomputer), and Sergey Brin and Larry Page of Google: new and radical DNA sequencing techniques meet computational robots meet server farms in search of a synthetic source of energy.
And in August, 2007, at the Edge event "Life: What A Concept", Venter made the following point:
I have come to think of life in much more a gene-centric view than even a genome-centric view, although it kind of oscillates. And when we talk about the transplant work, genome-centric becomes more important than gene-centric. From the first third of the Sorcerer II expedition we discovered roughly 6 million new genes, which doubled the number in the public databases when we put them in a few months ago, and in 2008 we are likely to double that entire number again. We're just at the tip of the iceberg of what the divergence is on this planet. We are in a linear phase of gene discovery, and maybe in a linear phase of discovery of unique biological entities, if you call those species, and I think eventually we can have databases that represent the gene repertoire of our planet.
One question is, can we extrapolate back from this data set to describe the most recent common ancestor? I don't necessarily buy that there is a single ancestor. It's counterintuitive to me. I think we may have thousands of recent common ancestors and they are not necessarily so common.
Andrian Kreye, editor of the Feuilleton of Sueddeutsche Zeitung wrote on his paper's editorial pages that the event was "a crucial moment in history. After all, it's where the dawning of the age of biology was officially announced".
In the July/August Seed Salon with novelist Tom Wolfe, neuroscientist Michael Gazzaniga explains how the conversation began to change when neuroscience took off in the '80s and '90s:
There was a hunger for the big picture: What does it mean? How do we put it together into a story? Ultimately, everything's got to have a narrative in science, as in life.