One of the things it seems to mean is that good, accurate science reporting is not a high priority.
Junk DNA is one of those subjects that seem to bamboozle science journalists. They just can't accept the possibility that much of our genome serves no purpose. One of the most extreme examples of this bias can be found in an article by Veronique Greenwood titled "What We Lose."
The point of the article is that scientific models aren't perfect. They often oversimplify and, more dangerously, they can exclude the very information required to refute the model. The example she uses is the software that will select which data to look at when the Large Hadron Collider (LHC) starts working. Greenwood's point is that the software might ignore the most interesting collisions because they aren't what scientists expect.
Here's how she explains the danger.
It wouldn't be the first time a standing model has excluded data that could revise it. In the 1980's, NASA analyses of the ozone layer flagged a great many data points as errors—values that seemed too low to be real, values that indicated a huge hole in the planet's protective layer. NASA scientists overlooked the possibility until an outside group published its discovery of the ozone hole in 1985.
Something similar happened in the 1990's when DNA that didn't code for proteins was labeled "junk." Noncoding DNA, biologists have since found, regulates protein-coding DNA.
No, Ms. Greenwood, that's not what happened. The concept of junk DNA has been around for 35 years, and it is well established that much of our genome consists of degenerate transposons and pseudogenes. There's good evidence that up to 90% of our genome may be junk, perhaps more.
Regulatory sequences have been known for over forty years. They cannot account for more than a small fraction of noncoding DNA. You are dead wrong when you claim that a function has been found for most junk DNA.