Tuesday, March 26, 2013

Doug Axe Doesn't Understand Information Theory


Here we have the Discovery Institute's favorite biologist, Doug Axe, demonstrating his ignorance of information theory:

"... So, really, you put all that together, we now understand something about digitally-encoded information in cells, encoded in the genome. We understand why it's there: to encode proteins. And we understand how the proteins function to do the chemistry of life. And we also have the ability to measure, to some degree, how much information is there. If you put all that together, we now see something that looks very much like human designs, where we use digitally-encoded information to accomplish things, and we know that it's impossible to get information on that scale through a chance process that Darwinism employed."

This is false. We don't "know" any such thing. Axe cannot point to a single paper in the peer-reviewed literature that correctly explains why one can't "get information on that scale through a chance process that Darwinism employed". This is just something that creationists repeat over and over again without real justification.

In fact, just the opposite is true. Ironically, I am lecturing about Kolmogorov's theory of information today in my class, CS 462. In that class we show that, while it is possible to produce information through a deterministic process (for example, by iterating a simple map), it is in fact even easier to produce as much information as you like through a random process -- precisely the opposite of what Axe is claiming.
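To make the second half of that concrete, here is a toy sketch of my own (using compressed size as a crude upper bound on Kolmogorov complexity): the output of a trivial deterministic rule squeezes down to almost nothing, while the output of a random source is essentially incompressible, which is to say it carries close to the maximum possible information per symbol.

    # Compressed size as a crude upper bound on Kolmogorov complexity: random bytes
    # are essentially incompressible, while a simple deterministic sequence is not.
    import os
    import zlib

    N = 100_000
    deterministic = bytes(i % 251 for i in range(N))  # output of a trivial iterated rule
    random_bytes = os.urandom(N)                      # output of a random source

    for name, data in (("deterministic", deterministic), ("random", random_bytes)):
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name:13s}: compresses to {ratio:.1%} of its original length")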

"I remember thinking at the time that this looks like something, not just the product of engineering but the product of brilliant engineering. And that was the point where it occurred to me that someone needed to do the experiments to test whether that was really the case or not."

No experiment that Axe has done has tested the question of whether life occurs through the process of "brilliant engineering" or not. No one has a testable definition of "brilliant engineering" and no one has a procedure to test whether something is "brilliant engineering". Wes Elsberry and I gave eight challenges related to this kind of claim back in 2003. Ten years later, and not a single creationist has taken up our challenges.

We recognize human engineering because we are good at recognizing artifacts: the characteristic products of human activity.

"It's strange how your preconceptions really color the way you process data. And some people just went along with what they were taught, and I never tended to do that. I was always questioning what I was taught, including Darwinism."

And of course, creationists are miraculously free of preconceptions. That's what they're known for!

Here are a few other conventional ideas Axe has rejected:

  1. It's not a great idea to publish your papers in a vanity journal where you yourself are the managing editor.
  2. If you're a scientist, it's not a great career move to work for a "scientific" institute that gets most of its funding from the Discovery Institute --- a group with a documented history of misrepresentations, and driven by religious and political goals.
  3. It's not a great idea to have your colleagues extol the brilliance of your work, especially when referring to papers that have received few, if any, citations.
But hey, just go right ahead and ignore those conventions. You're a questioner, right?

"If you believe that everything was cobbled together through random processes, then there would be a lot of junk, there'd be the residue of cobbling sitting there and that's why people jumped to this junk DNA hypothesis. They found out that a very small fraction of the genome actually encodes proteins --- that was the one aspect of genomes that we understood well, is that they encode proteins --- so they assumed all the rest of it is junk. Well, the truth is, we didn't know what the rest of it was doing, but that doesn't mean it's junk. And it's becoming increasing clear that it isn't junk, and that's a significant prediction. It's not a prediction that Darwin himself made, but it follows very readily and naturally from Darwinism, and it turns out not to be correct. And that's becoming increasingly clear."

Axe misrepresents the history. Junk certainly could arise from an evolutionary algorithm, but it need not. It's logically possible that junk could have such a high evolutionary cost that it would tend to be weeded out. Acceptance of junk DNA came from data, not just theory. If you maintain that there is little or no junk in the genome, you have to explain exactly why different species of Allium have such wildly different genome sizes.

Axe likes to claim that he questions everything. But he hasn't questioned the ENCODE claims, even though they've been widely criticized. I guess that's due to his miraculous lack of preconceptions.

47 comments:

Diogenes said...

What a lying piece of shit! Are all IDers pathological liars?

Axe: "They found out that a very small fraction of the genome actually encodes proteins --- that was the one aspect of genomes that we understood well, is that they encode proteins --- so they assumed all the rest of it is junk."

Lying piece of shit! Molecular biologists have known since the 1950's that non-coding DNA can be functional! The Nobel Committee has handed out a shelf-full of prizes to scientists (none of them creationists) for finding functions in non-coding DNA!

How many times will these pathological liars repeat this lie?

Do we need to remind this lying piece of shit of the WHOLE HISTORY of molecular fucking biology!?

Nobel Prize for Jacques Monod and co-workers, 1965, for finding functions in non-coding DNA (regulatory elements).

Nobel Prize for Barbara McClintock in 1983 for her discovery of new functions in non-coding DNA (mobile genetic elements).

Nobel Prize for Tom Cech and Sidney Altman in 1989, for discovery of catalytic functions resulting from non-coding DNA (catalytic RNA= ribozymes).

Nobel Prize for Jack Szostak and co-workers in 2009, for research in 1980’s on function in non-coding DNA (telomeres).

Nobel Prize for Richard Roberts and Phillip Sharp in 1993 for discovering introns (in non-coding DNA). [http://www.nobelprize.org/nobel_prizes/medicine/laureates/1993/press.html]

The structure of tRNA was known by 1964, and its crystal structure was solved in 1974. tRNA is itself a non-coding RNA.

The ribosome was known to be largely nucleic acid in the 1950s; its general molecular structure was known by the early 1970s; and by the 1980s it was understood that the ribosome was a ribozyme -- based on functions residing in non-coding DNA.

Explain to me how scientists did not pay enough attention to function in non-coding DNA?

Meanwhile, the budget for Axe's Biologic Institute is about $300,000 a year. Over the last four years, that's $1.2 million. For that much, how many nucleotides of non-coding DNA did they discover to have a novel function?

ZERO.

Diogenes said...

Axe: "And it's becoming increasing clear that it isn't junk, and that's a significant prediction. It's not a prediction that Darwin himself made, but it follows very readily and naturally from Darwinism, and it turns out not to be correct. And that's becoming increasingly clear."

BULLSHIT. What's "increasingly clear" is that after spending $400 million, ENCODE could not find actual function in more than 10% of the genome -- if by "function" you mean something that contributes to the fitness of the organism.

Here are some references to ENCODE's "death of Junk DNA" being debunked in the peer-reviewed literature:

1. The C-value paradox, junk DNA and ENCODE. Sean Eddy. Current Biology, Volume 22, Issue 21, R898-R899, 6 November 2012. doi:10.1016/j.cub.2012.10.002. http://www.cell.com/current-biology/abstract/S0960-9822%2812%2901154-2. Preprint: http://selab.janelia.org/publications/Eddy12/Eddy12-preprint.pdf

2. Can ENCODE tell us how much junk DNA we carry in our genome? Niu DK, Jiang L. Biochem Biophys Res Commun. 2013 Jan 25;430(4):1340-3. doi: 10.1016/j.bbrc.2012.12.074. Epub 2012 Dec 22. http://www.ncbi.nlm.nih.gov/pubmed/23268340

3. “On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE.” Dan Graur, Yichen Zheng, Nicholas Price, Ricardo B. R. Azevedo, Rebecca A. Zufall and Eran Elhaik. Genome Biology and Evolution Advance Access. February 20, 2013 doi:10.1093/gbe/evt028. http://gbe.oxfordjournals.org/content/early/2013/02/20/gbe.evt028.short?rss=1

4. Is junk DNA bunk? A critique of ENCODE. Doolittle, W.F. (2013) Proc. Natl. Acad. Sci. (USA) published online March 11, 2013. [PubMed] [doi: 10.1073/pnas.1221376110]

Curt Cameron said...

There's a new post over at Panda's Thumb:

Stephen Meyer needs your help

Meyer, in his last book, said that the best explanation for the information content of DNA is intelligence. However, in listening to his interviews, I was never sure whether he was referring to the DNA sequence of a given life form (its information content supposedly unable to grow over time), or to the mechanism of DNA replication and how it uses the four nucleotides.

It sounds like for his new book, it's the latter - he is trying to use his information theory misunderstanding to cast doubt on the natural explanation for the Cambrian explosion.

Joe Felsenstein said...

Meyer, in his previous statements, has always left it carefully ambiguous whether he is talking about the origin of the DNA machinery or the subsequent change of the message in the DNA.

I once saw him debate against an evolutionary biologist. He announced that Digital Information has been found in the genome. The audience, mostly creationists, was wowed. Now they had been informed that what was in the genome was not just protein-coding genes, RNA-coding genes, control sequences and junk, but Something Else. Namely Digital Information! Clearly that was a sign of Design!

Of course, if he had been pressed on it he would have had to admit that what he meant by the Digital Information was the protein-coding genes, etc. Or else he would have said he was talking about the DNA machinery itself. He leaves that unclear.

kereng said...

"Ironically, I am lecturing about Kolmogorov's theory of information today in my class..."
Kolmogorov complexity is not the only way to define information.
Hazen and Szostak suggested measuring the information associated with a function in living things by the likelihood of acquiring that function.
I'm afraid Axe uses Werner Gitt's definition of information, which requires a purposeful sender.
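(For reference, the Hazen-Szostak measure is I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible sequences whose degree of function is at least Ex. A toy sketch, with a made-up "degree of function" purely for illustration:)

    # Toy illustration of Hazen-Szostak functional information:
    #   I(Ex) = -log2( fraction of sequences with degree of function >= Ex )
    # The "degree of function" below (counting one letter) is invented for illustration only.
    import math
    from itertools import product

    ALPHABET = "ACGT"
    LENGTH = 8

    def degree_of_function(seq):
        return seq.count("A")        # stand-in for some measured activity

    def functional_information(threshold):
        sequences = list(product(ALPHABET, repeat=LENGTH))
        functional = sum(1 for s in sequences if degree_of_function(s) >= threshold)
        fraction = functional / len(sequences)
        return -math.log2(fraction) if fraction > 0 else float("inf")

    for ex in range(LENGTH + 1):
        print(f"threshold {ex}: I = {functional_information(ex):6.2f} bits")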

Jeffrey Shallit said...

Werner Gitt has no coherent definition of information.

Where can we find Axe's proof that information in the Hazen-Szostak sense cannot be generated by an evolutionary algorithm?

John Stockwell said...

Nonlinear systems that create information are as close as the musical sound of a dripping faucet. Perhaps intelligent design theorists will invoke the God of the Drips to explain that.

John Pieret said...

I remember thinking at the time that this looks like something, not just the product of engineering but the product of brilliant engineering. And that was the point where it occurred to me that someone needed to do the experiments to test whether that was really the case or not.

Except, of course, when it isn't, and poor Casey Luskin is reduced to mewling that the bad design of the Ford Pinto was still design:

http://scienceblogs.com/tfk/2006/11/20/nomination-for-stupidest-man-a/

It's strange how your preconceptions really color the way you process data.

Which is just the "presuppositions" "argument" every YEC uses to explain away inconvenient facts:

http://dododreams.blogspot.com/2009/02/err-presumptive.html

Walks like a duck ...

Unknown said...

" Axe cannot point to a single paper in the peer-reviewed literature that correctly explains why one can't get information on that scale through a chance process that Darwinism employed".
The problem is searching the domain of possible amino acid sequences. To find a specific protein of modest size (150 amino acids long) is beyond plausibility.
For example: as Stephen Meyer shows in his earlier book, the maximum number of events that could have occurred since the big bang is about 10 to the power 150.
(Planck times per second (about 10^43), times seconds elapsed (about 10^17), times particles in the known universe (about 10^89).) This is an absurdly generous upper limit. Compare this to the number of configurations in the domain of our modest-length protein. There are 20 amino acids used in life, so the possible arrangements for a protein of this length number 20 to the power 150, or about 10 to the power 195. Thus this space would be unsearchable even if all sub-atomic particles in the known universe were exchanged for amino acids, each binding with another amino acid once per Planck unit of time, from the big bang until now. (This also assumes the same amino acid chain never gets tried twice, and that chains grow to 150 in length and no further!)
Please note this example doesn't account for all amino acids in life being of the left-handed form (2 to the power 150), or the requirement that they form peptide bonds rather than non-peptide bonds.
Now, as said above, a 150-amino-acid chain is modest; some proteins are far bigger. Polymerase is three thousand amino acids long and is used during protein synthesis, so it is important to life. Note we're dealing with exponents: 20 to the power 3000 is a huge number!
Obviously this is not the final word, and nobody knows the composition of the domain space in terms of potentially functional proteins (ones that fold and are stable), nor how many different ways there are to make some molecular machine that will perform a given function, etc. It does, however, show that the scale of the problem is likely too big for chance to be plausible.
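(For the record, the raw arithmetic above does come out roughly as stated if you take those rounded figures at face value; whether it is the relevant calculation is another matter. A quick check:)

    # Quick check of the rounded figures used above.
    import math

    events = 1e43 * 1e17 * 1e89                  # Planck times/second x seconds elapsed x particles
    arrangements_log10 = 150 * math.log10(20)    # log10 of 20^150

    print(f"upper bound on events since the big bang ~ 10^{math.log10(events):.0f}")
    print(f"arrangements of a 150-residue chain      ~ 10^{arrangements_log10:.0f}")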

Jeffrey Shallit said...

The problem is searching the domain of possible amino acid sequences.

Sorry, but that is not relevant to the question, which is how information is generated through random processes.

To find a specific protein of modest size (150 amino acids long) is beyond plausibility.

This is a basic error of reasoning. Who says that one needs to find "a specific protein"? It's like computing the probability that you would be born based on the probability that your parents, their parents, their grandparents, and so on back 30 generations all happened to meet, and that the particular sperm met the particular egg. Obviously this probability is astronomically small; yet here you are.

Jeffrey Shallit said...

Please note this example doesn't account for all amino acids in life being of the left-handed form (2 to the power 150

Oh, come on. Why do you think these events have to be independent, as they would have to be for you to multiply probabilities like that?

Unknown said...

“Sorry, but that is not relevant to the question, which is how information is generated through random processes.”

I think it’s indirectly relevant because if you can’t plausibly get there by luck then you would need to be guided by information. Also, a goal of Axe’s work was to provide estimates of the percentage of sequences in the domain that would fold into stable, potentially functional proteins. Though that isn’t stated in the video.

“This is a basic error of reasoning. Who says that one needs to find "a specific protein"? It's like computing the probability that you would be born based on the probability that your parents, their parents, and their grandparents, etc., back 30 generations happen to meet and the particular sperm met the particular egg. Obviously this probability is astronomically small; yet here you are.”

I think the truth is somewhere in the middle. In the sperm example any sperm will do. It's not the same here. Firstly, as above, not all amino acid chains will fold into a stable form. Secondly, all sperm do the same job, whereas proteins have to do a variety of jobs, requiring the protein to have certain properties (shape, charge, etc.). For multiple proteins to work together to do a job, you clearly rule out many more candidates, since only some can combine in the right way to build a structure or perform a function.

“Oh, come on. Why do you think these events have to be independent, as they would have to be for you to multiply probabilities like that?”

There is no chemical reason for one over the other as far as I know. For argument's sake, say all the amino acids in the prebiotic soup were left-handed; you still have a collective domain space, across the range of protein lengths, that's vastly bigger than the largest conceivable physical referent.

Thank you for replying, I realize this is quite an old thread!

Anonymous said...

Jeffrey,

You are right in saying that information can be generated by random processes. However, how fast "mathematically fast" is in practice is another question. This raises the question of statistical plausibility. Hazen information, i.e. non-trivial function, cannot plausibly be generated by stochastic and law-like (selection) factors alone within the time bounds of natural terrestrial history.

In order to disprove that practical observation, you need to demonstrate the opposite, i.e. you need to demonstrate that a halting program can plausibly emerge from chaos on the order of 10^17 seconds, given all natural processes and constraints.

I think that every time you want to prove somebody else is ignorant, you have to be extremely careful.

Jeffrey Shallit said...

you need to demonstrate that a halting program can plausibly emerge

You have no idea what you're talking about, do you? A "halting program", indeed. Try more comprehension and less babble.

Go read J. R. Koza, "Artificial life: spontaneous emergence of self-replicating and evolutionary self-improving computer programs", in Artificial Life III, C. G. Langton, ed., 1994, pp. 225-262, and come back when you've understood it.

Anonymous said...

Thank you for this reference and especially for your kindness.

Can I also suggest you read David Abel's "The First Gene"?

Anonymous said...

Jeffrey,

Thank you very much for the reference and for your exceptional politeness.

I am familiar with the works of Stuart Kauffman and Gregory Chaitin on this. Kauffman's work predated the work you recommended. Kauffman ended up questioning whether there is any law governing the emergence of life. Chaitin ended up deciding himself what program will and what won't work.

Mathematics is a language: whatever you assume will determine your results. Whether or not the assumptions are ludicrously unrealistic is immaterial to it. But in order to stay practical you need to always question your assumptions.

I maintain that your models need to be realistic, guys. You guys need to learn to listen.

Jeffrey Shallit said...

I've read David Abel. His work is utter nonsense and completely without any value. I cannot think of a single serious scientist who thinks he has anything to say. There is a reason why his work is, for all practical purposes, only cited by creationists. You can check the literature yourself.

The fact that you would proffer Abel strongly suggests a certain lack of discernment on your part. You need to be able to distinguish between real science published in real journals by real scientists, and transparent bullshit.

Anonymous said...

Who is then a serious scientist in your estimation? What about Niels Bohr or Albert Einstein or Max Born or Kurt Goedel? Is this real science?

Or maybe Mike Ruse who says life originated on the back of crystals?

You see, I am a practitioner and I did some research in the past. Science as such is just a formalization of human reasoning and therefore it is completely neutral to metaphysical questions. If somebody comes and says to me: "I can prove or disprove scientifically that your worldview is worthless", I will treat it as nonsense. No claims of this sort can be done on purely scientific grounds. This follows from the definition of science.

On the other hand, whatever scientific assumptions you start out with determines your results to a great extent.

Anyway, thanks a lot again for the reference. It looks like it's not free access. I will try to get it when I am back at work after vacation.

Jeffrey Shallit said...

"Or maybe Mike Ruse who says life originated on the back of crystals?"

You seem quite confused. Michael Ruse is a philosopher, not a scientist, and he is not an expert on the origin of life. You probably meant Cairns-Smith, although how you could confuse Ruse with Cairns-Smith is beyond me.

Cairns-Smith does not assert that "life originated on the back of crystals". Rather, he put forth a speculative hypothesis about this idea. If you cannot distinguish between a speculative hypothesis and a positive assertion, I feel like your powers of discernment (again) need some improvement.

Yes, Niels Bohr and Albert Einstein and Max Born were all serious scientists. Goedel was not; he was a mathematician.

David Abel is a retired veterinarian who publishes the same incoherent drivel over and over again in (generally-speaking) venues of very very low quality. The fact that you cannot distinguish between Abel and people like Bohr, Einstein, and Born, suggests a certain lack of discernment on your part.

Anonymous said...

I admit the possibility of me lacking discernment in this. Thanks for agreeing Niels Bohr was a scientist. That is a relief ) But why discriminate between a scientist and a mathematician is beyond me )

In my opinion, Abel raises important questions regarding function, order and randomness.

Charles Darwin was a geologist as far as I remember (and again as far as I remember he did not hold any scientific degrees at the time of publishing his magnum opus, correct me if I am wrong), which did not stop him doing work in biology. By the way, his work at first raised lots of mockery on him by fellow scientists.

Anyway, it's getting late here. Thanks for your answers.

Jeffrey Shallit said...

why discriminate between a scientist and a mathematician is beyond me

They study different things and use different methods. At our university, mathematics is not in the science faculty. Of course, this classification is not universal and some disagree.

In my opinion, Abel raises important questions regarding function, order and randomness.

But, judging from the number of citations to his work, actual biologists and physicists and mathematicians don't seem to agree. Frankly, from reading his work, it seems like he doesn't really understand the subject very well. I teach Kolmogorov complexity in my 4th year course here at Waterloo, and I doubt Abel could solve the homework assignments I give.

his work at first raised lots of mockery on him by fellow scientists

It did, but Darwin was also praised and well-studied and cited immediately. Not true for Abel, who submits to venues of very low quality, writes in an obscure and bafflegab manner, doesn't present his results at peer-reviewed conferences, publishes his own books, and doesn't get cited by anyone.

Again, more discernment is needed.

Unknown said...

I would really like to see an example of a random process producing information easily and in copious amounts, and please do not include footprints, because that is not the type of information that is in DNA. Information occurs after the data is processed, and that data are symbols that represent something. If the processor does not understand them, then the information is meaningless, or we can say that it does not exist.

The one case where random data does produce information, and easily, is when the processor is built to handle information that is currently outside its domain. If the processor cannot handle it, then it is not information, because it is processed into nothing.

Bringing Kolmogorov here is rather dubious because that is not what Doug is talking about. Sorry, but you are shattering a strawman.

Jeffrey Shallit said...

An example was right there in my blog post: weather. Every day we somehow make weather predictions based on information gathered from the environment. Weather is a chaotic process that produces information all the time.

I don't have any idea what you are referring to when you are talking about "footprints".

It is incorrect to say "information occurs after the data is processed", but if you want to maintain this, then you have defeated the claim of Dembski et al. that algorithms cannot create information.

I realized Axe is not talking about Kolmogorov information. But that simply shows that what he calls information is not the same as what actual scientists call information.
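If you want a toy model that is easier to play with than weather, the logistic map is the standard example: two starting points that agree to ten decimal places disagree completely within a few dozen steps, so describing the trajectory further and further out demands ever more information about the initial state. A minimal sketch:

    # Sensitivity to initial conditions in the logistic map (r = 4, a standard chaotic regime):
    # two starting values that agree to 10 decimal places soon disagree completely.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.2000000000, 0.2000000001
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")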

Unknown said...

Hey! Thanks for responding. I wasn't sure you would respond, considering the blog post is very old. Now, let us jump straight to the point. I asked you to give me an example of a random process producing information copiously and easily, and you said weather. I agree with you that it is a correct example. But check above: I explicitly mentioned that it should not be a footprint. What did I mean by that? Well, we can get information from anything, like from a river cutting through rocks, forming plains, depositing alluvium on its banks. This information, however, is about the very process that produced it. The information we derive from weather is what the weather currently is, or was a few seconds ago. We place equipment, for example an anemometer, and when the wind blows, we collect data about the wind.
An event happens, like you walking on a beach, and we gather data about the event. We contextualize and categorize it, and that is information. This information is not about something else but about the very thing that happened. This is a footprint. This is not what is in DNA, and I am not sure how many times I have to emphasize it. We are not looking at DNA and trying to figure out what happened in the past such that we have ACTG and not CTAG. We are trying to figure out what their arrangement means.

Before I go any further, let us define “data” and “information” to make sure we are on the same page. I really like this definition-
Data - discrete, objective facts about events
Information - data transformed by the value-adding processes of contextualization, categorization, calculation, correction and condensation.
(Davenport and Prusak (1998))
Or this -
Information - data are presented in a particular way in relation to a particular context of action
(Newell et al. (2002))

Tell me, if you agree with them.
Now, one of the reasons we ought to distinguish "footprints" from the other type of information (we will discuss shortly what that is) is that otherwise we would start thinking that the information Meyer and Doug and Dembski are talking about can be linked to virtually everything in the Universe. At the moment you read my comment, you will be wearing something, you will have eaten something, and you will probably be awake or asleep, happy, sad, or bored. A person looking at you can deduce information about you, as Sherlock Holmes does: what you ate, whether you slept properly, whether you have had a shower yet. Previous and current events/actions leave indicators, akin to fingerprints left after a crime, and that is what I mean by "footprints".
The jam stain you might have had on your shirt a week ago didn't tell us who would be the next President of the USA, but whether you ate bread with butter or jam in the morning.
Now you might remark that the information gathered from weather helps us tell about future weather. Yes, but it is not that information gathered from weather contains information about future weather; it is, as they say, "the present is the key to the future", a less-used brother of "the present is the key to the past". We use correlations and "causations" to figure out what the weather will be like in the future from the weather at present. This actually makes weather somewhat "deterministic"; otherwise we would never be able to predict anything.
So, what type of information is in DNA that Meyer, Doug, and Dembski talk about? Well, a simple experiment for you. Download a ".jpg" from the internet, change the extension to ".txt", and then open the file as a text file. Suddenly you will see some strange characters on screen. Can you interpret what it is? Well, if you convert the file back to ".jpg", you will get the picture again. So the data is there, all right, in the text file. But why don't we get anything as a text file, no information about what the picture represented? Because of the processor. Your PC will process a text file differently than an image file. Different encoding, different headers, different interpretation.
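(In code form, with a hypothetical local file; the point is only that the same bytes get interpreted differently:)

    # The same bytes, two interpretations. "photo.jpg" is a hypothetical local file.
    with open("photo.jpg", "rb") as f:
        data = f.read()

    print(data[:2] == b"\xff\xd8")       # True for a JPEG: an image viewer shows a picture
    print(data[:40].decode("latin-1"))   # the same bytes shown as text: strange characters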

Unknown said...

What information is there in it?
051 541 061 441 561 441 311 040 741 161 441 040 241 561 441 751 751 451 311 040 661 171 040 051 361 051 321 040 741 161 441 040 361 061 071 561 721
I leave it up to you to decode. After you decode it to get the relevant text, you will have to know something further to fully appreciate it.
So, there we have it: contextualization and categorization. If you supply a random man with raw data about today's weather, he won't understand it. "What does '3 0 60 10' mean?" he will ask, for example. And then you will tell him that the numbers represent temperature (in degrees C), precipitation, humidity, and wind speed (in km/hr).
So here we come to your next point. You said, "It is incorrect to say 'information occurs after the data is processed'". I do not agree. How then will you differentiate between data and information? I could say things about it myself, but let me just link something from the Internet.
http://www.differencebtw.com/difference-between-data-and-information/
https://www.dqglobal.com/2014/05/27/what-is-the-difference-between-data-and-information/
http://www.jhigh.co.uk/Intermediate2/Using%20Information/3_data&information.html
http://www.tutor2u.net/business/reference/the-difference-between-data-and-information
You said, "then you have defeated the claim of Dembski et al. that algorithms cannot create information". Firstly, I am not entirely with Dembski there. Whenever I see terms like "not" and "never", I become skeptical. However, if we limit ourselves to the context, that is, the kind of information Dembski is talking about, I will put myself in the "probably not" or "rarely" camp. Also, when Dembski is talking about the creation of information, he is not talking about the creation of information from already present data; he is talking about the creation of new information. Every piece of data has potential information because, according to the definition, data is about something. When you have processed it, then you have the information. This "creation" is not what Dembski is talking about. He is talking about completely new information. I think you can appreciate that it is just a semantics issue.
I certainly do not wish this to be that long a comment, so I will hurry up. You said, "I realized Axe is not talking about Kolmogorov information. But that simply shows that what he calls information is not the same as what actual scientists call information.", and I will strongly say no. I have read and done science (I won't mention fields, because that skews opinions, and I am willing to engage more with you), and nowhere have I seen anything that would say what Doug calls information is not information. The reason I said not to bring in Kolmogorov is that it is off-point. Just that. I am surprised you do not see this. Kolmogorov deals with the measurement of information content. I now understand what you meant when you said that randomness produces information easily; the pigeonhole principle, right? Information measurement is done by reducing/compressing it (until it can't be compressed further) without losing its meaning. That compressed size gives us the quantity of information. As random values have no correlation, they contain the most information. Otherwise, you can reduce the information by exploiting correlation, expressing the present value in terms of past values.

Unknown said...

Last example. The page that you have your blog post on is ".html".
This is something I got from your HTML - img alt="-" id="Image1_img" src="http://2.bp.blogspot.com/-gPxhkzK3mn0/VrLO3IQl4JI/AAAAAAAABOo/4KEa0_rP5SY/s227/jeff-math.jpg" style="visibility: visible;" width="226" height="227". The difference between a random "sTAHtAkpTdGv3W3nIRfH" and the contents of the HTML is what Doug is talking about. I can create a program that compresses HTML further, because HTML has a user-friendliness attached to it, so that is very easy; but that random string does not contain information about anything except itself. The line "img alt="-" id="Image1_img" src="http://2.bp.blogspot.com/-gPxhkzK3mn0/VrLO3IQl4JI/AAAAAAAABOo/4KEa0_rP5SY/s227/jeff-math.jpg" style="visibility: visible;" width="226" height="227"> " in the HTML tells the processor what to display on the page (<img), where to take the image from (src), and what the ID of the image is, which can be used in Javascript manipulations (id). Just like DNA.

Jeffrey Shallit said...

You completely ignored my example of varves, which provide information about "something else".

Do you or do you not agree with William Dembski when he claims that algorithms cannot produce information?

If so, how do you reconcile this with your claim that "Information occurs after the data is processed"?

If not, then what precise mathematical definition are you using? I already showed that Meyer's definition doesn't work.

Unknown said...

I wrote three separate comments, because of the word limit. I can only see one here. Did you get all three?
And what specifically do you want me to address about varves? I see you nowhere in this post talking about varves.

Jeffrey Shallit said...

The other two were marked as spam, so I didn't see them. Actually, spam seems pretty good.

Now answer my question about Dembski.

Jeffrey Shallit said...

This is not what is in DNA, and I am not sure how many times I have to emphasize it

Actually, it is. Our DNA contains instructions about ourselves, not about other creatures. In the same way, a footprint contains information about the person who made it.

What is a precise scientific or mathematical definition of "footprint", so I can give you an example you can't weasel out of?

Jeffrey Shallit said...

What is "completely new information"? What is a scientific or mathematical formulation of information so that I can tell whether it is "completely new" or not? If I tell you my age, is that "completely new information"? How about my favorite color? My VISA card number?

As for varves, just google them. Varves are examples of nature producing lots of information by a physical process.

Unknown said...

It seems that you have left the stage of intellectual dialogue and are now becoming confrontational. I would take that as a good sign for ID's ideas. I will now respond directly to your comments, because it seems you are more bent on proving yourself than on learning a new perspective.

1) No, varves do not provide information about something else. They provide information about the very process that created them. You are simply making a leap here. There are processes that create varves. By studying varves, you learn about these processes. Then you apply what you know as a fact pattern about these processes to learn about that "something else". Not only do you miss the fact that it is your extra knowledge that helps you know that "something else", but you also clearly do not realise that you are using an "intelligent mind" to know about something else; nature cannot know it by itself.

2) I have to facepalm on this one. You see no difference between the information contained in DNA and "varves"? And are you really asking for a definition from me when you just said that information is not processed data though every resource on Internet screams otherwise?
You want a definition. OK. Any process or event that happens produces changes that are characteristic of the process. These changes can be used to know more about the processes that created them. So they act as "footprints", or remnants, of an event. Or simply: I call information a "footprint" when it informs about the process(es) that created it, or about the data from which it is obtained.

3) I told you earlier: Dembski's terminology is different from what I'm using. If you take what Dembski says in context, then I am with him, though I would not use terms as strong as he does. One very easy difference to see is that Dembski treats what I refer to as data as information. So for Dembski DNA contains information, but under my definition DNA contains data, not information. Which brings us to your last question --

4) In the context I was referring to, completely new information is simply information that is not currently present, either as information or in data form. We may not have been able to retrieve it, or just have not retrieved it, but it is with us potentially.

Your VISA card number is actually a good example to take my point forward. Your card number can tell us a lot about the process used to create it. However, it also represents something else: you, or your bank account. That information is not a footprint, because it has nothing to do with the process used to manufacture the number.
When the bank merely generates the number, it tells nothing about you. When the bank assigns you that card number, then it becomes a totally different type of information.

Jeffrey Shallit said...

I'm sorry, your response is so incoherent, I don't know what to say.

I repeat, what precise mathematical definition of "information" are you using? What is a precise definition of "new information"?

information is not processed data though every resource on Internet screams otherwise

Go read the definition of "information" in any book on information theory. Then you will see it is not typically defined as "processed data". But even if you think it is, how much "processing" is necessary"? Tell us how to measure what kinds of processing count and what kinds don't.

You seem to be just babbling without thinking.

Jeffrey Shallit said...

Oh, and DNA also contains "information about the process" that created it. How could it be otherwise? That is what evolution is: organisms harvest information from their environment through the process of evolution.

Unknown said...

LOL, I gave you a definition of "information". I actually cited research papers because I knew from the start that you would cry foul. You might think everybody around you is subpar, but sorry, you are living in a bubble. I have read the definition of "information" in information theory, and strictly even by those standards I don't think you are right, but you have got some dumb attitude, requiring me to specifically read your field. Wherever you work, your group or your subject does not alone determine what information is. And if you think that is the case, go on and show your attitude to people working in other fields.
You asked me what information is. I provided you a formal definition. You asked me what a "footprint" is; I gave you a one-line definition. You intended to corner me and provide an example, shattering my system of thought, from which I cannot "weasel out". None is forthcoming. Just a platitude of words. You asked me what "completely new information" is; I gave you a one-line definition again, in simple words, of what I mean by it. That you do not like it does not mean I am wrong. You have to show how I am wrong.
Your rant about information being "processed data" does not make sense. If you came out of your bubble and headed anywhere on the Internet, you would learn, from scholars like you, the answers to your questions. It is not as if I am importing that definition from my midsummer night's dream. I can very well answer all your questions, because that is about the most basic thing you learn when you learn about information.

You said that "DNA also contains "information about the process" that created it". Of course, who denied that. I said in the beginning that almost everything by your definition contains information. But this is not what Dembski or Doug is talking about. DNA in itself is just a sequence of amino acids. How the sequence is organised, is determinant on the process which created it, so yes, it is a "footprint". But what physiology this sequence will transcribe into, this is not determined by the process which created it. For function expressed by DNA realises after the DNA is manufactured. How can a function that will be expressed by DNA after its manufacture, can affect DNA during its manufacturing process? Unless you believe in some wierd retro-causality.
Another example so that you get it. A sound is produced from a ceiling fan. You can know from analysing that sound about ceiling fan,like whether it is greased or not. This is a footprint. Similarly, when a guy says,"My dad is in army", you can by tone of his voice, determine whether he has a bad throat or not. This is also footprint. But latter example contains one other type of information that the person's dad is in army. This has nothing to do with the process, but how that stream of compression and rarefaction is "transcribed" by the listener.

Jeffrey Shallit said...

You cited research papers? Which ones? Please give full citations, not things like "Davenport and Prusak (1998)". You know, list the name of the journal, volume, page numbers, etc.

You seem not to understand what a mathematician or scientist would mean by "information". Instead you are citing books on management! This is not science.

I provided you a formal definition.

No, you didn't. A formal definition would be like the definition of Shannon and Kolmogorov, which you can find in any textbook on information theory.

How can a function that will be expressed by DNA after its manufacture affect DNA during its manufacturing process?

Because it's not the same DNA. The DNA in me is not the same exact atoms that correspond to the DNA in my children.

Unknown said...

"You cited research papers? Which ones? Please give full citations, not things like "Davenport and Prusak (1998)". You know, list the name of the journal, volume, page numbers, etc."

You know you can search for papers/books by the author's name and a quote? I don't understand why I need to spoon-feed you.

_"You seem not to understand what a mathematician or scientist would mean by "information". Instead you are citing books on management! This is not science."

No, I understand. I do not understand, however, why that is relevant. Firstly, that is not a management definition. It is a plain-English definition that also finds application in fields like data mining, data science, statistics, etc. I have seen plenty of different ways in which the term "information" is used, sometimes in relation to entropy, uncertainty, sequence, processing, relevancy, etc. I told you I am giving you a definition which I find reasonable across all fields. That definition is pretty coherent, and you haven't pointed out what is wrong with it. What you are saying is that I should only consider the definition prevalent in your field. But tell me why? Why is Shannon's definition better? Also, if it is a field issue, then what are you doing applying a mathematical definition in biology?
Now, let us go to information theory. I have heard Shannon's information defined in two ways. Firstly, "information is what resolves uncertainty". I like that definition, but it is problematic. For example, data also removes uncertainty. Is it information? Or is knowledge information? But nobody would say that data is information.
And what if you hear a false rumour? Suppose the rumour is that Mr. Alpha is a drug lord. Can a rumour be said to be information? Because on the one hand it removes uncertainty; however, it removes it in a false sense.
The second is not a definition but just a measure. Shannon says that the amount of information is -log p, where the base used is usually 2.
Now listen very carefully. You seem to know a good deal about Shannon, but do you know the rationale? Firstly, Shannon did not derive the formula; he just picked a function that satisfied the properties of information. So yes, you can have any other function, if that function also satisfies the basic properties that Shannon used.
Secondly, dude, this definition is relevant only in specific cases, like the transmission of information over channels, because it was developed for that. He developed a measure to quantify the needs, efficiency, and load of a channel or transmission mechanism. Put simply, in Shannon's terms the two strings "Hillary won US Presidential Election" and "D Trump won US Presidential Election" both have the same amount of information, assuming all characters are equally likely. You know one is false. But if you are transmitting this over a channel, the channel does not give a care whether Trump won or Hillary.
You are using a definition that is grossly inadequate for the field and unfair to Doug, Meyer, and Dembski. If you treat meaningless crap and a true fact as having the same information content, just because they have the same length, then you are actually kicking intelligence out of the picture.
So no, Shannon's definition is not the Bible in this case. And my definition is formal. It is not mathematical, because I am not trying to measure information but to define it.

"Because it's not the same DNA. The DNA in me is not the same exact atoms that correspond to the DNA in my children."
I have no clue how that relates to what I said. I am saying that DNA contains information/data used by your body to make decisions. This has nothing to do with how DNA is made, but with how it is interpreted. So, how is that not different?
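(For concreteness, the computation being described, under the equal-likelihood assumption above: the measure assigns the two strings the same number of bits regardless of which one is true.)

    # Shannon self-information under the commenter's equal-likelihood assumption:
    # every character carries log2(alphabet_size) bits, whether the message is true or not.
    import math

    def bits(message, alphabet_size=64):     # 64 is an arbitrary toy alphabet size
        return len(message) * math.log2(alphabet_size)

    for msg in ("Hillary won US Presidential Election",
                "D Trump won US Presidential Election"):
        print(f"{bits(msg):6.1f} bits  <- {msg!r}")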

Jeffrey Shallit said...

I'm really sorry, but your arguments are so incoherent, I cannot respond to them. All I can suggest is that you read a basic textbook on information theory. Without this, I think you will be mired in imprecise definitions and analogies.

Unknown said...

Cool. And I will say that you are so much in your bubble that you won't even lift your head and look beyond your version of information theory. This is the first sign of a falling establishment, and you represent it, unfortunately. If Claude E. Shannon had known that people in the future would employ his channel transmission theory well beyond its realm, and to subvert any contrariety, he would have dug his head into the ground like an ostrich.

Jeffrey Shallit said...

It's not "my" version of information theory, but the ones that are used in nearly every field of science and engineering.

*Your* version is an incoherent mess that does not appear to have any use, nor any predictive power.

You know nothing about Shannon, and you have no basis whatsoever to claim what he would or would not have done.

Anonymous said...

Professor Shallit, as a relatively uneducated observer of this exchange, I must say that your arrogant tone and recourse to ad hominem attack and appeals to authority in lieu of coherent explanation (i.e. logical fallacies), does not reflect well on you as a public educator.

As a lawyer, I don't resort to the jargon of my field in order to shut someone down. All I see from you is someone who waves his hands and shouts "Kolgomonov" as a means of proving a contention about evolution.

Jeffrey Shallit said...

It's "Kolmorogov", not "Kolmogonov".

"Kolmogorov complexity" is not jargon, it's essential to understanding my point. If you are unwilling to spend the time to understand the concept, perhaps the problem is with you and not me.

I entirely reject your claims of "ad hominem" and "appeal to authority".

Beyond said...

I've been trying to determine (with limited knowledge) how Axe failed to show his point.
It seems the arguments in this blog's responses center around what type of information Axe was referring to and yet I thought that part of his book was rather easy to sort out.
At one point in the history of this planet there was no DNA.
I think Axe is asking where the information to form that first strand of DNA (or the proteins necessary for that to become something alive) came from.
Basically, with no prior DNA in existence, how did the first one form?
If all the proteins to do so could not simply form themselves in numbers, and in those numbers be readily able to form a chain of DNA (without direction or guidance or manipulation), and also be ready to reproduce through that process... then clearly they were not the product of a random chance process.
In my understanding, Axe simply showed that these things couldn't have developed themselves without some type of intelligent intervention... thus the question, "Where did the information come from (to develop the first proteins by chance, that would form the first DNA by chance, and then spring to life by chance)?"

Jeffrey Shallit said...

Beyond:

I think you have a significant misunderstanding of how we currently think DNA formed. It's not like some first organism sprang into existence magically with the DNA instructions to reproduce it.

One current theory is that RNA came first. This would have been "naked" RNA that reproduced itself, long before DNA and proteins. You can read about it here: https://en.wikipedia.org/wiki/RNA_world .

Beyond that, there is plenty of information in the environment that can be "harvested" through the process of differential reproduction and imperfect replication. Everybody who's studied the issue knows this, so why doesn't Axe?

Beyond said...

Whether it was DNA or RNA that came first... even if incomplete or imperfect... it still has to eventually develop into something complex... based on what plan? An assembly of random elements into something complex that forms and lives...

Jeffrey Shallit said...

It looks like you don't understand evolution. There is no "plan"; evolution is governed by mutation and natural selection.

JimV said...

I don't understand what the deniers are trying to say about information either, but I think they may be trying to distinguish different kinds of information. Say, old information (footprints?) versus new information (an invention?). Certain kinds of information, they seem to be saying, are magic and/or can only be obtained by magic.

In my view, there is no new information in an objective (rather than subjective) sense. All possible information that could be available in this universe existed in potential when this universe was formed; it merely had to be discovered. Watt didn't create a steam engine (in the sense of creating something new ex nihilo), he discovered it. Edison didn't create a light bulb, he discovered a primitive way of making a practical one. The same with everything else human intelligence (or computer intelligence, or chimpanzee intelligence) takes credit for.

No magic is necessary for this. Trial and error will find something if it exists and we try hard enough. Edison tried about 1000 ways to make a practical light bulb filament before finding coarse thread with soot embedded in it (and still later, bamboo fiber). Einstein tried lots of different functions to base General Relativity on. They didn't and couldn't create anything whose potential to exist and work wasn't already inherent in this universe.

So basically, human intelligence and design work pretty much the same way biological evolution works: keep trying different things, discard the ones that work badly, keep the ones that work well enough for now, and pass on the ones that work to future generations by some form or forms of memory (genes, textbooks, etc.)
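That loop is short enough to write down. Here is a minimal "random variation plus keep-what-works" sketch; the fitness criterion is an arbitrary stand-in, and the score climbs steadily with no plan or foresight anywhere in it:

    # Minimal "random variation plus keep-what-works" loop; the fitness criterion
    # (count of 1 bits) is an arbitrary stand-in for "works well enough for now".
    import random

    def fitness(bits):
        return sum(bits)

    N = 100
    current = [random.randint(0, 1) for _ in range(N)]
    start = fitness(current)
    for _ in range(10_000):
        child = [b ^ (random.random() < 0.01) for b in current]   # imperfect copying
        if fitness(child) >= fitness(current):                    # keep what works
            current = child
    print(f"fitness went from {start}/{N} to {fitness(current)}/{N} with no plan in sight")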

This tends to snowball into greater and greater complexity and usefulness. Assuming that modern human brains go back at least 20,000 years (humanoid remains go back close to ten times that, but we can't tell if the soft tissue was the same), it took us at least 14,000 years to find the wheel-and-axle (according to archaeological evidence). But once you have that technology you can have carts, windlasses, pulleys, gears, ..., and PC hard drives. Similarly, modern bacteria have a huge library of genes, that can be modified by trial and error to digest nylon and citrate solutions. (Many new pharmaceuticals have been found by giving bacteria a problem and seeing how they solve it.)

No magic is necessary, so there is no need for a magical Intelligent Designer--and no evidence for one. By its incomprehensible nature, magic is never a satisfactory explanation anyway. It is just an excuse for not having an explanation.