
DAN Coded AI? In The DNA

Updated: Mar 3, 2021


At one's first and simplest attempts to philosophize, one becomes entangled in questions of whether, when one knows something, one knows that one knows it, and what, when one is thinking of oneself, is being thought about, and what is doing the thinking. After one has been puzzled and bruised by this problem for a long time, one learns not to press these questions: the concept of a conscious being is, implicitly, realized to be different from that of an unconscious object. In saying that a conscious being knows something, we are saying not only that he knows it, but that he knows that he knows it, and that he knows that he knows that he knows it, and so on, as long as we care to pose the question. There is, we recognize, an infinity here, but it is not an infinite regress in the bad sense, for it is the questions that peter out, as being pointless, rather than the answers. The questions are felt to be pointless because the concept contains within itself the idea of being able to go on answering such questions indefinitely. Although conscious beings have the power to keep going on, we do not wish to exhibit this as a succession of tasks they are able to perform, nor do we see the mind as an infinite sequence of selves and super-selves and super-super-selves. Rather, we insist that a conscious being is a unity, and that when we talk about parts of the mind, we do so only as a metaphor, and will not allow it to be taken literally.

The paradoxes of consciousness arise because a conscious being can be aware of itself, as well as of other things, and yet cannot really be construed as being divisible into parts. It means that a conscious being can deal with Gödelian questions in a way in which a machine cannot, because a conscious being can both consider itself and its performance and yet not be other than that which did the performance. A machine can be made, in a manner of speaking, to "consider" its performance, but it cannot take this "into account" without thereby becoming a different machine, namely the old machine with a "new part" added. But it is inherent in our idea of a conscious mind that it can reflect upon itself and criticize its own performances, and no extra part is required to do this: it is already complete, and has no Achilles' heel.

The thesis thus begins to become more a matter of conceptual analysis than of mathematical discovery. This is borne out by considering another argument put forward by Turing. So far, we have constructed only fairly simple and predictable artifacts. When we increase the complexity of our machines, there may, perhaps, be surprises in store for us. He draws a parallel with a fission pile. Below a certain "critical" size, nothing much happens; but above the critical size, the sparks begin to fly. So too, perhaps, with brains and machines. Most brains and all machines are, at present, "sub-critical": they react to incoming stimuli in a stodgy and uninteresting way, have no ideas of their own, and can produce only stock responses. But a few brains at present, and possibly some machines in the future, are super-critical, and scintillate on their own account. Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that "super-critical" machines will be quite unlike the simple ones hitherto envisioned. This may be so. Complexity often does introduce qualitative differences.
Although it sounds implausible, it might turn out that above a certain level of complexity a machine ceases to be predictable, even in principle, and starts doing things on its own account; or, to use a very revealing phrase, it might begin to have a mind of its own. It would begin to have a mind of its own when it was no longer entirely predictable, even in principle, but was capable of doing things that we recognize as intelligent, and not just mistakes or random shots, but which we had not programmed into it. But then it would cease to be a machine, within the meaning of the act. What is at stake in the mechanist debate is not how minds are, or might be, brought into being, but how they operate. It is essential for the mechanist thesis that the mechanical model of the mind should operate according to "mechanical principles," that is, that we can understand the operation of the whole in terms of the operations of its parts, and that the operation of each part shall either be determined by its initial state and the construction of the machine, or be a random choice between a determinate number of determinate operations. If the mechanist produces a machine which is so complicated that this ceases to hold good of it, then it is no longer a machine for the purposes of our discussion, no matter how it was constructed. We should say, rather, that he had created a mind, in the same sort of sense as we procreate people at present. There would then be two ways of bringing new minds into the world: the traditional way, by begetting children born of women; and a new way, by constructing very, very complicated systems of, say, valves and relays. When talking of the second way, we should take care to stress that although what was created looked like a machine, it was not one really, because it was not just the total of its parts: one could not even tell the limits of what it could do, for even when presented with a Gödel-type question, it got the answer right. In fact, we should say, in brief, that any system which was not floored by the Gödel question was eo ipso not a Turing machine, i.e. not a machine within the meaning of the act.





This process will create two new double strands of DNA, each identical to the original one. Now if our solution is to be based on this idea, it must involve a set of proteins, coded for in the DNA itself, which will carry out these two next steps. It is believed that in cells, these two steps ("unravel the two strands from each other" and "make a new strand to match each of the two new single strands") are performed together in a coordinated way, and they require three principal enzymes: DNA endonuclease, DNA polymerase, and DNA ligase. The first is an unzipping enzyme: it peels the two original strands apart for a short distance and then stops. Then the other two enzymes come into the picture. The DNA polymerase is basically a copy-and-move enzyme: it chugs down the short single strands of DNA, copying them complementarily, in a fashion reminiscent of the copy mode in Typogenetics. In order to copy, it draws on raw materials, specifically nucleotides, which are floating about in the cytoplasm. Because the action proceeds in fits and starts, with some unzipping and some copying each time, short gaps are created, and the DNA ligase is what plugs them up. The process is repeated over and over again. This precision three-enzyme machine proceeds in careful fashion all the way down the length of the DNA molecule, until the whole thing has been peeled apart and simultaneously replicated, so that there are now two copies of it.
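To make the unzip-copy-ligate cycle described above concrete, here is a minimal sketch in Python. It is a toy model only: the function and chunk size are my own illustrative choices, and it ignores real-world details such as strand directionality and the antiparallel orientation of the two strands.

```python
# Illustrative sketch (not from the original text): a toy model of the
# unzip / copy / ligate cycle described above, one short chunk at a time.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(double_strand, chunk=4):
    """Return two new double strands, each identical to the original.

    double_strand is a pair (top, bottom). The 'endonuclease' role is played
    by slicing off short chunks, the 'polymerase' role by copying each chunk
    complementarily, and the 'ligase' role by joining the chunks back up.
    """
    top, bottom = double_strand
    new_bottom_parts, new_top_parts = [], []

    for start in range(0, len(top), chunk):          # unzip a short stretch
        top_piece = top[start:start + chunk]
        bottom_piece = bottom[start:start + chunk]
        # polymerase: draw on free "nucleotides" to build complementary copies
        new_bottom_parts.append("".join(COMPLEMENT[b] for b in top_piece))
        new_top_parts.append("".join(COMPLEMENT[b] for b in bottom_piece))

    # ligase: plug the gaps between successive chunks
    return (top, "".join(new_bottom_parts)), ("".join(new_top_parts), bottom)

original = ("ATGCGTAC", "TACGCATG")
copy1, copy2 = replicate(original)
assert copy1 == original and copy2 == original
print(copy1, copy2)
```

The point of the sketch is simply that each original strand ends up paired with a freshly built complement, so the one double strand becomes two identical ones.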


Note that in the enzymatic action on the DNA strands, the fact that information is stored in the DNA is just plain irrelevant: the enzymes are merely carrying out their symbol-shunting functions, just like the rules of inference in the MIU system (Hofstadter's toy formal system for producing derivations). It is of no interest to the three enzymes that at some point they are actually copying the very genes which code for them. The DNA, to them, is just a template without meaning or interest. It is quite interesting to compare this with the Quine-sentence method of describing how to construct a copy of itself (a quine, in the computer-program sense, is a program which takes no input and produces a copy of its own source code as its only output; the standard terms in the computability-theory and computer-science literature are "self-replicating," "self-reproducing," and "self-copying" programs). There, too, one has a sort of "double strand": two copies of the same information, where one copy acts as instructions and the other as template. In DNA, the process is vaguely parallel, since the three enzymes (DNA endonuclease, DNA polymerase, DNA ligase) are coded for in just one of the two strands, which therefore acts as program, while the other strand is merely a template. The parallel is not perfect, for when copying is carried out, both strands are used as templates, not just one. Nevertheless, the analogy is highly suggestive. There is a biochemical analog to the use-mention dichotomy: when DNA is treated as a mere sequence of chemicals to be copied, it is like the mention of typographical symbols; when DNA is dictating what operations shall be carried out, it is like the use of typographical symbols.
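Since the passage above defines a quine, a concrete example may help. The following is a standard two-line Python quine (not taken from the post); the comment lines are not part of the quine itself.

```python
# The two lines below print themselves exactly (the comments are not part of
# the quine). The string s is the "template" (data that is mentioned), while
# the print statement is the "program" (instructions that are used).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The same information appears twice, once as a quoted string and once as executed code, which mirrors the program/template split between the two DNA strands described above.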


There are several levels of meaning which can be read from a strand of DNA, depending on how big the chunks are which you look at and how powerful a decoder you use. On the lowest level, each DNA strand codes for an equivalent RNA strand, the process of decoding being transcription. If one chunks the DNA into triplets, then by using a "genetic decoder" one can read the DNA as a sequence of amino acids; this is translation (on top of transcription). On the next natural level of the hierarchy, DNA is readable as a code for a set of proteins. The physical pulling-out of proteins from genes is called gene expression. Currently, this is the highest level at which we understand what DNA means.
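The two lowest decoding levels named above, transcription and translation, can be sketched in a few lines of Python. This is a simplification that treats the given strand as the coding strand (so transcription is just T to U), and the codon table below contains only a handful of entries of the standard genetic code for illustration.

```python
# Sketch of the two lowest levels of "DNA meaning" described above:
# transcription (DNA -> RNA) and translation (RNA triplets -> amino acids).
# Only a few codons of the standard genetic code are included.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "UGC": "Cys", "ACU": "Thr", "UAA": "STOP",
}

def transcribe(dna):
    """Level 1: read the strand base by base, producing the equivalent RNA."""
    return dna.replace("T", "U")

def translate(rna):
    """Level 2: chunk the RNA into triplets and decode them as amino acids."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

dna = "ATGTTTGGCTGCACTTAA"
rna = transcribe(dna)          # 'AUGUUUGGCUGCACUUAA'
print(translate(rna))          # ['Met', 'Phe', 'Gly', 'Cys', 'Thr']
```

Each level uses a bigger chunk and a more powerful decoder than the one below it, which is exactly the point of the hierarchy described in this section.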


However, there are certain to be higher levels of DNA meaning which are harder to discern. For instance, there is every reason to believe that the DNA of, say, a human being codes for such features as nose shape, musical talent, quickness of reflexes, and so on. Could one, in principle, learn to read such pieces of information directly off a strand of DNA, without going through the actual physical process of epigenesis, the physical pulling-out of a phenotype from a genotype? Presumably yes, since, in theory, one could have an incredibly powerful computer program simulating the entire process, including every cell, every protein, every tiny feature involved in the replication of DNA and of cells, to the bitter end. The output of such a pseudo-epigenesis program would be a high-level description of the phenotype.


There is another, extremely faint, possibility: that we could learn to read the phenotype off the genotype without doing an isomorphic simulation of the physical process of epigenesis, by finding some simpler sort of decoding mechanism. This could be called shortcut pseudo-epigenesis. Shortcut or not, pseudo-epigenesis is, of course, totally beyond reach at the present time, with one notable exception: in the species Felis catus, deep probing has revealed that it is indeed possible to read the phenotype directly off the genotype. The reader will perhaps better appreciate this remarkable fact after directly examining the relevant section of the DNA of Felis catus. With that being said, the DNA can be read as a sequence of:




1) bases (nucleotides) ...transcription

2) amino acids ...translation

3) proteins (primary structure) ...gene expression

4) proteins (tertiary structure) ...gene expression

5) protein clusters ...high levels of gene expression

6) ???? ...unknown levels of DNA meaning

n-1) ??????

N) physical, mental, and psychological traits ...pseudo-epigenesis



Note the base-pairing of A and T (Arithmetization and Translation), as well as of G and C (Gödel and Crick). Mathematical logic gets the purine side, and molecular biology gets the pyrimidine side. To complete the aesthetic side of this mapping, I chose to model my Gödel-numbering scheme on the genetic code absolutely faithfully; in fact, the table of the genetic code becomes the table of the Gödel code. Each amino acid, of which there are twenty, corresponds to exactly one symbol of TNT, of which there are twenty. Thus, at last, my motive for concocting "austere TNT" comes out: so that there would be exactly twenty symbols! The Gödel code is shown in the accompanying table.
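To illustrate the idea of a Gödel code patterned on the genetic code, here is a tiny sketch in which triplets map to TNT symbols exactly the way codons map to amino acids. The particular codon-to-symbol assignments below are invented for the example; they are not Hofstadter's actual table.

```python
# Illustrative only: a tiny fragment of a "Gödel code" in the spirit described
# above, mapping triplets (codons) to TNT symbols just as the genetic code maps
# codons to amino acids. These assignments are made up for the example and are
# NOT the real table from GEB.

GODEL_CODE = {
    "AAA": "0", "AAC": "S", "AAG": "+", "AAU": "·",
    "ACA": "(", "ACC": ")", "ACG": "=", "ACU": "a",
    # ...the full table would distribute all codons among the 20 austere-TNT symbols
}

def decode_strand(rna):
    """Read an RNA strand triplet by triplet as a string of TNT symbols."""
    return "".join(GODEL_CODE.get(rna[i:i + 3], "?")
                   for i in range(0, len(rna) - 2, 3))

print(decode_strand("AACAAAACGAACAAA"))   # 'S0=S0' under these toy assignments
```

The aesthetic point is that the very same table-lookup machinery does double duty: read one way it yields chemistry, read the other way it yields strings of formal arithmetic.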

One of the most interesting similarities between the two sides of the map is the way in which "loops" of arbitrary complexity arise on the top level of both: on the left, proteins which act on proteins which act on proteins, and so on, ad infinitum; on the right, statements about statements about statements of meta-TNT, and so on, ad infinitum. These are like heterarchies, in which a sufficiently complex substratum allows high-level Strange Loops to occur and to cycle around, totally sealed off from the lower levels. Incidentally, you may be wondering: what, according to the Central Dogmap, is Gödel's Incompleteness Theorem itself mapped onto? It turns out that the Central Dogmap is quite similar to the mapping that was laid out between the Contracrostipunctus and Gödel's Theorem. One can therefore draw parallels among all three systems:

1) formal systems and strings

2) cells and strands of DNA

3) record players and records






