Author: Matt Taylor
Date: 12:57:05 12/17/02
Science fantasy is always enjoyable. I'm not sure what the author means by "self-modifying code is mostly experimental." People were writing self-modifying code in the '70s. (Anyone remember Data General?) They were still doing it in the '80s. I worked on removing the self-modification from some code written in the early '90s. Just last night I was suggesting to a friend that he use self-modifying techniques to optimize a project of his once he has data (e.g., skip a calculation whose multiplier is always 0 or 1, or whose addend is always 0; there's a rough sketch of the idea at the bottom of this post). It's not new. In fact, I have thrown the idea around many times with other people, and I'm currently implementing such a project.

I must also raise an eyebrow at the thought of a natural-selection optimizer. The number of ways to modify even a single instruction is immense, and to change things and still have them work right, you need a fair number of modifications all at the same time. The probability drops dramatically as you add complexity.

The biological debate has always raged over irreducibly complex systems. Since the same question applies here, I would argue that machine code is an irreducibly complex system. You have a bunch of registers; when the use of one changes, all the others have to change with it. Changing one instruction won't even produce code that works. Of course, simpler instruction sets are not necessarily as complex, but the difference between biochemistry and computer science is that in biochemistry you can get similar compounds by flipping genes. Computers are designed so you -can't-, and even if you could, you still wouldn't be doing anything useful in most cases.

-Matt
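For anyone curious, here is a minimal sketch of the 0/1-multiplier specialization mentioned above. The names are made up for illustration, and it picks a routine through a function pointer at run time rather than actually patching instructions; think of it as the portable stand-in for what a self-modifying version would rewrite in place.

    /* Hypothetical sketch: once the multiplier is known, select the
     * cheapest routine and skip the general multiply entirely. */
    #include <stdio.h>

    static double scale_zero(double x, double m) { (void)x; (void)m; return 0.0; }
    static double scale_one (double x, double m) { (void)m; return x; }
    static double scale_full(double x, double m) { return x * m; }

    typedef double (*scale_fn)(double, double);

    /* Choose the specialized routine for a multiplier that is now fixed. */
    static scale_fn specialize(double m)
    {
        if (m == 0.0) return scale_zero;
        if (m == 1.0) return scale_one;
        return scale_full;
    }

    int main(void)
    {
        double multiplier = 1.0;           /* only known at run time */
        scale_fn scale = specialize(multiplier);

        double data[] = { 3.5, 7.25, 9.0 };
        for (int i = 0; i < 3; i++)
            printf("%f\n", scale(data[i], multiplier));
        return 0;
    }

A true self-modifying version would overwrite the multiply in the hot loop itself instead of paying for the indirect call, but the payoff is the same: the work that always produces x*0 or x*1 never happens.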