The point was that, because a modern computer isn't even a faithful instance of a UTM, lacking infinite memory and running on a finite-speed processor, the Church-Turing thesis isn't immediately applicable to a computer. That goes double for early computers, which had even greater speed and memory limitations.
We're getting side-tracked on the theorem here. It's very clear that earlier transistors will not speed up its development, so let's just drop it, okay?
How much could you do with those circuits? A surprising amount; six years or so ago, I built a working phone using some very simple capacitors, transistors, and a few other small things.
You can build a surprising number of things. Which is the point I was trying to make.
But can you build a decent computer with a handful of transistors? Not really.
You can't build a computer with anything near the capabilities we unconsciously and routinely assign to computers, but we've had rather primitive machines which have "computed" since the Industrial Revolution kicked off.
If those primitive machines were useful, and cam-operated screw machines are so useful they're still used worldwide, what sort of additional utility could we get from a few transistors?
You can build neat little devices, but building a transistor version of something like the ENIAC? You're not going to have nearly enough to make a difference, especially if you're merely "tinkering" and not mass producing.
What would a few dozen transistors do for Bletchley Park's bombes? Or for the Huff-Duff sets aboard an RN escort?
If you do have highly specialized machines that are cheap enough to be built by a university and that would benefit from transistors (an important point, since the POD involves transistor development), the architecture wouldn't be anything like an actual computer. It wouldn't be something you'd be able to generalize.
Wouldn't the fact that you couldn't generalize such a machine, despite its utility for the purpose it was built for, provide a slight impetus towards producing a machine which could be generalized?
Think of a simple adder (since I don't know what your "engineering" background entails, the circuit diagram is here). Let's say someone connects a bunch of them together to make a 128-bit electrical adder. Let's even assume they use transistors to do it, to fit in with the OP. Now generalize that adder into a modern processor. Does that demand really make sense?
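To make the "chain a bunch of one-bit adders together" idea concrete, here's a minimal sketch in modern terms: a software model of a ripple-carry adder, where each stage's carry feeds the next. All names here are my own, purely illustrative; the point is only that the circuit does one fixed job, however wide you build it.

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: the basic transistor circuit under discussion."""
    total = a + b + carry_in
    return total % 2, total // 2  # (sum bit, carry out)

def ripple_carry_add(x, y, width=128):
    """Chain `width` full adders so each stage's carry feeds the next."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # the final carry out is simply dropped, as in hardware
```

Note that nothing in this structure branches, loops conditionally, or stores a program; widening it to 128 bits makes it a bigger adder, not a step toward a processor.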
How many purposes can a general adder be used for? Do you think telephone companies, among others, might have use for one in their switching facilities? Would you be surprised to learn they had a mechanical version of an adder which was a maintenance queen?
Can you take an adder and, with a few simple changes, make it into a Turing-complete processor? Not really. Making it Turing-complete would require you to completely revamp the whole system. It's nonsensical to compare a little adder with a processor.
I'm not suggesting we build Turing-complete or UTM machines with them. I'm suggesting that, because "primitive" machines in the OTL had surprising computational abilities for a fixed series of tasks, transistors could possibly expand the number of machines with those abilities and/or expand the number of abilities those machines have.
Likewise, it would be nonsensical to compare the little machines a research professor might use with a general processor, especially since professors are usually very specialized (a little less so back then, but still a generally true statement).
The little machine one professor has might spur another professor to make his own machine. And, with all those little machines purring in all those labs, someone might just start thinking about replacing them all with a universal machine. Or is that too far fetched?
A biologist looking at an astronomer's highly specialized astronomical computer and wondering if it could apply to insect population simulations makes little more sense than a biologist looking at an astronomer's telescope and wondering if that could be generalized to apply to insect populations.
Of course, that makes complete sense.
Naturally, the biologist will see the benefits of the astronomer's machine, think about the type of machine he could possibly use, build it if he's able, and then we're right back to the situation I mentioned above: hundreds of specially built machines all being used for special purposes and someone saying
"Hmmm..."
It doesn't matter how much a Jacquard loom can do, transistors + Jacquard loom or other such "computators" (a term I dislike because it's kind of misleading) aren't really going to spur development in computational theory.
I'm not interested in computational theory. I'm interested in people using more devices with computational abilities because transistors might make some of those devices relatively more widespread. Practice far more often outstrips theory than the other way around.
I don't like the term "computator" either but I've been using it in an attempt to differentiate computers from machines with computational abilities. The term "computers" is weighed down with too many assumptions. I had hoped that avoiding "computer" and using "computator" would help avoid those assumptions.
I'll use the Jacquard loom as an example because you're hung up on it. That loom had computational abilities. It was programmed via tapes or cards to produce different woven patterns, but just what it could be programmed to do was severely limited by its physical nature. You couldn't feed it a set of cards and have it calculate asteroid orbits, but you could "hack" it to produce weaving patterns no one had even conceived of when the machine was first constructed.
What I'm trying to point out to the "straights" here is that a computational "scale" exists. The UTM is all the way at one end, and a device which cannot compute at all, a "rock", is all the way over on the other end. When we say "computer" in 2011, we're assuming a device which is much closer to the UTM than the "rock". However, there existed and still exist devices which can perform more computations than the "rock" but nowhere near the level of computations performed by a computer or UTM. The Jacquard loom was one, screw machines are another, and there are thousands of other examples, both current-day and historical, each sitting along that computational scale.
Fire control systems aboard warships had computational abilities which grew in size and sophistication. How could a handful of transistors change that development? Telephone switching systems had computational abilities which grew in size and sophistication too. Again, how could transistors change that? Ditto encryption/decryption machines. Ditto frequency hopping radio transmitters. Ditto radar, sonar, direction finding, bomb sights, and avionics. The list for military applications alone is nearly endless.
We don't need to build anything that remotely resembles what we think of as a "computer" for transistors to help speed the development of machines with computational abilities. That's one of the things I was suggesting.
I was also suggesting that the existence of more machines with computational abilities could spur the development of "true" computers. With practical, everyday uses all around and more equipment to "tinker" with, there would be less need for many of the equipment-less "thought experiments" of the OTL to spur development. Your excellent explanations about the Entscheidungsproblem mean that idea is not plausible.