How do computers develop with earlier transistors?

"And then engineers will hit another brick wall pretty quickly before someone thinks up ICs"

http://en.wikipedia.org/wiki/Darlington_transistor

Mr Darlington had the brilliant idea to connect two or three transistors on the same die. He used it to compound the modest gains from a pair or prial of transistor stages, while keeping the temperature tracking manageable. Urban legend holds that he wanted to patent 'or more', too, but his lawyer talked him down...

Had that extra phrase been in, he would have collected on *every* IC produced for the following decades...
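A quick back-of-the-envelope sketch (illustrative only, numbers assumed) of why stacking stages helps: the base current of the first transistor gets amplified twice, so the composite current gain of a Darlington pair is roughly the product of the individual gains.

```python
# Composite current gain of a two-transistor Darlington pair.
# The first stage's emitter current drives the second stage's base,
# so the betas multiply rather than add.

def darlington_beta(beta1, beta2):
    """Composite current gain of a Darlington pair: b1*b2 + b1 + b2."""
    return beta1 * beta2 + beta1 + beta2

# Two mediocre early transistors (beta ~ 20 each, an assumed figure)
# behave like one device with a gain of several hundred:
print(darlington_beta(20, 20))  # -> 440
```

That multiplication is exactly the "compounding" described above: two lousy devices make one quite good one.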

Interesting. I wasn't familiar with those.

But my point was more that they'd slow down and any tech gains over RL would quickly reduce as other sciences (computer, chemical, and electrical engineering) would need to catch up to OTL conditions. This TL will start off 25 years ahead in one small area. But by the 50s, by the 60s, by the 70s... the TL will only be a couple of years ahead. The intermediate phase of the Darlington transistor is just that: an intermediate phase. Computers would still need to move on to ICs eventually, and that time won't be too much ahead of OTL.
 
What I'm seeing, in terms of computer evolution:

Early 1920's -- The transistor is invented (the POD)
By 1930 -- someone invents what OTL calls the Darlington Transistor
1936 (or earlier) -- Church-Turing Thesis
By 1940 -- An electronic computer using transistors (equivalent of OTL TRADIC)
By 1960 -- The Microprocessor is developed

So, all in all, a 25 year head start in transistors looks like it becomes a decade lead by the 1960's...
 
"And then engineers will hit another brick wall pretty quickly before someone thinks up ICs"

http://en.wikipedia.org/wiki/Darlington_transistor

Mr Darlington had the brilliant idea to connect two or three transistors on the same die. He used it to compound the modest gains from a pair or prial of transistor stages, while keeping the temperature tracking manageable. Urban legend holds that he wanted to patent 'or more', too, but his lawyer talked him down...

Had that extra phrase been in, he would have collected on *every* IC produced for the following decades...

Later on, Darlingtons were more usually used in audio or operational amplifiers [1], i.e. analog rather than digital applications.

However, the current gain (Hfe) in early transistors was probably lousy, so for RTL, a Darlington would be useful.
For all non-EE readers, see how RTL uses only one transistor for a multi-input inverting gate. This is a good logic family for discrete transistors and resistors.

Basically, RTL will come into play as soon as transistors can be mass-produced, and you can build any logic circuit from a 2-input NOR.
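To make that NOR point concrete for the non-EE readers, here's a quick illustrative sketch (mine, not anything from the linked article) of the standard construction of NOT, OR, and AND out of nothing but the 2-input NOR an RTL gate gives you:

```python
# NOR universality: every basic gate below is built only from a
# 2-input NOR, the primitive that an RTL gate provides.

def NOR(a, b):
    return int(not (a or b))

def NOT(a):          # NOR with both inputs tied together
    return NOR(a, a)

def OR(a, b):        # NOR followed by an inverter
    return NOT(NOR(a, b))

def AND(a, b):       # De Morgan: AND(a, b) = NOR(NOT a, NOT b)
    return NOR(NOT(a), NOT(b))

# Truth-table check over all input combinations
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
        assert AND(a, b) == (a & b)
```

Since {NOT, AND, OR} is functionally complete, any combinational logic circuit follows from the NOR alone.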

R

[1] If you scroll about three quarters of the way down the linked article, you can see a schematic. Q15 and Q19 are the Darlington configuration.
 
A big result of earlier transistors is better payload-for-weight for the space program. The engineers at STL made the precursors of ICs at the end of the 50s, when transistors were still brand new in practical use.

A lighter payload means, all else being equal, America has lunar orbiting probes by 1959 (as we won't need the ill-fated Atlas Able to launch them).

When did the Soviets invent transistors?
 
When did the Soviets invent transistors?

Long before Sputnik. I don't know exactly when the Soviets had transistors, but in OTL, high quality American transistors were easy to find in simple electronics in the 50s, and the patents and research papers had been public for decades before that (the 20s). The Soviets could've easily duplicated any of that, since all the technology was completely open to the public. The only issue is manufacturing, and that's something the Soviets could've duplicated rather quickly.
 
What I'm seeing, in terms of computer evolution:

Early 1920's -- The transistor is invented (the POD)
By 1930 -- someone invents what OTL calls the Darlington Transistor
1936 (or earlier) -- Church-Turing Thesis
By 1940 -- An electronic computer using transistors (equivalent of OTL TRADIC)
By 1960 -- The Microprocessor is developed

So, all in all, a 25 year head start in transistors looks like it becomes a decade lead by the 1960's...

As solomaxwell6 says, you can't keep transistors as a military secret. They weren't secret in the US after 1947, even with the Cold War on, so expect the know-how to get around the world.

Then we're likely to see a few trends:

* Early smaller radios. Cheap portable transistor radios by the end of the 30s. In the military, this means handheld or portable field radios (think of something like a briefcase with a headset attached, or a bigger brick phone from the early 80s).

* Radar gets a lot smaller. Still need the dish, but the back end gets smaller. Radar on every ship and a few large aircraft by WW2. Mobile radar stations on the ground.

* Radio controlled missiles/guided bombs.
As a spinoff, earlier space launches and satellites.

* A much more complex cipher war, where each side races to decode the others' messages.

* possibly earlier TV. Certainly earlier color TV

R
 
About the Church-Turing thesis...

It's my understanding that it's central to the Turing machine idea, which then leads to the universal Turing machine idea. Because our desktops, laptops, and palmtops are all essentially universal Turing machines, we tend to automatically equate "computer" with "universal Turing machine" when that hasn't always been the case.

We've had computers, or, more accurately, "computators", around for a long time. They were computing machines physically constructed to perform either a specific task or a specific range of tasks, whereas a universal Turing machine can be programmed to perform any task. There were and still are thousands of examples of these "computators" in use.

A screw machine is a "computator". You can install different cams and make different adjustments to make different parts. A Jacquard loom is a "computator" too, you provide different cards and thread to make different linens. Adding machines, electro-mechanical or purely mechanical, are "computators". The Norden bombsight and the Enigma machine were "computators" too.

Transistors should make these task-specific "computators" more useful, more easily constructed, and more easily adapted for more uses. The Church-Turing thesis is only needed for Turing machines and the universal Turing machine that follows from them. It isn't needed to build computers at all, just the kind of computers we use every day.
 
That's a good point, Don -- do you think these earlier, better developed "computators" lead to a quicker development of universal Turing machines following the thesis?

Oh, fine points by Roisterer as well -- really nicely illustrates what I think we're getting at...
 
That's a good point, Don -- do you think these earlier, better developed "computators" lead to a quicker development of universal Turing machines following the thesis?


I don't know. :eek:

The work done by Turing and Church was purely theoretical. It was another example of those wonderful "thought experiments" so many geniuses engage in.

Mathematicians have this annoying habit of pondering and solving incredibly esoteric matters that have absolutely no utility...

... until someone notices that the Whosis of Whatsis Theorem neatly describes the experimental data they're reviewing! :D

Church and Turing did their work on the theorem with no "impetus" other than their own desire to figure things out. However...

  • ... if earlier transistors mean more "non-Turing computators" in use...
  • ... and Turing, Church, or other mathematicians are aware(1) of that use...
  • ... wouldn't more use of "non-Turing computators" spark inquiries into theoretical mathematics involving the use and operation of "computators" and other computing machines?

I think the theorem would arise sooner thanks to earlier transistors. From what I can understand, there's nothing within the theorem which requires mathematical advances that hadn't occurred before 1900 or even earlier. All of the "parts" are already there, all we need is a reason for someone to put them together in a certain manner.


1 - It's certain that universities would have the "non-Turing computators" I'm nattering on about, each specifically built for working on a limited set of tasks involving astronomical problems and the like. The understandable desire for researchers to use a machine designed for one purpose for another purpose could lead to an earlier Church-Turing theorem; i.e. If only we had built it this way, you could use it for your asteroid orbit equations and I could use it for my insect population studies....
 
Ah, so a Church-Turing Thesis equivalent (and, by extension, UTM's) can come about earlier ITTL? Interesting... :D


Yes, because it wouldn't be the result of a thought experiment of no apparent utility which could have only been conceived by one or two people. Instead, someone wanting to do something could come up with a "working" theorem that someone else later formalizes.

Putting it another way, the desire to "hack" an existing "computator" into providing a greater functionality will be a bigger spur than "dreaming" about the possible abilities of machines which don't exist.

The boring old "someone wanting to do something" is usually how advances get made. As I pointed out in the Current Wars threads, even Tesla didn't foresee the need for a mercury arc rectifier. Instead, it was some everyday schmoe who wanted to use AC for more things at greater distances who developed it. And, after they developed it, they went right back to their "real" job.
 
As solomaxwell6 says, you can't keep transistors as a military secret. They weren't secret in the US after 1947, even with the Cold War on, so expect the know-how to get around the world.

They weren't even a secret in OTL 1925! :eek:

Ah, so a Church-Turing Thesis equivalent (and, by extension, UTM's) can come about earlier ITTL? Interesting... :D

Nope. Don Lardo's wrong. The Entscheidungsproblem was formally posed in the late 20s (1928). After that, it took years of thought and research to actually resolve it. A research paper isn't the work of a few months; it's the work of years. That means that at best, the Church-Turing thesis won't be developed until the mid-30s (Church's lambda-calculus statement of it didn't come until 1936). So, potentially there will be butterflies that move it a bit earlier, but not much earlier than OTL, and only because of butterflies, not for any of the sorts of reasons Bill mentioned.

The Church-Turing thesis is not necessary to generalize computer science problems. In fact, it's almost purely theoretical. The idea that an actual physical computer is a UTM is silly. Quite simply, it's not. Even ignoring the definition of a Turing machine (a state machine with a tape), a Turing machine requires infinite tape. Even if we consider a modern computer to be an abstraction of a Turing machine (which we do sometimes in theoretical work), a computer is not a Turing machine.
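For the non-theory readers, "a state machine with a tape" is the whole definition; a toy sketch of my own (a trivial bit-flipping machine, nothing historical) shows how little machinery is involved, and also why the *infinite* tape is the part no physical computer can have:

```python
# Minimal Turing machine simulator: a transition table over
# (state, symbol) pairs, a read/write head, and a tape that grows
# on demand (the "infinite tape" abstraction).

def run_tm(tape, rules, state="start", halt="halt"):
    tape = list(tape)
    pos = 0
    while state != halt:
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" = blank cell
        state, write, move = rules[(state, symbol)]
        if pos == len(tape):
            tape.append("_")
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Rules: scan right, flipping 0 <-> 1, and halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("10110", flip))  # -> "01001"
```

A real computer replaces the unbounded tape with finite memory, which is exactly why it's only ever an approximation of the formal object.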

Why does this matter? Because it means many of the implications of the Church-Turing Thesis don't apply. You can use it to figure out that, say, it is actually possible to solve the Traveling Salesman Problem. Would this matter to a computer engineer in the 30s? Probably not. Computers weren't anywhere near powerful enough to solve the TSP (even now, we only do the TSP in very specific cases). In terms of solvability in the 30s, it's far better to focus on computers built to solve very specific, specialized tasks. The more you can implement in hardware, the less you need to do in software, which means increased specialization but MUCH better efficiency. It doesn't matter if your computer is Turing complete if it's incredibly inefficient; consider the Z3 computer which was technically Turing complete, but in practical terms couldn't do anything like even a very primitive Turing complete computer designed slightly differently.

The more useful question, what can be solved efficiently (typically called P, for "deterministic polynomial" time), is not answered by the Church-Turing thesis, and wasn't really tackled until computability theory evolved (the Church-Turing thesis started computability theory, but the field didn't take off for decades afterward). The results of the Church-Turing thesis are just too general and vague to be of any relevance to early computer engineering; it was just an interesting math question.
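To put numbers on that decidable-versus-feasible gap: the obvious exhaustive TSP algorithm is only a few lines, but it checks (n-1)! tours, which is why "solvable in principle" meant nothing to a 1930s machine. A rough illustrative sketch (the distance matrix is made up):

```python
# Brute-force TSP: decidable, trivially programmable, and hopeless in
# practice, since n cities mean (n-1)! candidate tours to check.

from itertools import permutations

def tsp_bruteforce(dist):
    """dist[i][j] = distance between cities i and j; shortest round trip."""
    n = len(dist)
    best = float("inf")
    for tour in permutations(range(1, n)):       # fix city 0 as the start
        path = (0,) + tour + (0,)
        length = sum(dist[a][b] for a, b in zip(path, path[1:]))
        best = min(best, length)
    return best

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_bruteforce(dist))  # -> 18
```

Four cities means 6 tours; twenty cities means over 10^17, which no amount of 1930s transistor tinkering touches.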

Moving on to Don's "computator" idea. Again, the electrical engineering of the 20s and 30s just wasn't good enough to support mass-production of earlier transistors. Early transistors would be primarily a theoretical toy, used in small quantities for certain mostly non-computational applications. This is why I and later Roisterer brought up radios. Early transistors would be used for a certain kind of miniaturized luxury (or more likely military) radio. Perhaps occasional hearing aids, etc. But not for computers. The computing paradigm at the time didn't involve miniaturization; no one would've cared about the potential for transistors to make computers smaller/more powerful, especially since point-contact transistors developed in 1922 would be much more expensive than in 1947.

Early transistors are not going to be mass-produced in anywhere near the quantities needed to really drastically increase the amount of computation performed. As I've said several times in this thread, computer science wasn't developed enough to truly take advantage of transistors, and it'd be too expensive to produce many, anyway. No one's going to be saying "If only your astronomy machine could compute my population figures" if the university can't afford one of these astronomy machines! Most of the examples he gave of "computators" were things that wouldn't and couldn't have used transistors. Where in a Jacquard loom would you put a transistor?

Again, computer scientist who actually knows what he's talking about here.
 
Nope. Don Lardo's wrong. The Entscheidungsproblem was developed formally in the late 20s or so.


Okay, I understand that. No bump to the development of the Church-Turing theorem.

The idea that an actual physical computer is a UTM is silly. Quite simply, it's not.

I never meant to suggest it was and I'm sorry if I did.

I did want to suggest our laptops etc. are closer to UTMs than the analog, mechanical, or electro-mechanical machines which were computing well before the "tubes & transistor & chip" types came along.

Moving on to Don's "computator" idea. Again, the electrical engineering of the 20s and 30s just wasn't good enough to support mass-production of earlier transistors.

Okay, I understand that too.

Early transistors would be used for a certain kind of miniaturized luxury (or more likely military) radios.

Or whipped up in university labs for various purposes much like cyclotrons were?

Early transistors are not going to be mass-produced in anywhere near the quantities needed to really drastically increase the amount of computation performed.

I wasn't suggesting mass production actually. Just more people making and tinkering with them.

As I've said several times in this thread, computer science wasn't developed enough to truly take advantage of transistors, and it'd be too expensive to produce many, anyway. No one's going to be saying "If only your astronomy machine could computer my population figures" if the university can't afford one of these astronomy machines!

Rather than buy one, what if they can build one?

Most of the examples he gave of "computators" were things that wouldn't and couldn't have used transistors. Where in a Jacquard loom would you put a transistor?

You misunderstood that part of my post. I was pointing to various mechanical devices which "compute" after a fashion and predate what we think of as "computers". I certainly wasn't suggesting someone would hand craft a transistor and somehow graft it onto a Jacquard loom even if those looms were still being used in the early 1900s.

Again, computer scientist who actually knows what he's talking about here.

Except when it came to a large part of my post. ;)

I'm just an engineer whose career has involved getting things to actually work, and the gulf between what the code monkeys in their cubes believe and what actually works in the field can be quite an eye opener.
 
Wow, Solomon, I can't deny you know your stuff -- and you certainly managed to make me rethink my vision of TTL -- but I'm still eager to read Don's response...


Solomon is almost entirely spot on, especially the maths bits.

As far as the application ends of things, I think once someone begins playing with transistors and getting results, others will follow suit. "Hacking" began with the telegraph and many advances occurred when people using the equipment involved began tinkering with it.

Having seen how people actually use the tools and devices they work with, I have a great amount of respect for technological "hacking".

I didn't want to suggest that transistors would be mass produced, but I didn't specifically say they wouldn't. I do think that, once one researcher attains utility with the devices, other researchers will follow suit according to their needs.

After all, we are talking about a period in which researchers of all types routinely built much of their own lab equipment and thought nothing of it.
 
Okay, I understand that. No bump to the development of the Church-Turing theorem.

The Entscheidungsproblem was the precursor to the Church-Turing thesis. This produces a hard minimum of 1928 for the Church-Turing thesis, and a soft minimum of mid-30s.

I never meant to suggest it was and I'm sorry if I did.

The point was that because a modern computer isn't even an abstraction of a UTM, because of the lack of infinite memory, and because of the slow processor, the Church-Turing thesis isn't immediately applicable to a computer. Especially early computers that have even more speed and memory limitations.

I wasn't suggesting mass production actually. Just more people making and tinkering with them.

You need them to be mass produced. Have you messed around with breadboards and circuits before? Since you say you're an engineer of some sort, I'll bet you have, at least in school. How much could you do with those circuits? A surprising amount; six years or so ago, I built a working phone using some very simple capacitors, transistors, and a few other small things. But can you build a decent computer with a handful of transistors? Not really. You can build neat little devices, but building a transistor version of something like the ENIAC? You're not going to have nearly enough to make a difference, especially if you're merely "tinkering" and not mass producing.

Rather than buy one, what if they can build one?

Computers are far too large an endeavor. If you do have highly specialized machines that are not only cheap enough to be built by a university, and would benefit from a transistor (an important part, since the POD involves transistor development), the architecture wouldn't be anything like an actual computer. It wouldn't be something you'd be able to generalize.

Think of a simple adder (since I don't know what your "engineering" background entails, the circuit diagram is here). Let's say someone connects a bunch of them together to make a 128-bit electrical adder. Let's even assume they use transistors to do it, to fit in with the OP. Now generalize that adder to be a modern processor. Does that leap really make sense? Can you take an adder and, with a few simple changes, make it into a Turing-complete processor? Not really. Making it Turing-complete would require you to completely revamp the whole system. It's nonsensical to compare a little adder with a processor. Likewise, it'll be nonsensical to compare the little machines a research professor might use with a general processor, especially since professors are usually very specialized (a little less so back then, but still a generally true statement). A biologist looking at an astronomer's highly specialized astronomical computer and wondering if it could apply to insect population simulations makes little more sense than a biologist looking at an astronomer's telescope and wondering if that could be generalized to apply to insect populations.
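For reference, the full adder being talked about really is tiny. Chained together it gives a ripple-carry adder, and, as an illustrative sketch of my own shows (this is not the linked diagram, just the standard textbook logic), nothing in it hints at control flow, memory, or programmability:

```python
# One-bit full adder built from boolean logic, chained into a
# ripple-carry adder. Note there is no control flow, no stored
# program, no instructions: widening it to 128 bits gives you a
# bigger adder, not a processor.

def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists; carry ripples along."""
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 5 + 3 as 4-bit little-endian values:
print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # -> [0, 0, 0, 1, 0], i.e. 8
```

Turning this fixed datapath into a Turing-complete machine means adding registers, a control unit, and an instruction cycle, which is the "complete revamp" described above.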

You misunderstood that part of my post. I was pointing to various mechanical devices which "compute" after a fashion and predate what we think of as "computers". I certainly wasn't suggesting someone would hand craft a transistor and somehow graft it onto a Jacquard loom even if those looms were still being used in the early 1900s.

...Yes. That's my point. How would the POD, having earlier point-contact transistors, be of any use? It doesn't matter how much a Jacquard loom can do, transistors + Jacquard loom or other such "computators" (a term I dislike because it's kind of misleading) aren't really going to spur development in computational theory.

Except when it came to a large part of my post. ;)

:rolleyes:

I'm just an engineer whose career has involved getting things to actually work, and the gulf between what the code monkeys in their cubes believe and what actually works in the field can be quite an eye opener.

My educational background is in computer science theory (an important subset of which is computational theory). I've had several classes dedicated to learning about all of this. And the Church-Turing thesis was purely theoretical, not really tied into the practical applications of computers at the time. It's not something where having slightly more advanced hardware would be of any benefit to the mathematicians thinking this up.

I am not a "code monkey." And someone who's just an engineer (what kind? I'm an engineer, too), and not specializing in computer or electrical engineering, doesn't have as much of a background in the theory as someone like me who spent three and a half years studying this stuff.
 
After all, we are talking about a period in which researchers of all types routinely built much of their own lab equipment and thought nothing of it.

And that's fair enough. I've read a little bit about early cyclotrons... Not much, but Feynman brings them up several times in his stories, and has one story centered on the jury-rigged cyclotrons he bumped into. It's fully possible you'd see a transistor in one here and there. I certainly don't deny that! But the expense means it wouldn't be something used with enough frequency to revolutionize computing much, and it almost certainly wouldn't affect computational theory like Church and Turing's theses. Transistors would show up first in certain other electronics where they'd be more immediately applicable (e.g. radios and hearing aids). But not computers.
 
The point was that because a modern computer isn't even an abstraction of a UTM, because of the lack of infinite memory, and because of the slow processor, the Church-Turing thesis isn't immediately applicable to a computer. Especially early computers that have even more speed and memory limitations.


We're getting side-tracked on the theorem here. It's very clear that earlier transistors will not speed up the development of it, so let's just drop it, okay?

How much could you do with those circuits? A surprising amount; six years or so ago, I built a working phone using some very simple capacitors, transistors, and a few other small things.

You can build a surprising number of things. Which is the point I was trying to make.

But can you build a decent computer with a handful of transistors? Not really.

You can't build the kind of computer with anything near the kind of capabilities we unconsciously and routinely assign to computers, but we've had rather primitive machines which have "computed" since the Industrial Revolution kicked off.

If those primitive machines were useful, and cam-operated screw machines are so useful they're still used worldwide, what sort of additional utility could we get from a few transistors?

You can build neat little devices, but building a transistor version of something like the ENIAC? You're not going to have nearly enough to make a difference, especially if you're merely "tinkering" and not mass producing.

What would a few dozen transistors do to Bletchley Park's bombes? Or the Huff Duff sets aboard an RN escort?

If you do have highly specialized machines that are not only cheap enough to be built by a university, and would benefit from a transistor (an important part, since the POD involves transistor development), the architecture wouldn't be anything like an actual computer. It wouldn't be something you'd be able to generalize.

Wouldn't the fact that you couldn't generalize such a machine, despite its utility in the purpose it was built for, provide a slight impetus towards producing a machine which could be generalized?

Think of a simple adder (since I don't know what your "engineering" background entails, the circuit diagram is here). Let's say someone connects a bunch of them together to make a 128-bit electrical adder. Let's even assume they use transistors to do it, to fit in with the OP. Now generalize that adder to be a modern processor. Does that leap really make sense?

How many purposes can a general adder be used for? Do you think telephone companies, among others, might have use for one in their switching facilities? Would you be surprised to learn they had a mechanical version of an adder which was a maintenance queen?

Can you take an adder and, with a few simple changes, make it into a Turing-complete processor? Not really. Making it Turing-complete would require you to completely revamp the whole system. It's nonsensical to compare a little adder with a processor.

I'm not suggesting we build Turing-complete or UTM machines with them. I'm suggesting that, because "primitive" machines in the OTL had surprising computational abilities for a fixed series of tasks, transistors could possibly expand the number of machines with those abilities and/or expand the number of abilities those machines have.

Likewise, it'll be nonsensical to compare the little machines a research professor might use with a general processor, especially since professors are usually very specialized (a little less so back then, but still a generally true statement).

The little machine one professor has might spur another professor to make his own machine. And, with all those little machines purring in all those labs, someone might just start thinking about replacing them all with a universal machine. Or is that too far fetched?

A biologist looking at an astronomer's highly specialized astronomical computer and wondering if it could apply to insect population simulations makes little more sense than a biologist looking at an astronomer's telescope and wondering if that could be generalized to apply to insect populations.

Of course, that makes complete sense.

Naturally, the biologist will see the benefits of the astronomer's machine, think about the type of machine he could possibly use, build it if he's able, and then we're right back to the situation I mentioned above: hundreds of specially built machines all being used for special purposes and someone saying "Hmmm..."

It doesn't matter how much a Jacquard loom can do, transistors + Jacquard loom or other such "computators" (a term I dislike because it's kind of misleading) aren't really going to spur development in computational theory.

I'm not interested in computational theory. I'm interested in people using more devices with computational abilities because transistors might make some of those devices relatively more widespread. Practice far more often outstrips theory than the other way around.

I don't like the term "computator" either but I've been using it in an attempt to differentiate computers from machines with computational abilities. The term "computers" is weighed down with too many assumptions. I had hoped that avoiding "computer" and using "computator" would help avoid those assumptions.

I'll use the Jacquard loom as an example because you're hung up on it. That loom had computational abilities. It was programmed via tapes or cards to produce different woven patterns, but just what it could be programmed to do was severely limited by its physical nature. You couldn't feed it a set of cards and have it calculate asteroid orbits, but you could "hack" it to produce weaving patterns no one had even conceived of when the machine was first constructed.

What I'm trying to point out to the "straights" here is that a computational "scale" exists. The UTM is all the way at one end and a device which cannot compute at all, a "rock", is all the way over at the other end. When we say "computer" in 2011, we're assuming a device which is much closer to the UTM than the "rock". However, there existed and still exist devices which can perform more computations than the "rock" but nowhere near the level of computations performed by a computer or UTM. The Jacquard loom was one, screw machines are another, and there are thousands of other examples, both current-day and historical, each sitting somewhere along that computational scale.

Fire control systems aboard warships had computational abilities which grew in size and sophistication. How could a handful of transistors change that development? Telephone switching systems had computational abilities which grew in size and sophistication too. Again, how could transistors change that? Ditto encryption/decryption machines. Ditto frequency hopping radio transmitters. Ditto radar, sonar, direction finding, bomb sights, and avionics. The list for military applications alone is nearly endless.

We don't need to build anything that remotely resembles what we think of as a "computer" for transistors to help speed the development of machines with computational abilities. That's one of the things I was suggesting.

I was also suggesting that the existence of more machines with computational abilities could spur the development of "true" computers. With practical, everyday uses all around and more equipment to "tinker" with, there would be less need for many of the equipment-less "thought experiments" of the OTL to spur development. Your excellent explanations about the Entscheidungsproblem mean that idea is not plausible.
 
And that's fair enough.


You posted while I was writing. I now know I was able to get my main point across despite my piss poor explanations.

Transistors would show up first in certain other electronics where they'd be more immediately applicable (e.g. radios and hearing aids). But not computers.

Agreed, not computers because of the Entscheidungsproblem among other things.

There could be more devices with computational abilities, however; devices slightly further "along" the computational scale I mentioned than they would be in the OTL.
 