AHC: Terminals and mainframes instead of desktop PCs

How can we get a world in which the evolution of home computers took a different direction?

Perhaps after a brief era of 80s 8-bit home computers, like the Commodore 64, a radically different 90s and 2000s emerge:

Instead of the rapid spread of 16 and later 32-bit IBM-compatible desktop PCs for home use, average home computer users get a "dumb" terminal, essentially a monitor, a keyboard and perhaps later a mouse, with almost no processing power, which communicates with a centrally located mainframe computer that does all the hard work. This could be through a dedicated line through most of the 90s, but it could eventually be routed through the internet by the early 2000s, when broadband internet emerges.

In this ATL, 2015 computers are at first sight capable of the same things as OTL desktop computers, but on closer inspection they turn out to be essentially a monitor, keyboard and other peripherals connected to a Raspberry Pi-esque circuit board, which communicates over the internet with a mainframe that does all the processing.

How can we get a world like this?
 
You can't, a central computer is too expensive for anyone but the rich, and too laggy if kept off the property.
 
I think you need a technological POD that radically changes the price/performance curve of computers.

With microprocessor CPUs the curve is essentially logarithmic: past the optimal price/performance point, performance increases linearly while price increases exponentially.

Microprocessors basically reverse the scale economics of CPUs. With high-bandwidth networks one may combine many microcomputers into warehouse-scale clusters to serve millions of personal devices. Edit: Mind you, the effective compute capability of two networked computers is still less than the sum of the two (but a computer with 2x the compute power will cost you 10x as much).

One idea: early development of extremely pricey, extremely high-performance (currently sci-fi) quantum computers. Each qubit costs a constant amount but doubles processor performance, so one processor with n+1 qubits delivers as much as two n-qubit processors combined, with none of the networking overhead.
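A toy sketch of that scaling argument (the 2**n performance-per-qubit figure and the 20% networking overhead below are illustrative assumptions, not real benchmarks):

```python
# Toy model of the quantum-vs-cluster scaling argument above.
# Assumption (illustrative only): a single n-qubit processor delivers 2**n
# units of "performance", and networking machines together loses some
# fraction of the total to overhead.

def quantum_perf(n_qubits: int) -> int:
    """Assumed performance of a single n-qubit processor."""
    return 2 ** n_qubits

def cluster_perf(n_qubits: int, machines: int, overhead: float = 0.2) -> float:
    """Assumed performance of several networked n-qubit machines."""
    return machines * quantum_perf(n_qubits) * (1 - overhead)

n = 10
print(quantum_perf(n + 1))   # 2048 -- one (n+1)-qubit processor
print(cluster_perf(n, 2))    # 1638.4 -- two networked n-qubit processors
```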
 
How can we get a world like this?
The computer industry is trying very hard to move us in that direction today with things like cloud computing.

I think an important step would be much earlier development of networking infrastructure. By the time home computers crop up, the server/client model is entrenched in defence, research and commercial applications. A computer in your house would offer less capability than just hooking up to a computer service provider's network at greater cost and inconvenience.

Only people wanting to perform low-latency tasks like computer games would pay for an in-home computer. The desktop terminal would be used for 'serious' work, the home computer would essentially be an equivalent of OTL's games consoles.
 
Do you want to go back to the dark ages of dumb terminals and mainframes? Fear not: phones/tablets and cloud computing are meant for exactly this purpose.
 

Delta Force

The economics of computing should favor decentralization at some point. Electricity systems are centralized and have seen the cost of power decrease by a small percentage each year. Now there is talk of moving to a decentralized approach. In contrast, computer costs fall through much faster reduction cycles of 41.5% per year.

The paradox is that economics should lower costs such that a centralized approach can become decentralized. I don't see how economics can change such that it makes more sense to go from a decentralized system to a centralized one if it originally cost less to be decentralized and system costs are still falling. Presumably communication costs have fallen by more than 41.5% per year, because communication would be the major bottleneck for a centralized computing system. Alternatively, it might be that consumer communication systems weren't developed enough to allow a centralized approach to be used outside of offices, and now that those networks exist it's actually the better approach.

It would appear that you either need to have better communication systems outside of offices or have communication costs fall faster than 41.5% per year for centralization to make sense.
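For scale, here is the quoted 41.5%-per-year decline compounded over a decade (the percentage is the figure cited above; the code just compounds it):

```python
# Compound the quoted 41.5%/year decline in computing costs over ten years.
cost_index = 100.0       # arbitrary starting cost index
annual_decline = 0.415

for year in range(1, 11):
    cost_index *= (1 - annual_decline)
    print(f"Year {year:2d}: cost index {cost_index:7.2f}")

# After 10 years the index is about 0.47, i.e. computing is roughly 200x
# cheaper -- communication links would have to get cheaper at a comparable
# rate for centralization to stay attractive.
```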
 
Bandwidth couldn't support centralized computing until, oh, the last few years. Contrast the speed of an old dial-up modem to a CPU in 1988 or 1998 and there is no comparison. Most of those mainframes and minicomputers were tied to local networks on company sites, running over institutional infrastructure rather than residential lines. Try running Adobe Photoshop or MS Excel over a modem back in the day. Even if you could create the interface, it would not have worked well.
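A rough back-of-the-envelope illustration of the point (the screen size and modem speeds are typical period values, not figures from this thread):

```python
# How long would one uncompressed full-screen update take over dial-up?
width, height, bits_per_pixel = 640, 480, 8     # typical early-1990s VGA mode
frame_bits = width * height * bits_per_pixel    # 2,457,600 bits per frame

for name, bits_per_second in [("2400 bps", 2400), ("14.4k", 14_400), ("28.8k", 28_800)]:
    seconds = frame_bits / bits_per_second
    print(f"{name:>9}: {seconds:6.1f} s per full redraw")
# 2400 bps ~1024 s, 14.4k ~171 s, 28.8k ~85 s -- versus the effectively
# instant local redraws a Photoshop or Excel user expects.
```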
 

marathag

How can we get a world in which the evolution of home computers took a different direction?

Perhaps after a brief era of 80s 8-bit home computers, like the Commodore 64, a radically different 90s and 2000s emerge:

Instead of the rapid spread of 16 and later 32-bit IBM-compatible desktop PCs for home use, average home computer users get a "dumb" terminal, essentially a monitor, a keyboard and perhaps later a mouse, with almost no processing power, which communicates with a centrally located mainframe computer that does all the hard work. This could be through a dedicated line through most of the 90s, but it could eventually be routed through the internet by the early 2000s, when broadband internet emerges.

Dumb terminals lost out because, at 300 baud, there were limits. Many people can type faster than 300 baud could update the screen, and there were no graphics besides ASCII.

Even 1200 baud modems were taxed by simple block graphics.

Dial-up dumb text terminals that can make a 'beep' will lose out to Amigas doing full graphics and sound in 1987.
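A quick sketch of what those baud rates meant for a plain 80x24 text screen (assuming the usual ~10 bits on the wire per character for start/stop framing):

```python
# Time to repaint a full 80x24 text screen over a slow modem.
chars_per_screen = 80 * 24   # 1920 characters
bits_per_char = 10           # 8 data bits plus start/stop framing

for baud in (300, 1200):
    chars_per_second = baud / bits_per_char
    seconds = chars_per_screen / chars_per_second
    print(f"{baud:5d} baud: {chars_per_second:5.0f} chars/s, {seconds:5.1f} s per screen")
# 300 baud:  30 chars/s, 64 s per screen
# 1200 baud: 120 chars/s, 16 s per screen
```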

Then you get into long distance charges, unless you were in a large metro area.
CompuServe was still charging $10/hour on top of that.

A dedicated 56K line back then cost several hundred dollars a month.

The Ur-Internet back then was all text-based services and slow FTP transfers.


In 1990 nearly all modems were still 2400 baud, but things started to move: 14.4k in 1994, then 28.8k in 1996, and the internet took off along with the 66 MHz Pentium PCs that replaced the 486s.


No, PCs will continue, Big Iron will fall, only to be reborn for the Internet in the 2000s.
 
"Instead of the rapid spread of 16 and later 32-bit IBM-compatible desktop PCs for home use, average home computer users get a "dumb" terminal, essentially a monitor, a keyboard and perhaps later a mouse, with almost no processing power, which communicates with a centrally located mainframe computer that does all the hard work"

I thought that was called an iPad *ducks* ;-)
 

Perkeo

An actual PC is hardly more difficult to build than a terminal, and a dial-up connection fast enough to ship all output from the mainframe to the local terminal at speeds competitive with a state-of-the-art PC was well beyond what was technically feasible - let alone affordable - at any time.

There are technical reasons why computers went local.
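One way to put a number on that gap (the resolution and refresh rate are representative, and the comparison ignores compression):

```python
# Bandwidth a local PC display consumes vs. what dial-up ever offered.
width, height, bits_per_pixel, refresh_hz = 640, 480, 8, 60
local_display_bps = width * height * bits_per_pixel * refresh_hz   # ~147 Mbit/s

dialup_bps = 56_000   # the fastest consumer modems ever reached

print(f"Local display: {local_display_bps / 1e6:.0f} Mbit/s")
print(f"Dial-up:       {dialup_bps / 1e3:.0f} kbit/s")
print(f"Shortfall:     ~{local_display_bps / dialup_bps:,.0f}x")   # roughly 2,600x
```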
 

Delta Force

Keeping in the spirit of alternate history, I've found two threads where I looked at something similar to this in two very different contexts that might make centralization more feasible.

The first is avoiding the 1956 court ruling that barred AT&T from entering the computer market. If not for that perhaps AT&T could have built its own system in a manner akin to how the electricity system or ARPANET itself developed. AT&T could team up with IBM or another firm in a joint venture to provide centralized computing, with AT&T leasing terminals connected to mainframes in a local server farm and/or the office building. Customers would purchase mainframe computing cycles, and they could also purchase long distance communication options such as electronic mail service.

The other option is having the Soviet Union decide to embark on its plans for computerized communism.
 

marathag

The first is avoiding the 1956 court ruling that barred AT&T from entering the computer market. If not for that perhaps AT&T could have built its own system in a manner akin to how the electricity system or ARPANET itself developed. AT&T could team up with IBM or another firm in a joint venture to provide centralized computing, with AT&T leasing terminals connected to mainframes in a local server farm and/or the office building. Customers would purchase mainframe computing cycles, and they could also purchase long distance communication options such as electronic mail service.

In 1962, a 2400 baud full duplex modem cost as much as a house. A big one.

Then you have what Bell charged for the dedicated line every month.

The only faster links in the country were those connecting the paired computers at the SAGE air defense command nodes to each other.


The 'affordable' part of data networking came from the wire services: '66-speed' circuits that could run over most existing lines, which worked out to 50 baud for teletype news printers, or 66 words per minute.
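That 66 wpm figure checks out from the line rate, assuming standard Baudot framing (5 data bits plus start and 1.5 stop bits, ~7.5 bits per character, and 6 characters per word counting the space):

```python
# Sanity-check the 50 baud -> 66 words-per-minute teletype figure.
baud = 50
bits_per_char = 7.5    # 5-bit Baudot code plus start bit and 1.5 stop bits
chars_per_word = 6     # five letters plus a space

chars_per_minute = baud / bits_per_char * 60    # ~400 characters per minute
print(f"{chars_per_minute / chars_per_word:.0f} wpm")   # ~67 wpm
```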

'Big Iron' mainframes were very expensive.

But they were still cheaper than long-distance timesharing, until the late '60s.
 
Keeping in the spirit of alternate history, I've found two threads where I looked at something similar to this in two very different contexts that might make centralization more feasible.

The first is avoiding the 1956 court ruling that barred AT&T from entering the computer market. If not for that perhaps AT&T could have built its own system in a manner akin to how the electricity system or ARPANET itself developed. AT&T could team up with IBM or another firm in a joint venture to provide centralized computing, with AT&T leasing terminals connected to mainframes in a local server farm and/or the office building. Customers would purchase mainframe computing cycles, and they could also purchase long distance communication options such as electronic mail service.

The other option is having the Soviet Union decide to embark on its plans for computerized communism.

Since you love technology, I suggest you read Clayton Christensen's Innovator's Dilemma. You will quickly realize why AT&T and IBM were probably the companies least likely to develop an internet.
 
You can't, a central computer is too expensive for anyone but the rich, and too laggy if kept off the property.

That is one of the reasons universities & other institutions moved away from the central processor/satellite terminal model. I remember Purdue University closing its satellite terminals in the early 1980s as desktop machines appeared. I also recall how everyone thought it so cutting edge that all the desktop machines on campus could in theory be interconnected so data or files could be exchanged directly without carrying floppy disks around. Not all were, but the vision of the entire campus connected in a single web was in reach. Real visionaries were talking about a nationwide web, but that seemed really SciFi.
 

Delta Force

Telephone switching stations didn't even become electronic until the mid-1960s, and the microprocessor wasn't invented until the early 1970s. Interestingly, while the MP944 was developed by Garrett AiResearch for the Grumman F-14 Tomcat, the Intel 4004 and Texas Instruments TMS1000 were developed for calculators. All three programs started in the late 1960s. Perhaps a communications company could request the development of a microprocessor for its switching stations? That would lower communication and computer costs while improving the performance of both.
 