I’m serious. Name one thing in computing that showed up after 1978 that wasn’t either an incremental improvement on a pre-1978 technology or a crappier but cheaper version of a pre-1978 technology. I’m not trying to produce sophistry here. There’s a huge difference between the level of novelty of original research in computing tech during 1940–1980 and the level of novelty after 1980, & it relates directly to economics.
From 1940 to 1980, computer science was being done by folks with doctorates & experience in other disciplines, funded by government money to do pure research & moonshot shit, especially ARPA ed-tech funding that started in the wake of Sputnik. When that funding dried up, so did productivity in original research, because the ability to stay employed came to depend on profitability in a consumer market (which means racing to market… which means avoiding risky detours).
The exact same people have drastically different productivity levels under the two models. Kay at PARC in the 70s went from having seen the Sketchpad demo to having a complete, functioning, live-editable GUI with network transparency within ten years, because of government ed-tech money. In the early 80s, Kay moved to Atari & tried to continue the kind of work he had been doing. He got laid off, went to Apple, and got laid off again. The work he started in the 70s has been treading water ever since, because short-term profits can’t support deep research.