I’ve got round to reading the New Scientist’s 60th anniversary issue, published in November, which tries to look forward in the general direction of 2076. There are 14 short “What If…” essays, on everything from “What if we engineer new life forms?” (we’ll need a ‘kill’ switch) to “What if we found a theory of everything?” (it’s a very slow train coming) to “What if we discover room-temperature superconductivity?” (it would utterly transform our energy systems).
In this post I’m going to review some of the essays on themes that futurists spend more time on, and pull out some of the ideas.
1. What if we create human-level artificial intelligence?
Toby Walsh, who’s a professor of AI at UNSW Australia, starts by saying that in line with other researchers, he thinks we’re 30-40 years away from AI achieving superhuman intelligence. But he’s sceptical of a singularity, for a number of reasons I haven’t seen rehearsed as clearly elsewhere.
- The “fast-thinking dog” argument: “Intelligence depends on … years of experience and training. It is not at all clear that we can short-circuit this in silicon simply by increasing the clock speed or adding more memory.”
- The anthropocentric argument: “The singularity argument supposes human intelligence is some special point to pass, some sort of tipping point… If there’s one thing we should have learned from history, it is that we are not as special as we would like to believe.”
- The “Diminishing Returns” argument: “The performance of most of our AI systems so far has been that of diminishing returns. There are often lots of low-hanging fruit at the start, but we then run into difficulties looking for improvements.”
- The “limits of intelligence” argument: “There are many fundamental limits within the universe. … Any thinking machine that we build will be limited by these physical laws.”
- The “Computational Complexity” argument: “Computer science already has a well-developed theory of how difficult it is to solve difficult problems. There are many computational problems for which even exponential improvements are not enough to help us solve them practically.”
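Walsh’s complexity point can be made concrete with a toy calculation (my sketch, not from the article): for a brute-force algorithm whose running time grows as 2^n, even a millionfold hardware speedup only lets you handle about 20 more items.

```python
# Toy illustration of the computational-complexity argument (my own
# sketch, not from Walsh's essay): for a brute-force 2**n algorithm,
# a millionfold hardware speedup barely moves the feasible problem size.

import math

def max_feasible_n(ops_per_second, budget_seconds=3600):
    """Largest n such that 2**n operations fit in the time budget."""
    return int(math.log2(ops_per_second * budget_seconds))

slow = max_feasible_n(1e9)    # a ~1 GHz machine, given one hour
fast = max_feasible_n(1e15)   # a million times faster, same hour

print(slow, fast, fast - slow)  # prints: 41 61 20
```

A million times more computing power buys only 20 extra items of problem size, which is the sense in which “even exponential improvements are not enough”.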
Walsh concludes, however, that even without singularity AI will have a large impact on the nature of many jobs, and a significant impact on the nature of war. “Robots will industrialise warfare, lowering the barriers to war and destabilising the current world order.” The answer: we’d “better ban robots in the battlefield soon.”
Walsh has a book on AI out later this year.
2. What if we crack fusion?
The joke about nuclear fusion, for as long as I can remember, is that it’s always 50 years away. That might be changing, though perhaps not. In 2035, if everything goes to plan, the ITER research project is scheduled to produce 500 megawatts of energy “for a few seconds,” which would make it the first fusion reactor to produce more energy than it consumes.
Even if that succeeds, there are still significant technical problems. And as Jeff Hecht notes in his article, even if these are overcome, it seems that nuclear fusion won’t be the “too cheap to meter” energy that we were promised in the 1950s. Fusion reactors are vastly expensive to build, even if operating costs are modest. Nor are they carbon neutral, because of the carbon costs of construction, fuel production and waste management, and there is also radioactive waste to deal with, although the decay time is decades rather than millennia. The nature of the technology and its cost base also mean that even if it works, fusion will still be used for baseload power; peaks may have to be managed through renewables and storage. But it seems just as likely that come 2076 nuclear fusion will still be 50 years away.
3. What if we re-engineer our DNA?
Michael LePage has a little 2021 scenario in which a Japanese boy is born to an infertile father after fertility specialists have played with his DNA using CRISPR genome editing. More follow elsewhere in the world, depending on local regulation and cultural attitudes. Why would parents opt for genome editing rather than cheaper pre-implantation genetic diagnosis (PGD)? Because germline genome editing can make dozens of changes at the same time, rather than a few.
And why stop there? There are beneficial gene variants that make people immune to HIV or less likely to become obese, for example. Perhaps as soon as the 2030s, some countries may allow these variants to be introduced…
[G]enome editing can definitely make individuals less prone to all kinds of diseases. And as it starts to become clear that genome-edited children are on average healthier than those conceived the old-fashioned way, wealthy parents will start to opt for genome editing even when there is no pressing need to do so.
On the other hand, we likely won’t be gene editing to improve personality or intelligence: “we have yet to discover any single gene variant that makes anything like as much difference to IQ as, say, having rich parents or a good education.”
LePage’s 60-year projection: states will pay for genome editing for public health reasons, because the savings on lifetime health costs will far outweigh the cost of the treatment.
4. What if we end material scarcity?
This future is hard to imagine, writes Sally Adee, because scarcity is the basis of our current dominant economic system. But some people have started on this: Jeremy Rifkin, for example, in The Zero Marginal Cost Society, which describes a world where the cost of producing each additional unit of anything is all but zero. In the future, in other words, everything will look like the current music and publishing industries.
The critical technologies are fabrication devices that are highly sophisticated versions of our present 3D printers. Within 60 years’ time, these could be molecular assemblers (Eric Drexler’s phrase), working at the nano scale, which could “produce any substance you desire. Press a button, wait a while, and out come food, medicine, clothing, bicycle parts or anything at all, materialised with minimal capital or labour.”
Rifkin thinks that fabricators will be the engines of a sharing economy, in which access replaces ownership; “purchases will give way to printing.” Rifkin thinks that within 20 years “capitalism… will share the stage with its child.” In this future, says Adee, “You will have a job, but not for money. The company you work for will be a non-profit. Your ‘wealth’ will be measured in social capital: your reputation as a co-operative member of the species,” although it’s worth remembering that Cory Doctorow has visited this future (pdf) and it didn’t turn out well.
Your reputation points? They might go on an antique chair that wasn’t built by a fabricator, which might be a sign of status in such a world.
5. What if we put a colony on Mars?
The first set of non-trivial problems, according to Lisa Grossman, is that settlers would need to launch from Earth everything they need to set up the first base: “tonnes of life-support equipment, habitats, energy-generation systems, food and technology for extracting breathable oxygen and drinkable water from the air.”
The second set of problems: The alignment of the planets means that although the shortest journey time is around five months, we’ll only get 22 opportunities for that short journey between now and 2060. Landing on Mars is also tricky because of the combination of gravity and thin atmosphere: the heaviest craft that’s landed successfully is the 1-tonne Curiosity rover.
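The scarcity of those short journeys follows from orbital mechanics: favourable Earth–Mars launch windows recur once per synodic period, which can be estimated from the two planets’ orbital periods. A back-of-the-envelope sketch (the orbital periods are standard values, not figures from the article):

```python
# Back-of-the-envelope estimate of Earth-Mars launch windows
# (standard orbital periods; not figures from Grossman's article).

T_EARTH = 365.25   # days per Earth orbit
T_MARS = 687.0     # days per Mars orbit

# Synodic period: how often Earth "laps" Mars, i.e. how often the two
# planets return to the same relative alignment for a short transfer.
synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)   # ~780 days, ~26 months

years = 2060 - 2016
windows = years * 365.25 / synodic_days

print(round(synodic_days), round(windows))  # prints: 780 21
```

That lands close to the 22 opportunities the article cites; the exact count depends on the start date and on how favourable an alignment has to be to qualify as a window.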
The third: It’s quite a hostile place: “high levels of radiation, the threat of solar flares, dust that covers solar panels and could rip through lungs like shards of glass, and temperatures as low as -125℃.”
In short: “there is nothing to do there except to try not to die… The first settlers will be dependent on the home world for a very long time.” But hey: the settlers will be in constant communication with Earth. We will be able to watch them succeed or fail almost in real time.
6. What if we have to rescue the climate?
Hoovers and sunshades. We’ll have turbines that suck CO2 out of the atmosphere and ships dumping minerals into the sea to reduce acidification, but that’s just the start of it, according to Catherine Brahic. Because up in the atmosphere (10 to 18 kilometres up) we’ll have a fine spray of particles to shield Earth from the sun and keep us cool.
While it sounds manageable in theory, we don’t really understand it. The best researched approach involves spraying fine particles of sulphate into the atmosphere, but this creates regional winners and losers. Northern Europe, Canada and Siberia would remain warmer, the oceans cooler; there would also be regional rainfall effects, with monsoons potentially drying up.
So the whole thing needs some kind of global or multilateral council to arbitrate. And the sunshade needs to be replenished constantly: if we stopped spraying (because of an international disagreement, say), temperatures would climb within a decade or so to where they would have been without geoengineering.
7. What if there’s a nuclear war?
TL;DR? It’s bad, really bad.
Even a regional nuclear war has terrible results. For example, if India and Pakistan let off half of their relatively small nuclear stock (or a hundred Hiroshima-sized bombs), according to simulations by Alan Robock and Michael Mills, quite apart from the millions of deaths on the sub-continent, “the fires would send about 5 million tonnes of black smoke into the stratosphere, where it would spread round the world. This smog would cut solar radiation reaching Earth’s surface by 8 per cent–enough to drop average winter temperatures by a startling 2.5 to 6℃ across North America, Europe and Asia,” for five years to a decade. As Fred Pearce writes, the Asian monsoon would collapse, destroying Asia’s water system; much of the ozone layer would be removed; and near ice-age temperatures would shorten growing seasons catastrophically. In short,
Nuclear winter would deliver global famine.
And that’s just from a regional nuclear war.
It’s worth ending with a couple of notes from New Scientist’s editor-in-chief, Sumit Paul-Choudhury, in his introduction to the whole section, looking back 60 years as well as forward.
The internet, global warming, artificial intelligence and genetic engineering were all on our radar in 1956. But our ideas about how they might pan out bore little resemblance to how they have actually evolved, particularly when it comes to their social ramifications. Ubiquitous information has not created rationalist utopias, ecological catastrophes have not culled our population, and neither have super-human machines nor people, although we’re getting there.
Although the tone of the introduction is over-interested in “prediction”, Paul-Choudhury carries that scepticism into looking forward as well.
Linear extrapolation inevitably fails: it’s the kind of thinking that leads people to jokily ask, “Where’s my jetpack?”, a question borne of post-war trends in transport and the space race–none of them relevant today… prediction and extrapolation are of limited use: fine up to a point if you need to place semi-conductor orders, perhaps, but not so much if you want to work out how semiconductors are changing society.
The image at the top of this post is by Andrew Curry, and is published here under a Creative Commons licence.