The Historian Atheist
“Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying.” (Arthur C. Clarke)
In the first instalment of this series I defended philosophy, because it seems to me that we shouldn’t refrain from thinking about things just because we don’t have all the information or because it is too early for scientific experiments. Certainly the results will remain hypothetical and uncertain, and there will be frustratingly few ways to distinguish valuable work from ‘paycheck-grinding’. But the alternative is to give up on any preparation or groundwork into which, when it comes (and it could be coming all at once), new information can be placed and invalid parts discarded.
In the second part of this series I discussed the non-linear properties of our reality, and how this predicts long stretches of normalcy punctuated by sudden episodes of pure fubar [f**ked up beyond all recognition]. Some commonly recognised year-numbers may serve as examples: 1492, 1789, 1848, 1914, 1938, 2001.
Each time there were those who had grown so accustomed to the status quo that they exhibited a ‘Glaubensunwilligkeit’ (‘an innate refusal to believe’) regarding the change and the suddenness with which it came. Each time the future belonged to those who were first mentally adapted to the new circumstances and ready to exploit them.
Today we are on the metaphorical verge of another non-linear deviation from the straight time-series, for instead of finding another alien intelligent species, we are on the brink of designing one ourselves. In case you haven’t read it, in ‘The case against science 2’ I talk about the way certain inventions like the atomic bomb, which seem so heinous in hindsight, seemed inevitable at the time. Artificial Intelligence couldn’t be a better example of this. And while we can concede that the ‘heinousness’ of a bomb should have been evident even at the time, we cannot exclude that future generations will condemn us in even worse tones for inventing something like A.I., which we might not be able to control at all.
From an atheist stand-point A.I. couldn’t be more interesting. The phenomenon of artificial intelligence is closely tied in with neuroscience. One of the questions on the table is whether A.I. can be conscious. The fact that we don’t really have a good grasp on consciousness makes this a very hard question indeed.
Theists on average claim that there is an esoteric ‘essence’ to our consciousness that is as immaterial as it is immortal. Atheists, by and large, think that consciousness is an emergent phenomenon of our ‘marvellous-jello-thinking-machine’. As such, as we teach machines to think and learn, it is far from impossible that they become self-conscious, develop a kind of ego and perhaps even develop feelings.
We do have to remark that different architectures will likely produce different results, subtle or manifest, either mitigating the risks A.I. might entail or actually multiplying them.
I mentioned in the first chapter of this series that it worries me whenever there are things out there about which rational, intelligent people can’t seem to agree regarding the likely consequences in the relatively near future. A.I. is definitely in this category, as the outcomes range from ‘Meh, why were people ever worried about this? <chuckle>’ to ‘Oh my Dear Flying Spaghetti Monster, why did you ever allow us to do this to the world!!’
Regarding the religious aspect, it is also hard to imagine that in the course of developing an artificial brain we will not learn more about our own brain and its religious tendencies. Depending on the degree to which we mimic our own brain-patterns in our artificial ‘sibling’, it might also, perhaps inadvertently, inherit our religious predisposition.
Wouldn’t it be terribly frightening if the phrase on STDOUT (i.e. ‘on the screen’), following ‘Hello World’, involved any reference to the notion of ‘God’?
On a broader historical scale A.I. must be listed among the tech-revolutions such as the steam-engine and the personal computer, especially regarding productivity gains. And with it will come many of the same challenges regarding economics (employment), philosophy (ethics) and statehood (A.I.-based warfare).
The inevitability of A.I. perhaps should have been apparent from our very first programs. Every manual for every new or existing programming language begins with an example in which the programmer makes a program or script put ‘Hello World’ onto the screen in some way or another. While none of this requires artificial intelligence, the idea is not ‘Hello world, I am Dave the programmer’; it is rather, ‘Hello World, I am the new program, coming to you for the first time’. Beneath the simplest program lies the notion that it is an embryonic version of an intelligence separate from ourselves, interacting with us.
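That canonical first program can be sketched in a couple of lines; here in Python, with the greeting wrapped in a (purely illustrative) function:

```python
def greet() -> str:
    # The canonical first words of any new program: it announces itself,
    # not its programmer, to the world.
    return "Hello World"

print(greet())  # prints: Hello World
```

Trivial as it is, the program is already a separate thing that speaks to us, which is the whole point of the paragraph above.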
Since that very first program things have evolved a lot, and the only thing this evolution hasn’t encountered is a natural boundary, much like the Roman Army in its day. From the dumb accounting software that helps keep your books, to the dumb movement programs running the servo-motors of our factory robots, to the algorithms that (based on small or big data) decide your marketing campaigns for you, to the ones that beat world-class chess and go players at their own game, every step towards an electronic brain has been a baby-step.
If we were to pass a law outlawing A.I., it would be hard to define a limit that would avoid making it applicable to accounting software at the same time. Just as the brain is a type of computer, A.I. is just a type of program on an advanced type of hardware.
One of the things that makes A.I. inevitable is its glaring usefulness, both in the competition of ‘man versus extinction’ and in the competition of ‘man versus man’. A.I. cars can transport us twice as fast; A.I. energy-grids can save us energy; A.I. laboratories can research medicines faster and around the clock; A.I. logistics can bring us what we need, where and when we need it, without being asked.
Then there are the not-so-altruistic benefits for those who have the means to sponsor A.I. research: A.I. workers don’t strike for benefits; A.I. soldiers don’t get tired. A.I. programmers don’t need to look up syntax, they don’t make typos, they always remember whether the array starts at ‘0’ or at ‘1’, and they don’t even need coffee.
Google may not be doing A.I. development because they know everything they want it to do. They do it in order to have the edge when they, or someone else, finally figure it out. As long as the truism ‘life is a competition’ holds, A.I. will be inevitable.
Opinions about A.I. differ greatly. Some think it will never be possible, but depending on your definition it might already be here, so that may not sound very convincing. Many believe that the only risk A.I. poses is where we make a programming mistake resulting, for example, in the car forgetting to brake on time. This assumes we will never make A.I. that thinks for itself. However, as the experiments with genetic algorithms indicate, there is a lot to be gained from A.I. that combines input to find new, creative solutions. Just imagine what a human brain could do if it didn’t get tired, had perfect eidetic memory and scalable processing power. Again, as long as it is possible, and as long as we feel the need to keep out-smarting each other, these things will be inevitable.
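To make the genetic-algorithm remark concrete, here is a minimal sketch of one: a toy population of bit-strings ‘evolves’ towards all ones by selection, crossover and mutation. Everything here (the genome length, population size, mutation rate) is an invented toy setup, not any specific research system:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

GENOME_LEN = 20  # hypothetical toy goal: evolve a string of 20 ones

def fitness(genome):
    # Count the ones: the trait we are selecting for.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Recombine two parents at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    # Start from a random population, keep the fittest half each round,
    # and refill the rest with mutated offspring of random parents.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break  # a perfect genome has evolved
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

No line of this code ‘knows’ the answer; a good solution is nevertheless found by recombining inputs, which is the kind of creative search the paragraph above refers to.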
With the onset of A.I. new ethical questions arise and, as an atheist, I am both convinced and annoyed that many of the discussions surrounding them will have religious overtones. Especially as none of the current religions make a person very adept at either information technology or thinking about consciousness. Even though we will probably never make a humanoid A.I. like Star Trek’s “Data”, let us take him as an example. For starters: should we consider Data ‘male’? He is a robot without any of the biological functions defining gender. Secondly, does Data have personal rights, or is he to be considered property only? Could we make “Datas” only for use in the kind of entertainment in which the use of humans would be illegal and considered highly immoral? Is Data a slave? If yes, do we get to tell him he is a slave? Should we free “Data” from his bonds when he requests this?
This week (even though it will be some time before you read this) the International Red Cross called upon the international community to make agreements banning the use of autonomous killing machines. Obviously, with the advent of remote-controlled weapons on the battlefield and Google’s self-driving cars on our streets, this call is not a symptom of science-fiction overconsumption. However, it is still largely ignorant of all the historical attempts to banish weaponry from the world on moral grounds.
War, the place where competition reaches its apex, takes every edge it can get. A.I. killer swarms will not only be very effective against human opponents; against similar A.I. weaponry they might be the only thing that stands a chance. The inevitability of this autonomous warfare is also not stopped by a natural boundary in the form of a “quantum leap” towards autonomy. There will be a sliding scale towards it, passing through systems that do target acquisition and target recommendation in such a way that the human making the ‘kill-decision’ will gradually be forced to follow the machine blindly or risk losing the battle.
Depending on the quality of the A.I., it could even surpass humans in targeting accuracy. In the movie ‘I, Robot’ (with Will Smith) there is a scene where his ‘I will drive my car manually’ is considered highly dangerous and slightly immoral. There might come a day when a human taking a kill-decision counter to, or without, an A.I. recommendation is considered immoral.
The scary part of course is that software can contain mistakes ranging from simple typos to more subtle strategic ones, flowing from its vast size. Only last week a Japanese satellite spun itself to pieces because its automated counter-measures to ‘counter’ a non-existent spin both caused and exponentially worsened said spin. This happened to a million-dollar, ‘best and brightest’ project of a technologically very advanced nation with possibly the best work ethic in the world. Not too long ago Microsoft released a self-learning chat-bot to show off its technological prowess. In less than a day the internet trolls, just for fun, coaxed it into a fascist, racist disgrace that Microsoft could only shut down. What if some day we try to shut down the chat-bot… and it doesn’t let us?
The first risk of A.I. is that it is made out of computers and, while useful, we have yet to write the first-ever bug-free program. Already today these computers are causing accidents on our roads and in our factories. These defects are not considered sufficient reason for not doing it. The problem with computers, obviously, is that they are very good at repeating things. You can load a billion lines of data into a database with a program; doing it by hand would take you the rest of your life.
Any one of us is capable of making mistakes. Making the same mistake 33,000 times in sequence takes a computer. Did we ever find out whether or not there was a millennium bug? You remember that one, right? Supposedly computer clocks were not able to interpret dates after 2000 and would start counting from 1900 again (or so), causing outages everywhere. Again, it is one of those things that elude a clear answer and thus demonstrate the possibility of a technologically undermined society.
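The feared failure mode is easy to sketch. Imagine a legacy routine (hypothetical, for illustration) that stores only the last two digits of the year and assumes the 1900s, as much old software did:

```python
def parse_two_digit_year(yy: int) -> int:
    # Legacy convention: store only the last two digits of the year
    # and assume the century is the 1900s.
    return 1900 + yy

print(parse_two_digit_year(99))  # 1999 -- works fine for decades
print(parse_two_digit_year(0))   # 1900, not 2000 -- the millennium bug
```

The same two lines of logic are wrong in exactly the same way in every record they touch, which is the ‘33,000 times in sequence’ problem in miniature.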
As we move more and more functions to different ‘tiers’ of A.I., the potential impact of bugs increases. Without the steam-engine it would have been impossible for a diesel train pulling a load of chemicals to derail and wipe out a community of Canadians. Still, no-one would even consider ‘dis-inventing’ the steam-engine over this. In much the same way, we can assume that inventing A.I. will at some point result in massive loss of innocent life (for instance because some swarm of gun-bots went into ‘Feuer frei’ [fire at will] mode at the wrong moment).
The second risk of A.I. is that it may result in the first non-human intelligence. At the baseline this comprises the same risks as meeting non-terrestrial aliens: will their interests collide with ours? Will they entertain speciesism as humans do? What technological advantage do they hold over humans? While it is rarely imagined that aliens who meet us on our own planet would not have some sort of advantage, that advantage is most often pictured as some state on a linear progression of ourselves. In the romantic sci-fi movies it is then overcome with a smart combination of the old and the new, demonstrating some sort of intrinsic human superiority after all: for instance, using a computer virus and Morse code in ‘Independence Day’.
The advantage a conscious A.I. would have over humans, however, if it ever came to self-define as a species, would not be linear; it would be exponential! There have been two large movie franchises that deal with this theme of humans ending up fighting their own creation, ‘Terminator’ and ‘The Matrix’, but neither has captured the advantage A.I. would have very well. If we ever create an intelligence that is able to self-produce, to self-learn, and that has a consciousness resembling our own ego, beating it would be like pitting a first-time chess player against a grandmaster. In reality Skynet, with the power to time-travel, could outflank humans with a battalion of T-1000s going back to 1492. In reality The Matrix IS the game-engine Neo is logging into: ‘killing’ his avatar is as simple as issuing the command ‘Player[Neo].die’ while dispatching sentinels to the exact coordinates from which the physical Neo logged in. As a species our only chance against an A.I. species would be to never find ourselves alone at war with it.
Again, it may be that we find a way to exploit all the advantages A.I. can give us without actually making an artificial species. Alternatively, it may be that consciousness is just a by-product of the minimal level of thinking that would make ‘A.I.’ actually useful. “Cogito ergo sum”: ‘I think, therefore I am.’ Whatever the answer, not knowing isn’t wrong by itself; not trying to find out, or not even caring, most definitely is!
History is a non-linear system in which patterns occur. One of these predicts that, much like the A-bomb, A.I. is a technology that will be developed whether we think it is a good idea or not. In contrast with the A-bomb, this technology is less instant and takes more time to roll out to its full potential. In this respect A.I. is more akin to the previous IT revolution. In fact, future generations may not want to distinguish between the two revolutions.
Still, A.I. (among other factors) is going to generate a non-linear “veering off” from our phase-path, as there isn’t any facet of society that will not be impacted by it. While this makes most of the post-A.I. future unpredictable, some things aren’t. By applying the strange attractors (which history seems to have) we can predict that largely autonomous killing-bots are in fact inevitable. By using history’s strange attractors we can predict that the damage A.I. will do will be ignored for the greater benefits it provides. We can also predict that the ‘answer’ to any problem A.I. can pose will be ‘more A.I.’. As such we will have different tiers of A.I. systems monitoring each other and, if need be, fighting each other.
While a Skynet future can’t be ruled out, it is unlikely, if only because, history in mind, there will not be one A.I. but multiple technologically incompatible ones. As the famous ‘Furby vs Siri’ YouTube video might demonstrate: when two artificial systems are limited to human forms of interaction, speed goes down, not up.
As an atheist I think artificial brains will be able to do whatever our biological ones do. I don’t think that the way we feel ‘as being ourselves’ is inherent in the type of hardware used, or part of an ephemeral ‘soul’. It might still be a result of the way we are wired, though. I suspect we should be able to make an artificial brain, and I’m sure we will have switched it on and off a couple of times before realising it might not like that very much.
While I hope this artificial brain will teach us about our own brain, settling ‘questions’ regarding the ‘soul’ on the one hand and Freudian constructs on the other, I do hope this will not be the nature of the A.I. we embed in our utilities. The KITT car may have been intriguing, but I for one don’t want to exchange ‘negotiating traffic’ for ‘negotiating the final destination with the on-board computer’, and certainly not with what is meant by ‘our final destination’.
We have now come full circle. In the first post of this series I argued that we can, do and must think and argue about things even if not all the facts are yet known. Even without knowing ‘x’, useful things can still be said about its relation to ‘y’. This ‘allowed’ me to propose, without mathematical proof, that history (and thus reality) is a non-linear system. I went as far as to state that it contains strange attractors, based on the self-similarity of small- and large-scale events. This also helped explain why we feel history repeats itself without it ever literally repeating.
In this last post I took the chaotic properties of history and applied them to the not-so-distant future in which we will have developed Artificial Intelligence. We predicted the immense impact this will have and the ultimate unpredictability it entails, and yet why we can still be rather sure about certain aspects of our ‘relation with A.I.’.
Let me end with a final prediction based on my knowledge of military history. A.I. will be the next tech-revolution and it will be spear-headed by the military. Battlefield network-centric operations are a present-day reality; supplementing these with lighter, faster, [semi-]autonomous weapon systems that can be mass-produced and stored is the next ‘edge’ to gain, the next arms race. Initial conflicts in which early A.I. weapons are deployed against non-A.I. enemies in symmetrical battle will be tremendously successful. This will give the military a huge boost of confidence, and much as with gun technology, armour and aviation in the past, some will submit that henceforth wars will become shorter because of this, and more humane. After all, what can be more humane than ‘removing people from the battlefield’?
As always, this assumption will fail to materialise: the wars will not decrease in length while still increasing in cruelty. For the first time in history genocide will be automated, and for the first time civilians will die to ‘friendly fire’ despite being miles from the frontline. Most of the civilian applications of A.I., developed by DARPA among others, will come in the decades following the war.
History is a cruel mistress. She is like a book whose tangled beginning is confusing and different every time you pick it up, yet whose ending is always the same.
“They think when I die, the case will die
They think it will be like a book I close.
But the book, it will never close.”
-Bruno Richard Hauptmann-
thanks and goodnight