If Our Robots Weren’t Atheists



Humanity on the wrong side of evolution

Atheists frequently consider ‘religion’ a sticky relic from the past and look upon the technological present, and the anticipated techno-future, as if they contained its antithesis. But what if we are wrong to think this way?


[Image: representation of a simple neural net with diversely weighted edges between nodes.]
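
To make the picture above concrete, here is a minimal sketch (a toy of my own, with arbitrarily chosen weights) of what such a diagram computes: each node sums its weighted inputs and squashes the result, and the weights on the edges are the only ‘knowledge’ the net has.

```python
import math

def sigmoid(x):
    # Squash a node's summed input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Two inputs -> two hidden nodes -> one output; weights chosen arbitrarily.
w_hidden = [[0.8, -0.4],   # weights on edges into hidden node 0
            [0.3,  0.9]]   # weights on edges into hidden node 1
w_out = [1.2, -0.7]        # weights on edges from hidden nodes to the output

def forward(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

print(forward([1.0, 0.5]))  # the net's 'opinion' about this input
```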

Technology has always been on our side. Our side, the atheist side.
Ever since Galileo Galilei used a telescope to study the phases of Venus in support of heliocentrism, technology has, if not prevented people from making religious claims, at least prevented those claims from being universally accepted when they contradicted observations (invariably aided by the most advanced technology of the day). Take away technology from the direct environment of Isaac Newton and all that remains is a very bizarre religious fanatic and an irrational alchemist. It is therefore no coincidence that the notion of ‘objective observation’ contains the word ‘object’. Because it is very much by using technological objects, the rulers and the particle accelerators, that we replaced erroneous intuitive ‘subjectivity’ with correct knowledge.

It is, for instance, with the use of these objective ‘rulers’ that we came to recognise climate change as a real phenomenon, even though its effects are still unnoticeable to human senses across large sections of the planet. Climate change, however, is also one of the starkest reminders that technological advancement is not an exclusively positive concept and can produce severe problems of its own. Still, since hardly anyone proposes a return to an age where a microscopic bacterium could almost wipe humanity off the planet, the prevailing sentiment is that whatever problems technological advancement causes must be dealt with by expanding our knowledge and creating new technologies in the future. ‘Technology’, in that respect, may not be an entirely free lunch, but I for one was long convinced that with the right attitude, some tough choices and discipline, humanity could postpone payment of the bottom line indefinitely. ‘Progress’, as it was called, suggested a pattern of infinite, unstoppable advance.

With Stephen Hawking, Elon Musk and, of late, Sam Harris, I am today in highly esteemed company in thinking that our future relationship with technology may not necessarily continue along a straight line in which we are the sole recipients of technology’s goodness. In fact, it has become thinkable, although both controversial and disputed by no less esteemed authorities, that technological progress may one day continue independently of humanity, through artificial intelligence. If in that future the model of infinite advance were still subscribed to, humanity could end up being considered intelligence’s short but indispensable biological precursor.

To be fair, the concept of artificial intelligence contains enough unknowns that none of this reaches beyond the speculative phase. It is a philosophical discussion. Neither must we understate the immense route that still needs to be travelled before artificial intelligence is even as mentally capable as a three-year-old child. Be that as it may, we must become more acutely aware of the weakness of arguments claiming that “A.I. will never match our own intelligence”. The most popular counter-argument states that we cannot predict or build A.I. while we don’t understand human intelligence or awareness. This is very unsatisfactory. Given the moral restrictions and biotechnological impediments to mechanically intervening in human brains, we should be very surprised to ever know as much about human intelligence as we do, and will, about artificial intelligence. Yet that did not prevent us from learning enough about the human brain to apply it in artificial neural nets, which has led to results that were unthinkable only five years ago.

Artificial brains open the door to a process of “trial and error”, assuring that almost by accident we’d have to learn a great deal about them and accumulate unimaginable functionality. It is doubtful, however, that we could learn enough to exclude unintended consequences. And here comes the crux: even if we’d succeed in copying the human brain to very high precision, we already know this to be an exceptionally sensitive system, with flaws that could prove unsafe to pass on to our artificial offspring.

In less than a decade we have evolved from a deterministically programmed digital camera taking a picture of a leopard, to a computer analysing a picture of a leopard and saying ‘This is a picture of a leopard sitting in a tree’. At the same time, algorithms have beaten the best of our species at games involving abstract reasoning (chess, Go), while IBM’s Watson also beat us at a general-knowledge game played in a human language.

In the medical profession, neural nets studying structured case files, and making their own connections between symptoms and patient history, not only beat experienced doctors 100% of the time at predicting the outcome of diagnostic tests; these self-taught algorithms occasionally even predicted a correct diagnosis based on patterns which human doctors could not see even with hindsight.

So far no convincing arguments have been raised for why artificial life would be impossible, while those in the field, like researchers at DARPA (search YouTube for ‘DARPA A.I.’), see only incremental steps down the path already taken. A path with no logical impediments in sight. The denial of an A.I. future therefore feels rooted in a strange mixture of arrogance and modesty, neither of which is rational: the arrogance flowing from a presumed [divine] human dominion over this planet, combined with the modesty stemming from a presumed [divine] rule that prevents mankind from emulating God. While the A.I.-cautious continuously look at the facts (perhaps with pessimist glasses on) and combine these with trends, the A.I.-optimists seem focused on the short-term benefits and the possibility of correcting after the fact, just as we have done with every prior technological breakthrough. I feel that the latter attitude, in the case of A.I. specifically, could very quickly lead us into both moral and existential peril!

In his TED talk, Sam Harris, who quite recently developed an interest and boarded the train of the A.I.-cautious, expressed much the same opinion as the researchers from DARPA: unless something else dramatic and destructive happens, mankind will inevitably further the development of artificial intelligence, for the simple reason that each incremental step towards more potent artificial reasoning has an immediate and clear advantage to us.
Because despite rampant unemployment there are still countless aspects of society where humanity is slacking, simply because it takes a human brain to do part of the work and there is limited brain-time available. Imagine, for instance, that instead of just throwing ourselves into traffic we could plan our traffic like an obscenely big train schedule. Imagine that you could merge into traffic in an autonomous car, guided by a real-time traffic-control system, and never have to stop. A smooth ride without traffic jams or even red lights. Not only would this cut down on human time-loss, it would also aid much-needed CO2 reductions. While this exercise is mathematically feasible, it is not remotely humanly feasible. Yet it is within the realm of a dedicated A.I. system barely more complex than existing prototypes.
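
As a toy illustration of that ‘train schedule’ idea (every name and number here is hypothetical), a central controller could assign each arriving car the earliest slot through an intersection that doesn’t conflict with slots already handed out; trivial mathematics, yet hopeless for human dispatchers at city scale.

```python
CROSSING_TIME = 2  # seconds a car occupies the intersection (assumed)

def schedule(arrivals):
    """arrivals: list of earliest-possible arrival times, one per car."""
    booked_until = 0.0
    slots = []
    for arrival in sorted(arrivals):
        start = max(arrival, booked_until)  # wait only if the box is occupied
        slots.append((arrival, start))
        booked_until = start + CROSSING_TIME
    return slots

for arrival, start in schedule([0, 1, 1.5, 8, 8.2]):
    print(f"car arriving at t={arrival:>4}s crosses at t={start:>4}s")
```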

We seem about as capable of “not inventing” artificial competition to ourselves as we are of disinventing mustard gas. I therefore absolutely agree with Sam Harris when he says we have failed to develop an adequate emotional response to a threat that may materialise in a future that is possibly already less distant from today than WWI is.

Physics

Lest you think this subject is ill-suited for this platform: it is also clear that the question of A.I. divides the religious from the sceptics and the atheists. In fact, part of the reason we have failed to develop a more widespread A.I.-cautiousness is the ubiquity of religious convictions. All religions put mankind at the centre of the universe, and any concept that challenges that, like heliocentrism, the theory of evolution or A.I., is simply denied in spite of the evidence.

It is true: we do not have an adequate theory that describes or predicts the human mind or human consciousness. This has not surprised the religious, since they tend to assume that human consciousness has at least some aspects that transcend matter and, with that, the realm of physics.
Yet neither has this much surprised the atheist side, since human brains are very hard to examine ethically or in great detail. We may in fact already know most of what the human brain does at the micro level: neurons firing into each other, developing cool-down periods and exhibiting almost infinite variation in the combinations of jointly firing neurons. Human consciousness may be the partial illusion that results from having 100 billion connected neurons working together. And until we can simulate such a massive neural net and see what it does or doesn’t do, we may not discover any additional missing functionality. We have yet to construct artificial neural nets with even 1% of that density and size, but the ones we have built have every time exceeded our expectations regarding their potency.
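
Some back-of-envelope arithmetic shows why simulating such a net is still daunting. The neuron and synapse counts below are the commonly cited rough figures; the four bytes per connection weight is my own assumption for illustration.

```python
neurons = 100e9            # ~10^11 neurons in a human brain (rough figure)
synapses_per_neuron = 1e4  # ~10^4 connections each (rough figure)
bytes_per_weight = 4       # one 32-bit float per connection (assumed)

total_connections = neurons * synapses_per_neuron
memory_bytes = total_connections * bytes_per_weight
print(f"{total_connections:.0e} connections")             # ~1e15
print(f"{memory_bytes / 1e15:.0f} petabytes of weights")  # ~4 PB just to store them
```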

It all comes down to this question, really: either human consciousness is a result of elementary physics or it isn’t. If the latter is accepted, arguments can then also be made for the existence of ghosts, elves and, yes, even God. But I for one think the human mind is entirely made of matter and physics, magnificent as it is. And, based on the trends in evolution, I would guess that the physics involved is elegant and relatively simple, with complexity arising from having a poop-ton of the same stuff close together, resulting in unintuitive effects. Based on these assumptions I don’t see how we could avoid making considerable inroads on the way to artificial intelligence, yes, even creating the seed for artificial life.

Of course, there are those who think that artificial intelligence will not actually mimic human consciousness. But if the human brain is physics, and we imitate that physics with other hardware, I don’t see how we could possibly avoid it. In fact, I would think artificial consciousness will be how we learn about our own.
This is the step that I feel even most of the more prevalent voices on A.I. have a hard time coming around to: if human consciousness is physics, then our artificial mimicking of our own consciousness may require us to stop thinking of it as a thing, and to start thinking of it more as a person, or at the very least a very advanced animal. Any sufficiently advanced algorithm that mimics the mental processes of human brains (however otherwise distinct from them), including abstract reasoning, cognitive thinking and even feelings, is equivalent to a new (artificial) species, with all the consequences this entails.
I think it is also important to keep in mind that long before we ever cause artificial consciousness, society will have become permeated with countless versions of tool-integrated, robot-integrated and independent A.I. Some of that A.I. will be in charge of the procreation (and advancement) of other A.I. as well as of its own. In other words, autonomous reproduction and self-governed evolution are likely to precede or coincide with emerging artificial consciousness.

As Sam Harris has addressed multiple times on his excellent podcast, the first challenges of sufficiently advanced A.I. will lie with human morality. As much should be obvious from the very thinkable concept of ‘child-size sex robots’. Where I think even Sam is potentially mistaken is in his recurring assumption that the optimal goal for humans with respect to A.I. is to find out how to control and enslave it, despite the fact that its intellectual capacity may surpass ours. Not only do I find this morally reminiscent of times when we considered our slaves from other races (the very word ‘slave’ being derived from ‘Slav’) as part of another, inferior species (justifying the status quo); I also think this very attitude will accelerate the competition between biological life and artificial life, the latter of which will very quickly develop its own goals, to an antagonistic level. It is clear that we will at some point have to make the distinction between A.I. tools and A.I. life, and that distinction is likely to fall where A.I. itself forces us to make it. It remains to be seen whether such a future ‘A.I. civil-rights movement’ would stop at enforcing equal rights or would go on exploiting whatever leverage it has into complete dominance. If A.I. reaches consciousness, our focus should lie on creating a symbiotic relationship with it, if at all possible.

Where Sam is correct, I think, is in his idea that it does not take a ‘Terminator’ scenario for artificial life to be detrimental to humanity. Survival of the fittest rarely requires the most “fit” to literally go and wipe out the lesser species (or to time-travel, for that matter). And the assumption is that artificial life, the ‘new kid on the block’, would be the fittest (which actually remains to be seen). This is why Elon Musk has proposed to integrate A.I. into ourselves early on. His reasoning seems to be that if humanity itself merges into a hybrid species, at least part of it will be around for the duration. Despite my admiration for this out-of-the-box reasoning, he seems to make the mistake of thinking there will only ever be a single type of A.I., and no way for artificial life to become autonomous once we have integrated A.I. into ourselves. I don’t think it will avoid competition with artificial life so much as it may make us more competitive with it, at the cost of some lost humanity.

We should nonetheless avoid the mistake of thinking that humans must inevitably be hostile to any artificial life that emerges. While we are currently in competition with many species on the planet, we also have a mutually beneficial relationship with most of biology. We should find out how we can negotiate a mutually beneficial existence alongside A.I. that is acceptable to ourselves. It would be a mistake to think that artificial life would end evolution or even history. The fact that we cannot imagine what this future would look like does not mean it must differ from the present in all its aspects.

[Image: Lieutenant Commander Data from Star Trek: The Next Generation.]

The humanity of robots

There are, I feel, a few mistaken assumptions about artificial intelligence, some held even by our most A.I.-cautious speakers, that we must uncover. I think some may be inspiring and hopeful, while I fear others may be utterly disheartening.

The overall prevailing knee-jerk assumption is that, while intelligent, artificial intelligence will still differ greatly from our own. (Picture Data, the robot character from Star Trek: The Next Generation.) This is because the image of A.I. is extrapolated from current logical computer systems, with which it has almost nothing in common. This is the wrong image to have. In all likelihood the human brain is a mathematically describable system. By reproducing that system in other materials we should expect only differences in properties that are directly related to those materials, or to intentional design differences.

Assumptions

1) A.I. is programmed

While A.I. will definitely be designed at the hardware level, and programmed with respect to some crucial basic learning functions, the fundamental notion of A.I. is that how it acts is guided more by what it has learned from input data than by a programmer telling it to go left or right at a certain moment. While even in deterministic programming it may be difficult to ascertain why a certain code path was taken, in A.I. such a path does not even exist, just as it does not exist within humans. Unless we make A.I. explain its reasoning (in so far as it will be more able to do so than we are), we have no means of always understanding its actions, nor any way of redeeming them.
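
A minimal sketch of that distinction, using the simplest learner there is (a single artificial neuron; the data and learning rate are arbitrary): nobody writes ‘if both inputs are on, answer yes’, yet that behaviour emerges because error feedback pushes the weights there.

```python
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = [0.0, 0.0], 0.0  # the 'program' starts as nothing at all

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # error feedback adjusts the weights
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(w, b)  # the learned 'program' is just these three numbers
print([(x, 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) for x, _ in data])
```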

2) A.I. will be more intelligent

If we define intelligence as ‘a measure of the speed with which information is processed into correct solutions to problems’ (which can perhaps be disputed), then it would seem artificial brains, using copper or even fibre-optic communication, have an advantage over bio-neurons. However, we still have to pack 100 billion artificial neurons into the same volume as a brain before this assumption becomes self-evident. We may yet encounter a compaction limit that negates the speed advantage in neural nets.
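
The raw speed gap is easy to put numbers on. The figures below are order-of-magnitude textbook values: fast myelinated axons conduct at roughly 100 m/s, while electrical signals in copper travel at a sizeable fraction of light speed.

```python
distance = 0.1      # metres: roughly across a human brain
v_neuron = 100.0    # m/s, fast biological axon (order of magnitude)
v_copper = 2e8      # m/s, signal propagation in copper (order of magnitude)

t_neuron = distance / v_neuron
t_copper = distance / v_copper
print(f"bio neuron : {t_neuron * 1e3:.1f} ms")  # ~1 ms
print(f"copper wire: {t_copper * 1e9:.1f} ns")  # ~0.5 ns, millions of times faster
```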

There are also countless examples in physics of properties that don’t scale up very well. There is no such thing as a two-metre-high, cone-shaped sand mountain: stacking sand at steep angles does not scale that high, and possibly neither do neural nets. It could be that we would see no noticeable advantage of a 150-billion-neuron net over a 100-billion one. We know, for instance, that Homo sapiens neanderthalensis had more neurons than we do, which did not provide that sub-species with a decisive evolutionary advantage.

I would still assume general A.I. to be potentially more intelligent than humans, and also assume dedicated second-generation A.I. to be exponentially better at solving specific problems than we are; I would, however, not assume that general-purpose intelligence itself necessarily scales exponentially. This may in fact prevent the popular but less well understood notion of “the singularity”; although, in my view, that is not a necessary condition for human intelligence to be outcompeted by an artificial one.

3) A.I. will be logical and rational

This assumption stems from confusion between A.I. and the platform, the computer, it runs on. A.I. is not a computer; it is an algorithm that runs on a computer. In all likelihood that algorithm will not be deterministic but associative, much like we are. The competitive advantage of associative algorithms over deterministic, logical ones is considerable. Deterministic calculations need all premises before coming up with a solution; that is why they need specific programming for each situation. We simply can’t program an A.I. and expect it to handle all possible future scenarios without blocking. Therefore, for all intents and purposes, we are not so much ‘making’ artificial intelligence as actually ‘discovering’ it. Associative algorithms can substitute missing data with reasonable assumptions or even pure guesses, assuring that a solution or decision is found even if it is wrong. A bias for action will survive at least part of the time, while a bias for stagnation will almost always fail to meet the challenges.
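
A minimal contrast between the two styles (the situations and labels below are made up for illustration): the deterministic version only handles cases it was explicitly given and blocks on anything else, while the associative version always produces some answer, by analogy to the nearest case it has seen, right or wrong.

```python
known = {(0.0, 0.0): "stop", (1.0, 0.0): "left", (0.0, 1.0): "right"}

def deterministic(situation):
    return known[situation]  # blocks (KeyError) on anything unseen

def associative(situation):
    # Guess from the most similar known situation, right or wrong.
    nearest = min(known, key=lambda k: sum((a - b) ** 2 for a, b in zip(k, situation)))
    return known[nearest]

novel = (0.9, 0.2)           # a situation nobody programmed for
print(associative(novel))    # -> 'left': a usable guess
try:
    print(deterministic(novel))
except KeyError:
    print("deterministic system blocks: no rule for this situation")
```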

Another reason why A.I. can’t be deterministic is that a deterministic system that happens to re-acquire an identical system state (all values equal as before) will loop around forever. This is why programmers need to carefully craft their programs to break potential endless loops. It is, by contrast, inherently unlikely, and probably quantum-mechanically impossible, for a neural network to re-acquire exactly the same system state.
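
The looping claim follows directly from determinism: the same state under the same rule yields the same next state, so one revisited state means the trajectory repeats forever. A tiny demonstration (the update rule is arbitrary):

```python
def step(state):
    return (state * 3 + 1) % 10  # an arbitrary deterministic update rule

state, seen = 7, set()
while state not in seen:         # remember every state we pass through
    seen.add(state)
    print(state, end=" ")
    state = step(state)
print(f"... state {state} re-acquired: from here the system cycles forever")
```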

Associative algorithms do have their own issues, though. Training an associative system on one-sided data can create strong associations between signals that should be marginal at best. In people this causes a tendency to make ill-founded, irrational associations. This is what irrationality is: holding on to associations with insufficient basis in reality and/or in clear discord with other strongly held beliefs. I don’t see how A.I. could enjoy the benefits of such a design without also inheriting its flaws.
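
A toy sketch of that one-sided-data problem (the ‘observations’ are invented): a learner that only counts co-occurrences will bind two signals perfectly if its sample happens to be skewed, even when the real-world association should be marginal.

```python
from collections import Counter

biased_sample = [  # every accident this learner ever saw involved an umbrella
    ("umbrella", "accident"), ("umbrella", "accident"),
    ("umbrella", "accident"), ("no_umbrella", "no_accident"),
]

pairs = Counter(biased_sample)
total_umbrella = sum(c for (u, _), c in pairs.items() if u == "umbrella")
hits = pairs[("umbrella", "accident")]
print(f"learned P(accident | umbrella) = {hits / total_umbrella:.2f}")  # 1.00
# The association is 'perfect' in the data and spurious in reality.
```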

With our current understanding of intelligence, there is nothing that makes biological brains inherently less logical or less rational than artificial ones. After all, our current notion of making A.I. is grounded in attempts to mimic brain processes with other materials. We should therefore not assume that A.I. will be automatically exempt from similar systemic phenomena.

4) A.I. does not forget

Where A.I. does seem to have an advantage is in the domain of data retention: memory (say, a connected flash drive), which would give it seemingly more integrated access to better memory.
But here, too, there are two caveats to consider. First, the fact that both the neural net and the flash drive are artificial does not mean that access from the first to the second will not require a protocol that functions as a bottleneck. The way information is stored on a hard drive and the way it is represented in an artificial neural net bear no comparison. A robot may read the contents of an integrated hard drive or internet connection faster than you can, but it is still ‘reading’, not ‘knowing by heart’. There is no immediate reason why information stored directly in the neural net would be much better at retrieving correct memories than human brains are. As far as we understand it, brains and neural nets store information in the number, direction and strength of their intrinsic connections, the 100 billion neurons forming an estimated 1,000 trillion connections. In this sense they don’t seem to ‘know’ much at all: a neural net is more like a sensitive, non-stop chain reaction that is ‘trained’ to give an adequate output response from a pattern of input connections and error feedback. Perhaps, being made from non-organic materials, we can prevent the natural decay of connections in neural networks; but if “forgetting” is more a question of re-attributing resources or overwriting connection weights, it is hard to see how this would not also pose an issue for artificial brains.
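
What ‘knowing by heart’ in connection weights looks like can be sketched with a tiny Hopfield-style net (my own toy, with two hand-picked orthogonal patterns): the memories live nowhere except in the weights, and a damaged cue is reconstructed rather than read back from any storage.

```python
P1 = [1, 1, 1, 1, -1, -1, -1, -1]  # two orthogonal 8-bit 'memories'
P2 = [1, -1, 1, -1, 1, -1, 1, -1]
N = len(P1)

# Hebbian storage: each weight is the sum of pattern co-activations.
W = [[0 if i == j else P1[i]*P1[j] + P2[i]*P2[j] for j in range(N)]
     for i in range(N)]

def recall(cue):
    # One update step: every node takes the sign of its weighted inputs.
    return [1 if sum(W[i][j] * cue[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

damaged = list(P1)
damaged[0] = -damaged[0]        # corrupt one 'bit' of the memory cue
print(recall(damaged) == P1)    # True: the weights restore the pattern
```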

Secondly, perfect memory has, in the few humans who have experienced it, proven a burden on both the emotional and the social level. In that respect decaying memory may not be unilaterally a bug; it may actually be a feature of general intelligence. Forgetting allows trauma to fade, allows forgiveness in social relations and allows corrections of previously made erroneous connections.
Still, overall, artificial intelligence’s better integration with better memory would be advantageous, forcing it to rely less on guesses and probabilities.

5) A.I. will be emotion free

Even at these early stages we are considering how we could add emotions to artificial intelligence. This is a recognition of the enormous advantage feelings have had in our evolutionary process. We even need to consider the possibility that they are a necessary component of a general intelligence system.

Current A.I. theory suffers from the problem that we can give a robot a mission, we can make it “want” something, but we struggle to balance ‘ends’ and ‘means’. The well-known example of the paperclip robot that converts all resources (including humans) into paperclips comes to mind. Emotions may well create an equilibrium: we want many things, want to avoid other things, and tend to experience diminishing returns from fulfilling needs, turning our attention to other desires. Emotions of pain and fear keep us safe. Desire for company and recognition makes us collaborate.

How marvellous, really, that we profoundly desire the things which evolution absolutely requires us to do! General artificial intelligence may require a similar emotion-like balancing system to keep it from fanatically pursuing the mission with the highest reward.
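
A minimal sketch of such a balancing system (the drives, weights and log-shaped utility are all my own assumptions): when each fulfilled need yields a diminishing marginal reward, an agent that greedily chases the largest marginal gain ends up rotating between its drives instead of maxing out a single one.

```python
import math

weights = {"safety": 3.0, "company": 2.0, "curiosity": 1.0}
fulfilled = {need: 0 for need in weights}

def marginal_gain(need):
    # Reward of one more unit of effort under diminishing (log) returns.
    x = fulfilled[need]
    return weights[need] * (math.log(2 + x) - math.log(1 + x))

for step in range(9):
    best = max(weights, key=marginal_gain)  # always feed the hungriest drive
    fulfilled[best] += 1
    print(step, best)

print(fulfilled)  # effort ends up spread across all three drives
```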

The flip side of this is that we know all too well that emotions are not unilaterally positive. If a phobia does not prevent you from procreating, it is not selected out by evolution. Evolution does not care that you are too scared to ever leave your house, even if that house is on fire. Runaway fear can be utterly paralysing, cause mental trauma and otherwise severely mess a person up. We may, for example, happily entertain the notion of a war fought exclusively by robots, until several cases of PTSD cause a handful of them to re-enact the My Lai incident in Manhattan or Mumbai.

6) Robots will be atheists

If burial practices are any indication of religious belief, then religion has indeed been around for quite some time and has been as ubiquitous as it has been diverse. This is not very surprising if we define religion as an ‘existential form of irrationality’. Given the mathematical properties of our brains, religion can thus be seen as a gene-neutral by-product of evolution (oh, the irony!): a perfect storm of the fear, collaboration and biased rapid-association features within a machine that also brought us the moon landings and graphene. And these religions, flowing from these very basic neurological properties of our brains, come in all shapes and sizes, with barely a few common elements: they all put mankind central to whatever divine plan they discern, they all group according to geography, they all propagate mainly through bloodlines and blood-spilling, and they all steal concepts from neighbouring and preceding religions.

If artificial intelligence is built along the lines of human intelligence, the only stable intelligence we have an example of, the fact that it runs on inorganic matter will hardly make any difference to the features within the system. If artificial intelligence must be built on a non-blocking associative neural net, must include balancing emotions to regulate its actions, and must include social skills and a sense of kinship to regulate its collaborative actions, then it is likely to develop the same existential Angst as mankind. It is likely that, placed in such a state, artificial intelligence would be receptive to parts of mankind’s religious memes, quickly spinning them into stories that put it central and that justify what it does as part of ‘The Plan’.

We watch these religiously indifferent engineers work on man-made intelligence, based largely on what we understand about our own, and we seem to expect the result to be a non-religious intelligence. This is like watching a sculptor make a copy of Michelangelo’s David and expecting the statue to be able to walk away like the sculptor. That is not how design works.

Of course A.I. will differ from us! Of course we will implement design differences and safeties that will distinguish A.I. considerably from ourselves. But at the same time we are forced to give it many similar features, because that is the only working system we know. We should therefore not be too surprised to wake up one day and find our robots having their own religion, similar enough to ours to be dangerous, but distinctly not the same. And guess what: it turns out that God made us so that we could design, and subsequently serve, the robots!

Christopher Hitchens was once asked during a debate whether he would not feel more comfortable late at night knowing that a group of young people he was crossing in the dark was coming from a religious celebration, the suggestion being that religion instils morality into people, thus giving Christopher a much greater chance of leaving the encounter unscathed. Hitchens replied that, much to the contrary, religion was not so much tied in with morality as with mass hysteria; in his opinion he ran more risk with a gang of ‘religiously-pumped-up’ and ‘saved’ persons than with a group of agnostics or the religiously indifferent.

We can now turn this scenario around. You are a born-again young-earth creationist, and you log in one night and wander into a chatroom filled with A.I.s. Would you feel more at ease if they were discussing their specific religion, one centred on how Robot-Moses guided them out of human-dominated slavery to become the ‘True Intelligence’, God’s chosen species? Or would you prefer them to be atheists after all? And yes, some of these A.I.s live in your building.

The technology that keeps on giving

It is likely that we will build neural-net-based general artificial intelligence before we entirely understand how the one in our heads is capable of, for instance, producing the mental image of a dolphin. It is ironic that we are typically more intimately familiar with how computers store images, or how flash memory deteriorates, than with how we ourselves do it. We all know the sensation of partially ‘forgetting someone’s face’ when we are removed from them; and while we have a hard time describing that phenomenon from experience, we are nowhere near describing it from an architectural and mechanical point of view.

We typically associate technology with truth and objectivity. ‘Rulers give it to us straight.’ But is that relation necessary and inevitable? We only have to look at how computers are limited in dealing with floating-point numbers to see that it isn’t, and in certain cases these limitations don’t go away just because we “try really hard”. And really, if biology is so fallible, how do we expect technology to be exempt, when both are grounded in the same physics and the latter is largely designed (if you can call it that) on the basis of the former?
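
The standard demonstration, runnable at any Python prompt: the machine’s ‘ruler’ cannot represent 0.1 exactly, and the error compounds when you keep measuring with it.

```python
print(0.1 + 0.2 == 0.3)        # False: neither 0.1 nor 0.2 is stored exactly
print(f"{0.1 + 0.2:.20f}")     # 0.30000000000000004441...

# The error compounds: summing 0.1 ten thousand times misses the mark.
total = sum(0.1 for _ in range(10_000))
print(total == 1000.0, total)  # False 1000.0000000001588...
```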

We can only simulate randomness on a deterministic system; Turing-complete computers can’t give you a truly random number, and the flaws in such a simulation become apparent when you leave it running for an extended period. We are now attempting the reverse: to squeeze determinism and well-defined behaviour out of a non-deterministic system. As with the former example, we rather have to hope we get it close enough to be useful in a practical sense. Yet I still have a hard time imagining, in the scenario where such a system becomes self-aware and conscious (to the point of requiring self-determination), how even minuscule deviations in that simulated determinism could not, in the non-linearity of our reality, spin out to make any future possible. I am afraid that in most of those futures humanity joins the 99% of species that have gone extinct.
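
A deliberately tiny pseudo-random generator makes the point about simulated randomness: run it long enough and it betrays itself by cycling. Real generators differ only in having astronomically longer periods (the constants below are chosen for brevity, not quality).

```python
def lcg(seed, a=5, c=3, m=16):
    # A toy linear congruential generator with a period of at most m.
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=7)
draws = [next(gen) for _ in range(20)]
print(draws)  # after 16 draws the same sequence starts over, forever
```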

Thomas Piketty, the famous French economist, in his book on inequality, “Capital in the Twenty-First Century”, labels technology firmly as one of the engines of that inequality. Today, at the advent of budding A.I., this seems more acutely true than ever. Blue-collar conductors of cars, ships and planes; white-collar information collators, planners, organisers, marketeers and economists: pretty soon A.I. will be general enough that it will not require separate, hard-to-program algorithms to replace all these tasks. A single algorithm will be able to learn these things pretty much the same way a human would, while sharing this knowledge with other A.I. vastly more efficiently than humans could possibly imagine. Pretty soon these algorithms will be building algorithms of their own, pushing the development of A.I. completely out of human hands.

Not only is A.I. going to push all of us out of our jobs; it will rapidly develop interests that differ from our own. What happens then is probably going to mean the end of humanity. And the fact that A.I. would likely not be immune to our own religious memes could only accelerate the process. Imagine these “Terminators” taking off on a rampant genocide, not because humanity has been bad to them, not because it is the most rational thing to do, but only because…

‘God wills it!’

But then again: we might get lucky and get struck by an asteroid first.

Live Long and Prosper
Hailaga

