Cognitive Determinism and Free Will in the Age of Big Data
After thousands of years spent in the church of transcribed knowledge (oral and later written traditions), we are now, in the Anthropocene, witnessing the rise of instantaneous, all-encompassing data processing, and with it a revolution in intelligence. The memory of the brain has evolved little since the beginnings of mankind, yet humans have never ceased playing the sorcerer’s apprentice, evolving at high speed from a species of hunter-gatherers to one of engineers. This postindustrial, biotechnological phase has, however, surreptitiously, and for the first time in the history of mankind, effected radical changes in human behaviour. Indeed, biochips, NBICs, and powerful GAFA1-exploited algorithms currently permeate every key sector of society (finance, healthcare, transportation, services, the military), and are surpassing (or replacing) humans in an increasing number of tasks and positions, while the proponents of transhumanism simultaneously posit a post-biological future.2
Is this to say that most human ambitions or trades are going to become obsolete due to the increasing sophistication of machines, or that Artificial Intelligences (AIs) are bound to rule the planet? Or that DNA data storage, neural networks, and AI will eventually replace the encyclopaedia of human memory that has taken millennia to develop?3 In other words, could Dataism—having either insidiously or overtly established itself through the Internet of Things and computer-assisted tasks—have unforeseeable consequences for the fate of mankind were it to escape our control? How, then, may we retain control? Which aspects should we reject, and which should we invite into our inescapable relationship to technical objects? Coevolution in a digital era in full swing? The digital humanities? Fighting would be futile, particularly if we consider, like Ganascia,4 the myth of technological singularity a decoy promoted by our cybersocieties, decidedly set on exploiting Big Data for mercantile purposes, even if it means crediting AI with powers it does not have.
Beyond the vain opposition between Man and the Simondonian technical object,5 there still remains the matter of undeniable mutations occurring within our relationship to the now digitalized world, and the meaning of the memory of Life6 whose specificities (structural invariables, consciousness, intention, memory, creativity) and plasticity (our infinite capacity for imagination and interculturality) remain a priori out of the reach of machines.7
The Dataflow Paradigm
If there is one fact we can agree upon, it is that we have fully entered a new standard, characterized by dataflows and digital turbulences, one which impacts everything from the stock market to the biosphere but struggles to identify the ethical implications of the transformations it has initiated. This unprecedented situation has placed garnered (bio-evolutionistic) memory and algorithmic memory face to face, bringing about speculation surrounding the finiteness and prosthetization—or cyborgization—of the human species but, more importantly, sparking new attitudes towards Dataism and the augmented or enhanced memory systems evolving all around us—in other words, our new genome. Although they trigger a certain degree of fear and fascination, cyborgs and the postbiological world have prompted relatively little mobilization, whereas the Dataist movement—which views the universe as a constant flow of data, filled with meaningful streams waiting to be decrypted—is irreversibly underway.
Historian Yuval Noah Harari likens Dataism to a religion of data and to the scientific revolution of the 21st century,8 a revolution made up of consecutive digital disruptions and the mass production of biochemical algorithms which can be analyzed in terms of probability of action, chances of survival, and level of information. Controlled by the Tech Giants, the age of Big Data has already had tangible effects on the populations and decision-makers of the world, the former being essentially passive vectors or addicted users and the latter duly informed commanders. Both, however, remain heavily dependent upon, and even manipulated by, the masters of AI and the digital sciences—who, since the rise of cybernetics, have been developing increasingly autonomous software and machines while remaining virtually impervious to the potential risks of unchecked robotics or intelligence. The question thus arises for all computer-assisted signal processing, from our daily use of e-readers or iPhones to androids, drones, and military strategies based on autonomous lethal weapons.
The age of Big Data also raises concerns regarding our now connected brains and how they compose an expanded form of cognition in which external, remote organs could potentially become uncontrollable, creating virtual object-worlds (the Internet being the earliest manifestation, the quantum computer the future one) which could stray from their initial purpose and pose a threat to humanity. This is where safeguards (deep learning, non-monotonic and deontic logics)9 based on including human ethics in all decision-making processes come in. Other examples are Asimov’s initial three laws of robotics,10 since expanded to five by Andra Keay, head of Silicon Valley Robotics,11 and the Human Brain Project.12 Red flags have equally been raised by a number of philosophers, ethicists, and sociologists calling for resistance in the face of proliferating smart objects and this new, crypto-commodifiable, potentially viral dataflow paradigm.
The Given Fact Versus Data
The human being appears, therefore, to remain essential, but for how long? In order to answer this question, we must challenge the alleged opposition between the given fact (historical, biological, human, and cerebral memory in particular) and data (artificial, algorithmic, or augmented memory), or more broadly, between technological singularity and the singularity of the living. The central question is how to manage the expansionist logic we have irreversibly thrown ourselves into: a dataflow paradigm marking the societal disruption of nature versus techno- or cyber-culture, with cybernetics initially meaning the science of government, before being redefined as Norbert Wiener’s science of systems, and later still as the fundamental model of research and development in AI and the cognitive sciences.
It is therefore important that we remain clear-sighted when facing any decision-making or benchmarking processes. A recent example can be found in the United States, where a fully autonomous AI system has been given clearance to perform automated screenings and diagnoses for diabetic retinopathy. This kind of result perfectly illustrates situations in which elaborate algorithmic systems13 serve as diagnostic tools supporting the treatment prescribed by the physician, but it also raises the question of the added human value in the final decision.14 As established regarding the process of writing and, more broadly, the creative act,15 added human value is very much linked to our cerebral plasticity, which can only adapt itself to the flow of digital streams because the brain, much like a sponge, is constantly absorbing and being traversed by these streams while still keeping space available—much like a default mode, in computational terms—for its own creative or artistic activity.16 Therein resides the difference between computing power and the human brain, which is capable of storing and acquiring new forms of learning (cognitive plasticity, fluid intelligence) and, above all, of managing uncertainty and practicing inventiveness while always preserving its memory-identity.
In a world where deterministic data processing is no longer limited to scientific laboratories but has effectively permeated all behavioural aspects of life, this overview aims for a pragmatic approach. The Villani report17 has highlighted AI’s essential role in the transformations of labour, communications, health, and even transportation through innovations such as the self-driving car and its potential variants. It is no longer time to minimize the impact of AI, but rather to harness its biosemantic value—in other words, its language—and embrace the movement while trusting our natural intelligence. Certain Dataists have nevertheless posited that humans will one day become overwhelmed by the massive influx of data and, without even realizing it, will find themselves entrusting more and more of their decision-making to machines. As a result, the increase in data volume and circulation and the generalized use of processors, data sets, and AI would become the de facto instruments of an inevitable technohumanistic evolution over which we would no longer have any control, since all areas of sociopolitical life (economy, labour, climate, communications) would depend upon and impact one another.
There are, however, analysts who argue that these upheavals would not necessarily entail any changes in our value systems. Although these transformations bring radical modifications to our way of life, a human individual will only personally attribute value to data that makes sense to him or her; while data streams indiscriminately flood our servers through just-in-time data processing, we humans pick and choose what is meaningful to us, and only then proceed to unfurl it through social networks and the mammoth entity that is the Internet.18
Between Technological Singularity and Biological Singularity
What may we expect from a global data system governed by intelligent algorithmic programs designed to offer us THE supreme form of well-being, and to which we would hand over more and more power?19 Granted, the premise seems to verge on posthumanist fiction, but hypothetical though it may be, the possibility is not lost on world leaders. And though cyborgs still fuel the somewhat fantastical aspect of a mutant humanoid cyberculture, they remain quintessential examples of the augmented, siliconed, transhumanized, immortalized body, and the symbols of an inevitable biotechnological evolution which, some researchers believe, could be capable of compensating for our degenerative or deleterious natural evolutions,20 to the point of even creating a wear-free (Human Brain Project 2024) or entirely synthetic (Blue Brain Project) digital brain.21
This domino-effect disruption runs parallel to contemporary environmental and collapsological scenarios in the sense that the asymptotic curve of techno-scientific and capitalistic progress unequivocally evolves in tandem with the erosion of the relationship between humans and nature, with dire repercussions for global warming and biodiversity. Along with several other colleagues, writer Alain Damasio22 has associated this situation with a civilizational (and financial) crisis rather than a mere ecological process that nature would be capable of overcoming.
On the opposite side of the spectrum, we have a brain building the world’s reality from within, one that cannot function without a body or senses. There is also the impact of epigenetics and the post-genomic era on the development of our living organism, with effects on evolution which we are now assessing more clearly. And because of cerebral plasticity, this impact includes the prevalence of intertwined cultures, which could easily imply—as with the acquisition of language and the invention of writing—the digitalization of thought. The future of mankind resides at the confluence of these two realities, whether we like it or not, and it would be a mistake to underestimate its hybridity. Indeed, robotics and AI currently foretell machines capable of self-programming, self-transforming, and strongly interacting with their surroundings—in other words, artificial beings emulating the bios, cognitive architecture and epigenesis included! We must therefore root intelligence within a body that is spared the alienation of the machine without denying it.
A sensitive—and therefore fallible, creative, and metamorphic—mind-body is made of flesh and blood, as opposed to a compact, predictable, thought-decrypting AI. A body is an integral part of a mind whose stable imbalance, like its fragility and precariousness, is essential to the world it inhabits. And yet we are presently seeing a body of highly symbolic, computational existence, one that is slowly severing itself from its earthly roots and which, mirroring scientific ecology, thinks in terms of statistics and probability, forsaking that which binds us to nature and the beings that make it up.23
Nevertheless, brain simulations, artificial neural networks, and synaptic microchips are still no match for the human brain, which, as we have shown, internalizes sensations and cultures in real time, using its intelligence for purposes other than simple problem-solving. This is, more broadly, the case for all living things, as has been suggested by contemporary research on our relationship to the living world and the intelligence of plants.24 Vegetal otherness leads us to challenge the seemingly axiomatic notions of sensitivity, intelligence, and cognition, as well as the nature-culture binary. The challenge of the 21st century therefore resides in this opposition between technological and biological singularity, which Miguel Benasayag25 has defined (through the Mamotreto model) by outlining the hybridization of technical objects and living organisms—a hybridization that will necessarily demand a new frame of mind accounting for the irreducible nature of the living.
In conclusion, the scenario defended in this article is that, rather than accepting an anti-progress posture fueled by the myth of Frankenstein, let us accompany the movement by honestly asking ourselves the following question: Where does the play of cognitive determinism end, where does free thought begin, and how do we transpose it to the age of Big Data?26 In any case, as recently stated by Edgar Morin, the answers should indicate that an augmented human does not imply a better human,27 and for that matter, it seems vain to oppose a biological given to any piece of algorithmic data. Let us rather observe their objective differences (experiential memory versus artificial memory) and the points where they converge and potentially hybridize, without disembodying unique human thinking.28 And let us wager that, even if cyborgs may very well be the first genomic or cybernetic human variants to appear, they will erase neither the animal within us nor the meaning in our garnered memory—which is to say, our historicity.
The original version of this revised and extended text was initially published in the online magazine Turbulences #3 (Oct-Nov. 2018, Symbolon Consulting Ed.).
A neuroscientist, poet, and essayist, Debono developed a new concept of plasticity on the epistemological level. He founded the group of plasticians in 1994 and then, in 2000, the association Plasticités sciences arts, which explores the relationships between science, the arts, and the humanities. Since 2005, he has directed PLASTIR, the Transdisciplinary Journal of Human Plasticity.