Discussion: "Could artificial agents develop consciousness, and if so, how should we deal with this?"
We kindly invite you to a lecture-discussion by Prof. Walter Senn, University of Bern, Switzerland; EBRAINS/Human Brain Project: "Could artificial agents develop consciousness, and if so, how should we deal with this?"
The Human Brain Project fosters discussions about the ethical consequences of its own research. In this context, we became involved in a debate about whether consciousness in artificial intelligence may become possible. The debate was recently fueled by the claim that Google's "Language Model for Dialogue Applications" (LaMDA), with its intelligent and self-reflective responses, already shows some degree of feeling or consciousness. Although most scientists agree that LaMDA is "just a language program", opinions differ on whether artificial agents can have consciousness-like states.
We highlight arguments from a computational neuroscience perspective for why a strict denial of artificial consciousness is not compelling. Based on neuroscientific insights, we suggest an extended Turing test for consciousness that includes neuronal circuits for mental decision making, feelings, and pain. Assuming some degree of artificial consciousness is possible, we ask whether restricting AI to be strictly "beneficial" is itself ethical. We suggest a "human deal" that preserves human primacy while still appropriately balancing rights and benefits for both parties.