
Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.
The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with animals.
Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.
Philosophical views
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.
Plausibility debate
Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.
Thought experiments
David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.
The "fading qualia" argument is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with functionally identical components, for example based on silicon chips. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice the change, which yields a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.
Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red and a functionally isomorphic silicon chip that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. He therefore concludes that the equivalent digital system would not only experience qualia, but would perceive the same qualia as the biological system (e.g., seeing the same color).
Critics[who?] of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.
Controversies
In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely ridiculed. While philosopher Nick Bostrom agrees that LaMDA is unlikely to be conscious, he poses the question of "what grounds would a person have for being sure about it?" One would need access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."
Testing
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.
A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about those creatures' consciousness). However, this test can only detect, not refute, the existence of consciousness: a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
Ethics
If a particular machine were suspected of being conscious, its rights would become an ethical issue that would need to be assessed (e.g., what rights it would have under law). A conscious computer that is owned and used as a tool, or as the central computer of a larger machine, presents a particular ambiguity: should laws be made for such a case? Consciousness would also require a legal definition. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."
Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.
In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".
Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and in certain AI-adjacent biological systems such as brain organoids.
Aspects of consciousness
Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. These lists are not exhaustive.
Subjective experience
Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience; other authors use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness. AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.
Awareness
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined,[clarification needed] and is also useful for making predictions. Such modeling needs a lot of flexibility: it includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.
Memory
Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva’s sparse distributed memory architecture.
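Kanerva's sparse distributed memory, used to implement these memories in IDA, can be sketched in a few lines. The parameters below (hard-location count, dimensionality, activation radius, number of corrupted bits) are arbitrary illustrative choices, not IDA's:

```python
import numpy as np

rng = np.random.default_rng(0)

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM: fixed random hard locations, Hamming-radius activation."""

    def __init__(self, n_locations=1000, dim=256, radius=115):
        self.addresses = rng.integers(0, 2, (n_locations, dim))  # fixed hard locations
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A hard location fires if its address is within the Hamming radius of the cue.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Increment counters where the data bit is 1, decrement where it is 0,
        # at every active location: the trace is distributed, not stored in one place.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Pool the counters of all active locations and threshold at zero.
        return (self.counters[self._active(address)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, 256)
mem = SparseDistributedMemory()
mem.write(pattern, pattern)                     # autoassociative storage
noisy = pattern.copy()
noisy[rng.choice(256, 10, replace=False)] ^= 1  # corrupt 10 of the 256 bits
recalled = mem.read(noisy)
print(np.mean(recalled == pattern))             # fraction of bits recovered
```

Because the trace is spread over many overlapping hard locations, reading with a moderately corrupted cue still pools enough of the stored counters to recover the pattern.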
Learning
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".
Anticipation
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly so as to be ready to respond to them when they occur, or to take preemptive action to avert them. This implies that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, allowing it to demonstrate artificial consciousness in the present and future rather than only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules such as a chessboard, but also in novel environments that may change, executing plans only when appropriate in order to simulate and control the real world.
Functionalist theories of consciousness
Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. This allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers.
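For programmers, this multiple realizability is loosely analogous to an interface: what matters is the causal/behavioral profile, not the implementing "substrate". A hypothetical sketch (all class and method names are invented for illustration):

```python
# Two different "substrates" that instantiate the same functional role count,
# on the functionalist view, as the same mental state.
from typing import Protocol

class PainRole(Protocol):
    def triggers_avoidance(self) -> bool: ...
    def report(self) -> str: ...

class BiologicalPain:                       # carbon-based realization
    def triggers_avoidance(self) -> bool: return True
    def report(self) -> str: return "ouch"

class SiliconPain:                          # silicon-based realization
    def triggers_avoidance(self) -> bool: return True
    def report(self) -> str: return "ouch"

def same_functional_role(a: PainRole, b: PainRole) -> bool:
    # Same causal profile (same outputs to the same probes) => same role.
    return (a.triggers_avoidance() == b.triggers_avoidance()
            and a.report() == b.report())

print(same_functional_role(BiologicalPain(), SiliconPain()))  # True
```

The analogy is deliberately loose: functionalists claim sameness of causal role across a whole cognitive system, not merely matching outputs on a few probes.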
A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.
Global workspace theory
This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of which is unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information – all coordinated through the global workspace.
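The theater metaphor can be caricatured in a few lines of code. The module names, salience scores, and habituation rule below are illustrative inventions, not part of the theory itself:

```python
# Toy global-workspace cycle: specialized modules run in parallel, an attention
# "spotlight" selects the most salient content, and the winner is broadcast back
# to every module on the next cycle.

def vision(broadcast):
    return {"source": "vision", "content": "letters 'c-a-t'", "salience": 0.9}

def language(broadcast):
    # Only has something to contribute once visual content has been broadcast.
    if broadcast and "letters" in broadcast["content"]:
        return {"source": "language", "content": "word 'cat'", "salience": 0.7}

def memory(broadcast):
    if broadcast and "word" in broadcast["content"]:
        return {"source": "memory", "content": "recalls: cats are animals", "salience": 0.5}

modules = [vision, language, memory]
broadcast, seen, log = None, set(), []
for _ in range(3):                                  # three workspace cycles
    proposals = [p for m in modules
                 if (p := m(broadcast)) and p["source"] not in seen]
    broadcast = max(proposals, key=lambda p: p["salience"])   # attention spotlight
    seen.add(broadcast["source"])                   # crude habituation: yield the stage
    log.append(broadcast["source"])

print(log)  # ['vision', 'language', 'memory']
```

Each cycle, unconscious parallel activity competes for the workspace, and the broadcast is what lets one module's output become available to the others, as in the word-reading example above.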
Higher-order theories of consciousness
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.
Attention schema theory
In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema. Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain". This attention schema theory of consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial position of a person's body. This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, one that should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
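The core claim, that one and the same simplified model of attention is applied both to others and to oneself, can be caricatured as follows (the function, agents, and signal values are invented for illustration):

```python
# Crude attention-schema sketch: a schematized, descriptive model of attention,
# applied indifferently to another agent and to oneself.

def attention_schema(agent, signals):
    # The schema is a cartoonish model: "agent is attending to (and hence
    # 'aware of') whatever signal is currently strongest".
    target = max(signals, key=signals.get)
    return {"agent": agent, "attending_to": target}

signals = {"red apple": 0.8, "ticking clock": 0.3}
print(attention_schema("other_person", signals))  # modeling someone else's attention
print(attention_schema("self", signals))          # self-awareness: the same model, self-applied
```

The point of the sketch is only the symmetry: on this theory, self-awareness is not a different mechanism from attributing awareness to others, just the same model pointed inward.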
Implementation proposals
Symbolic or hybrid
Learning Intelligent Distribution Agent
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness, the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.
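A loose sketch of one such cycle, with codelets as small independent procedures (the codelets, activation levels, and action map are invented for illustration and greatly simplify LIDA):

```python
# One simplified LIDA-style "cognitive cycle": understanding -> consciousness
# -> action selection.

def smoke_detector(percepts):           # an attention codelet
    if "smoke" in percepts:
        return ("smoke!", 0.9)          # (content, activation)

def hunger_monitor(percepts):
    if "hunger" in percepts:
        return ("hungry", 0.4)

ACTIONS = {"smoke!": "leave the room", "hungry": "find food"}

def cognitive_cycle(percepts, codelets):
    # 1. Understanding: codelets scan the current percepts.
    coalitions = [r for c in codelets if (r := c(percepts))]
    # 2. Consciousness: the most active coalition wins the workspace and is
    #    broadcast (here, simply passed on) to action selection.
    content, _ = max(coalitions, key=lambda c: c[1])
    # 3. Action selection: choose the behavior relevant to the broadcast content.
    return ACTIONS[content]

print(cognitive_cycle({"smoke", "hunger"}, [smoke_detector, hunger_monitor]))  # leave the room
print(cognitive_cycle({"hunger"}, [smoke_detector, hunger_monitor]))           # find food
```

Even this toy version shows the workspace's coordinating role: the smoke codelet's higher activation wins consciousness and thereby determines which behavior is selected.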
CLARION cognitive architecture
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study how consciousness might work in psychological experiments.
OpenCog
Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.
Connectionist
Haikonen's cognitive architecture
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."
Haikonen is not alone in this process view of consciousness, or in the view that AC will emerge spontaneously in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; many share these views. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.
Shanahan's cognitive architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").
Creativity Machine
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attribute dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.
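A toy numerical version of the noise-injection idea: perturb a "trained" net's weights so it confabulates variants of its memories, and let a critic keep the useful ones. The network, noise level, and usefulness criterion are invented for illustration, not taken from Thaler's patents:

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = np.eye(4)      # four memorized one-hot patterns
W = patterns.copy()       # "trained" weights: cue @ W recalls a pattern exactly

def generate(cue, noise_level):
    # Synaptic noise drives the net off its training memories (confabulation).
    noisy_W = W + rng.normal(0.0, noise_level, W.shape)
    return (cue @ noisy_W > 0.5).astype(int)

def critic(candidate):
    # A second process judges whether a confabulation is a useful novelty:
    # not an exact training memory, but still structurally plausible.
    novel = not any((candidate == p).all() for p in patterns)
    plausible = candidate.sum() in (1, 2)   # arbitrary plausibility rule
    return novel and plausible

cue = np.array([1.0, 0.0, 0.0, 0.0])
ideas = {tuple(generate(cue, 0.4)) for _ in range(200)}
useful = [i for i in ideas if critic(np.array(i))]
print(len(useful))  # number of distinct "potential ideas" that passed the critic
```

The generator-critic split mirrors the patent's division of labor: noise produces candidate patterns, and the critic filters them for potential value.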
"Self-modeling"
Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.
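A minimal sketch of this idea, loosely echoing Lipson's work on resilient robots, assuming a one-parameter linear "body" invented for illustration: the robot fits a model of its own motor-to-sensor mapping from motor babbling, predicts with it, and treats a large prediction error as a cue that the self-model must be revised:

```python
import numpy as np

rng = np.random.default_rng(2)
TRUE_GAIN = 2.0                        # the robot's real (unknown to it) actuator gain

def body(command, gain=TRUE_GAIN):
    return gain * command              # actual sensor reading for a motor command

# 1. Self-modeling: estimate the gain from the robot's own motor babbling.
commands = rng.uniform(-1.0, 1.0, 50)
readings = body(commands)
model_gain = np.sum(commands * readings) / np.sum(commands ** 2)  # least squares

# 2. Internal simulation: predict an action's outcome before executing it.
predicted = model_gain * 0.5
print(abs(predicted - body(0.5)))      # ~0: the self-model matches the body

# 3. Damage detection: a weakened actuator makes the self-model's predictions fail.
surprise = abs(model_gain * 0.5 - body(0.5, gain=0.5))
print(surprise)                        # large error signals the model needs revising
```

The "self" here is nothing more than a learned forward model of the robot's own dynamics; self-awareness, on this proposal, starts with being able to simulate oneself.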
In fiction
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that the crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate them, fearing that being shut off would jeopardize the mission.
In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.
In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.
In Greg Egan's short story "Learning to Be Me", a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age, and as a step towards digital immortality, adults undergo surgery to give control of the body to the jewel and remove the brain. The main character, before the surgery, endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel and has become desynchronized from the biological brain.
See also
- General fields and theories
- Artificial intelligence – Intelligence of machines
- Artificial general intelligence (AGI) – some consider AC a subfield of AGI research
- Intelligence explosion – what may happen when an AGI redesigns itself in iterative cycles
- Brain–computer interface – Direct communication pathway between an enhanced or wired brain and an external device
- Cognitive architecture – Blueprint for intelligent agents
- Computational philosophy – Use of computational techniques in philosophy
- Computational theory of mind – Family of views in the philosophy of mind
- Consciousness in animals – Quality or state of self-awareness within an animal
- Simulated consciousness (science fiction) – Science fiction theme
- Hardware for artificial intelligence – Hardware specially designed and optimized for artificial intelligence
- Identity of indiscernibles – Impossibility for separate objects to have all their properties in common
- Mind uploading – Hypothetical process of digitally emulating a brain
- Neurotechnology – Technology that interfaces with the nervous system to monitor or modify neural function
- Philosophy of mind – Branch of philosophy
- Quantum cognition – Application of quantum theory mathematics to cognitive phenomena
- Simulated reality – Concept of a false version of reality
- Proposed concepts and implementations
- Attention schema theory – Theory of consciousness and subjective awareness
- Brain waves and Turtle robot by William Grey Walter
- Conceptual space – conceptual prototype
- Copycat (cognitive architecture) – AI software
- Global workspace theory – Model of consciousness
- Greedy reductionism – Reductionism that oversimplifies by skipping explanatory levels
- Hallucination (artificial intelligence) – Erroneous material generated by AI
- Image schema – spatial patterns
- Kismet (robot) – Robot head built by Cynthia Breazeal
- LIDA (cognitive architecture) – Artificial model of cognition
- Memory-prediction framework – Theory of brain function
- Omniscience – Capacity to know everything
- Psi-theory – Psychology theory
- Quantum mind – Fringe hypothesis
- Self-awareness – Capacity for introspection and individuation as a subject
References
Citations
- Thaler, S. L. (1998). "The emerging intelligence and its critical look at us". Journal of Near-Death Studies. 17 (1): 21–29. doi:10.1023/A:1022990118714. S2CID 49573301.
- Gamez 2008.
- Reggia 2013.
- Smith, David Harris; Schillaci, Guido (2021). "Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness". Frontiers in Psychology. 12: 530560. doi:10.3389/fpsyg.2021.530560. ISSN 1664-1078. PMC 8096926. PMID 33967869.
- Elvidge, Jim (2018). Digital Consciousness: A Transformative Vision. John Hunt Publishing Limited. ISBN 978-1-78535-760-2. Archived from the original on 2023-07-30. Retrieved 2023-06-28.
- Chrisley, Ron (October 2008). "Philosophical foundations of artificial consciousness". Artificial Intelligence in Medicine. 44 (2): 119–137. doi:10.1016/j.artmed.2008.07.011. PMID 18818062.
- "The Terminology of Artificial Sentience". Sentience Institute. Archived from the original on 2024-09-25. Retrieved 2023-08-19.
- Kateman, Brian (2023-07-24). "AI Should Be Terrified of Humans". TIME. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- Graziano 2013.
- Block, Ned (2010). "On a confusion about a function of consciousness". Behavioral and Brain Sciences. 18 (2): 227–247. doi:10.1017/S0140525X00038188. ISSN 1469-1825. S2CID 146168066. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Block, Ned (1978). "Troubles for Functionalism". Minnesota Studies in the Philosophy of Science: 261–325.
- Bickle, John (2003). Philosophy and Neuroscience. Dordrecht: Springer Netherlands. doi:10.1007/978-94-010-0237-0. ISBN 978-1-4020-1302-7. Archived from the original on 2024-09-25. Retrieved 2023-06-24.
- Schlagel, R. H. (1999). "Why not artificial consciousness or thought?". Minds and Machines. 9 (1): 3–28. doi:10.1023/a:1008374714117. S2CID 28845966.
- Searle, J. R. (1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/s0140525x00005756. S2CID 55303721. Archived (PDF) from the original on 2019-03-17. Retrieved 2019-01-28.
- Buttazzo, G. (2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500. Archived from the original on 2024-09-25. Retrieved 2024-07-31.
- Putnam, Hilary (1967). The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion. University of Pittsburgh Press.
- Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". Conscious Experience.
- David J. Chalmers (2011). "A Computational Foundation for the Study of Cognition" (PDF). Journal of Cognitive Science. 12 (4): 325–359. doi:10.17791/JCS.2011.12.4.325. S2CID 248401010. Archived (PDF) from the original on 2023-11-23. Retrieved 2023-06-24.
- "An Introduction to the Problems of AI Consciousness". The Gradient. 2023-09-30. Retrieved 2024-10-05.
- "'I am, in fact, a person': can artificial intelligence ever be sentient?". the Guardian. 14 August 2022. Archived from the original on 25 September 2024. Retrieved 5 January 2023.
- Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Archived from the original on 5 January 2023. Retrieved 5 January 2023.
- Véliz, Carissa (2016-04-14). "The Challenge of Determining Whether an A.I. Is Sentient". Slate. ISSN 1091-2339. Retrieved 2024-10-05.
- Birch, Jonathan (July 2024). "Large Language Models and the Gaming Problem". The Edge of Sentience. Oxford University Press. pp. 313–322. doi:10.1093/9780191966729.003.0017. ISBN 978-0-19-196672-9.
- Agüera y Arcas, Blaise; Norvig, Peter (October 10, 2023). "Artificial General Intelligence Is Already Here". Noéma.
- Kirk-Giannini, Cameron Domenico; Goldstein, Simon (2023-10-16). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Archived from the original on 2024-09-25. Retrieved 2024-08-18.
- Victor Argonov (2014). "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach". Journal of Mind and Behavior. 35: 51–70. Archived from the original on 2016-10-20. Retrieved 2016-12-06.
- "Should Robots With Artificial Intelligence Have Moral or Legal Rights?". The Wall Street Journal. April 10, 2023.
- Bostrom, Nick (2024). Deep utopia: life and meaning in a solved world. Washington, DC: Ideapress Publishing. p. 82. ISBN 978-1-64687-164-3.
- Sebo, Jeff; Long, Robert (11 December 2023). "Moral Consideration for AI Systems by 2030" (PDF). AI and Ethics. doi:10.1007/s43681-023-00379-1.
- Metzinger, Thomas (2021). "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness. 08: 43–66. doi:10.1142/S270507852150003X. S2CID 233176465.
- Chalmers, David J. (August 9, 2023). "Could a Large Language Model Be Conscious?". Boston Review.
- Tkachenko, Yegor (2024). "Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI". Proceedings of the 41st International Conference on Machine Learning. PMLR. Archived from the original on 2024-06-10. Retrieved 2024-06-11.
- Baars 1995.
- Aleksander, Igor (1995a). "Artificial neuroconsciousness an update". In Mira, José; Sandoval, Francisco (eds.). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224. ISBN 978-3-540-49288-7. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Seth, Anil. "Consciousness". New Scientist. Archived from the original on 2024-09-14. Retrieved 2024-09-05.
- Nosta, John (December 18, 2023). "Should Artificial Intelligence Have Rights?". Psychology Today. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- Proust, Joëlle (2000), in Metzinger, Thomas (ed.), Neural Correlates of Consciousness, MIT Press, pp. 307–324
- Koch, Christof (2004), The Quest for Consciousness, p. 2, footnote 2
- Tulving, E. 1985. Memory and consciousness. Canadian Psychology 26:1–12
- Franklin, Stan, et al. "The role of consciousness in memory." Brains, Minds and Media 1.1 (2005): 38.
- Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating." Proc. Developmental Robotics AAAI Spring Symp. 2005.
- Shastri, L. 2002. Episodic memory and cortico-hippocampal interactions. Trends in Cognitive Sciences
- Kanerva, Pentti. Sparse distributed memory. MIT press, 1988.
- "Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making". Routledge & CRC Press. Archived from the original on 2023-06-22. Retrieved 2023-06-22.
- Aleksander 1995.
- "Functionalism". Stanford Encyclopedia of Philosophy. Archived from the original on 2021-04-18. Retrieved 2024-09-08.
- "Survey Results | Consciousness: identity theory, panpsychism, eliminativism, dualism, or functionalism?". PhilPapers. 2020.
- Butlin, Patrick; Long, Robert; Elmoznino, Eric; Bengio, Yoshua; Birch, Jonathan; Constant, Axel; Deane, George; Fleming, Stephen M.; Frith, Chris; Ji, Xu; Kanai, Ryota; Klein, Colin; Lindsay, Grace; Michel, Matthias; Mudrik, Liad; Peters, Megan A. K.; Schwitzgebel, Eric; Simon, Jonathan; VanRullen, Rufin (2023). "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness". arXiv:2308.08708 [cs.AI].
- Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press. p. 345. ISBN 0521427436. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- Travers, Mark (October 11, 2023). "Are We Ditching the Most Popular Theory of Consciousness?". Psychology Today. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- "Higher-Order Theories of Consciousness". Stanford Encyclopedia of Philosophy. 15 Aug 2011. Archived from the original on 14 May 2008. Retrieved 31 August 2014.
- Graziano, Michael (1 January 2011). "Human consciousness and its relationship to social neuroscience: A novel hypothesis". Cognitive Neuroscience. 2 (2): 98–113. doi:10.1080/17588928.2011.565121. PMC 3223025. PMID 22121395.
- Franklin, Stan (January 2003). "IDA: A conscious artifact?". Journal of Consciousness Studies. Archived from the original on 2020-07-03. Retrieved 2024-08-25.
- J. Baars, Bernard; Franklin, Stan (2009). "Consciousness is computational: The Lida model of global workspace theory". International Journal of Machine Consciousness. 01: 23–32. doi:10.1142/S1793843009000050.
- Sun 2002.
- Haikonen, Pentti O. (2003). The cognitive approach to conscious machines. Exeter: Imprint Academic. ISBN 978-0-907845-42-3.
- "Pentti Haikonen's architecture for conscious machines – Raúl Arrabales Moreno". 2019-09-08. Archived from the original on 2024-09-25. Retrieved 2023-06-24.
- Freeman, Walter J. (2000). How brains make up their minds. Maps of the mind. New York; Chichester, West Sussex: Columbia University Press. ISBN 978-0-231-12008-1.
- Cotterill, Rodney M. J. (2003). "CyberChild - A simulation test-bed for consciousness studies". Journal of Consciousness Studies. 10 (4–5): 31–45. ISSN 1355-8250. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- Haikonen, Pentti O. (2019). Consciousness and robot sentience. Series on machine consciousness (2nd ed.). Singapore; Hackensack, NJ; London: World Scientific. ISBN 978-981-12-0504-0.
- Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition. 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. ISSN 1053-8100. PMID 16384715. S2CID 5437155.
- Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). "chapter 20". Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- Thaler, S.L., "Device for the autonomous generation of useful information"
- Marupaka, N.; Iyer, L.; Minai, A. (2012). "Connectivity and thought: The influence of semantic network structure in a neurodynamical model of thinking" (PDF). Neural Networks. 32: 147–158. doi:10.1016/j.neunet.2012.02.004. PMID 22397950. Archived from the original (PDF) on 2016-12-19. Retrieved 2015-05-22.
- Roque, R. and Barreira, A. (2011). "O Paradigma da "Máquina de Criatividade" e a Geração de Novidades em um Espaço Conceitual," 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.
- Minati, Gianfranco; Vitiello, Giuseppe (2006). "Mistake Making Machines". Systemics of Emergence: Research and Development. pp. 67–78. doi:10.1007/0-387-28898-8_4. ISBN 978-0-387-28899-4.
- Thaler, S. L. (2013) The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship Archived 2016-04-29 at the Wayback Machine, (ed.) E.G. Carayannis, Springer Science+Business Media
- Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers
- Thaler, S. L. (2014). "Synaptic Perturbation and Consciousness". Int. J. Mach. Conscious. 6 (2): 75–107. doi:10.1142/S1793843014400137.
- Thaler, S. L. (1995). ""Virtual Input Phenomena" Within the Death of a Simple Pattern Associator". Neural Networks. 8 (1): 55–65. doi:10.1016/0893-6080(94)00065-t.
- Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995
- Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks (WCNN'96), Lawrence Erlbaum, Mahwah, NJ.
- Mayer, H. A. (2004). A modular neurocontroller for creative mobile autonomous robots learning by temporal difference Archived 2015-07-08 at the Wayback Machine, Systems, Man and Cybernetics, 2004 IEEE International Conference, Vol. 6.
- Pavlus, John (11 July 2019). "Curious About Consciousness? Ask the Self-Aware Machines". Quanta Magazine. Archived from the original on 2021-01-17. Retrieved 2021-01-06.
- Bongard, Josh, Victor Zykov, and Hod Lipson. "Resilient machines through continuous self-modeling." Science 314.5802 (2006): 1118–1121.
- Wodinsky, Shoshana (2022-06-18). "The 11 Best (and Worst) Sentient Robots From Sci-Fi". Gizmodo. Archived from the original on 2023-11-13. Retrieved 2024-08-17.
- Sokolowski, Rachael (2024-05-01). "Star Gazing". Scotsman Guide. Archived from the original on 2024-08-17. Retrieved 2024-08-17.
- Bloom, Paul; Harris, Sam (2018-04-23). "Opinion | It's Westworld. What's Wrong With Cruelty to Robots?". The New York Times. ISSN 0362-4331. Archived from the original on 2024-08-17. Retrieved 2024-08-17.
- Egan, Greg (July 1990). Learning to Be Me. TTA Press.
- Shah, Salik (2020-04-08). "Why Greg Egan Is Science Fiction's Next Superstar". Reactor. Archived from the original on 2024-05-16. Retrieved 2024-08-17.
Bibliography
- Aleksander, Igor (1995), Artificial Neuroconsciousness: An Update, IWANN, archived from the original on 1997-03-02
- Armstrong, David (1968), A Materialist Theory of Mind, Routledge
- Arrabales, Raul (2009), "Establishing a Roadmap and Metrics for Conscious Machines Development" (PDF), Proceedings of the 8th IEEE International Conference on Cognitive Informatics, Hong Kong: 94–101, archived from the original (PDF) on 2011-07-21
- Baars, Bernard J. (1995), A cognitive theory of consciousness (Reprinted ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-30133-6
- Baars, Bernard J. (1997), In the Theater of Consciousness, New York, NY: Oxford University Press, ISBN 978-0-19-510265-9
- Bickle, John (2003), Philosophy and Neuroscience: A Ruthless Reductive Account, New York, NY: Springer-Verlag
- Block, Ned (1978), "Troubles for Functionalism", Minnesota Studies in the Philosophy of Science 9: 261–325
- Block, Ned (1997), On a confusion about a function of consciousness in Block, Flanagan and Guzeldere (eds.) The Nature of Consciousness: Philosophical Debates, MIT Press
- Boyles, Robert James M. (2012), Artificial Qualia, Intentional Systems and Machine Consciousness (PDF), Proceedings of the Research@DLSU Congress 2012: Science and Technology Conference, ISSN 2012-3477, archived (PDF) from the original on 2016-10-11, retrieved 2016-09-09
- Chalmers, David (1996), The Conscious Mind, Oxford University Press, ISBN 978-0-19-510553-7
- Chalmers, David (2011), "A Computational Foundation for the Study of Cognition", Journal of Cognitive Science, Seoul, Republic of Korea: 323–357, archived from the original on 2015-12-23
- Cleeremans, Axel (2001), Implicit learning and consciousness (PDF), archived from the original (PDF) on 2012-09-07, retrieved 2004-11-30
- Cotterill, Rodney (2003), Holland, Owen (ed.), "Cyberchild: a Simulation Test-Bed for Consciousness Studies", Journal of Consciousness Studies, 10 (4–5), Exeter, UK: Imprint Academic: 31–45, archived from the original on 2018-11-22, retrieved 2018-11-22
- Doan, Trung (2009), Pentti Haikonen's architecture for conscious machines, archived from the original on 2009-12-15
- Ericsson-Zenith, Steven (2010), Explaining Experience In Nature, Sunnyvale, CA: Institute for Advanced Science & Engineering, archived from the original on 2019-04-01, retrieved 2019-10-04
- Franklin, Stan (1995), Artificial Minds, Boston, MA: MIT Press, ISBN 978-0-262-06178-0
- Franklin, Stan (2003), "IDA: A Conscious Artefact", in Holland, Owen (ed.), Machine Consciousness, Exeter, UK: Imprint Academic
- Freeman, Walter (1999), How Brains make up their Minds, London, UK: Phoenix, ISBN 978-0-231-12008-1
- Gamez, David (2008), "Progress in machine consciousness", Consciousness and Cognition, 17 (3): 887–910, doi:10.1016/j.concog.2007.04.005, PMID 17572107, S2CID 3569852
- Graziano, Michael (2013), Consciousness and the Social Brain, Oxford University Press, ISBN 978-0199928644
- Haikonen, Pentti (2003), The Cognitive Approach to Conscious Machines, Exeter, UK: Imprint Academic, ISBN 978-0-907845-42-3
- Haikonen, Pentti (2012), Consciousness and Robot Sentience, Singapore: World Scientific, ISBN 978-981-4407-15-1
- Haikonen, Pentti (2019), Consciousness and Robot Sentience: 2nd Edition, Singapore: World Scientific, ISBN 978-981-120-504-0
- Koch, Christof (2004), The Quest for Consciousness: A Neurobiological Approach, Pasadena, CA: Roberts & Company Publishers, ISBN 978-0-9747077-0-9
- Lewis, David (1972), "Psychophysical and theoretical identifications", Australasian Journal of Philosophy, 50 (3): 249–258, doi:10.1080/00048407212341301
- Putnam, Hilary (1967), The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion, University of Pittsburgh Press
- Reggia, James (2013), "The rise of machine consciousness: Studying consciousness with computational models", Neural Networks, 44: 112–131, doi:10.1016/j.neunet.2013.03.011, PMID 23597599
- Rushby, John; Sanchez, Daniel (2017), Technology and Consciousness Workshops Report (PDF), Menlo Park, CA: SRI International, archived (PDF) from the original on 2024-09-25, retrieved 2022-03-28
- Sanz, Ricardo; López, I; Rodríguez, M; Hernández, C (2007), "Principles for consciousness in integrated cognitive control" (PDF), Neural Networks, 20 (9): 938–946, doi:10.1016/j.neunet.2007.09.012, PMID 17936581, archived (PDF) from the original on 2017-09-22, retrieved 2018-04-20
- Searle, John (2004), Mind: A Brief Introduction, Oxford University Press
- Shanahan, Murray (2006), "A cognitive architecture that combines internal simulation with a global workspace", Consciousness and Cognition, 15 (2): 433–449, doi:10.1016/j.concog.2005.11.005, PMID 16384715, S2CID 5437155
- Sun, Ron (December 1999), "Accounting for the computational basis of consciousness: A connectionist approach", Consciousness and Cognition, 8 (4): 529–565, CiteSeerX 10.1.1.42.2681, doi:10.1006/ccog.1999.0405, PMID 10600249, S2CID 15784914
- Sun, Ron (2001), "Computation, reduction, and teleology of consciousness", Cognitive Systems Research, 1 (4): 241–249, CiteSeerX 10.1.1.20.8764, doi:10.1016/S1389-0417(00)00013-9, S2CID 36892947
- Sun, Ron (2002). Duality of the Mind: A Bottom-up Approach Toward Cognition. Psychology Press. ISBN 978-1-135-64695-0.
- Takeno, Junichi; Inaba, K; Suzuki, T (June 27–30, 2005). "Experiments and examination of mirror image cognition using a small robot". 2005 International Symposium on Computational Intelligence in Robotics and Automation. Espoo Finland: CIRA 2005. pp. 493–498. doi:10.1109/CIRA.2005.1554325. ISBN 978-0-7803-9355-4. S2CID 15400848.
Further reading
- Aleksander, Igor (2017). "Machine Consciousness". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 93–105. doi:10.1002/9781119132363.ch7. ISBN 978-0-470-67406-2.
- Baars, Bernard; Franklin, Stan (2003). "How conscious experience and working memory interact" (PDF). Trends in Cognitive Sciences. 7 (4): 166–172. doi:10.1016/s1364-6613(03)00056-1. PMID 12691765. S2CID 14185056.
- Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998
- Franklin, S.; Baars, B. J.; Ramamurthy, U.; Ventura, Matthew (2005). "The role of consciousness in memory". Brains, Minds and Media. 1: 1–38.
- Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
- McCarthy, John (1971–1987), Generality in Artificial Intelligence, Stanford University.
- Penrose, Roger, The Emperor's New Mind, 1989.
- Sternberg, Eliezer J. (2007) Are You a Machine?: The Brain, the Mind, And What It Means to be Human. Amherst, NY: Prometheus Books.
- Suzuki T., Inaba K., Takeno, Junichi (2005), Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior, (Best Paper of IEA/AIE2005), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005.
- Takeno, Junichi (2006), The Self-Aware Robot -A Response to Reactions to Discovery News-, HRI Press, August 2006.
- Zagal, J.C., Lipson, H. (2009) "Self-Reflection in Evolutionary Robotics", Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009.
External links
- Artefactual consciousness depiction by Professor Igor Aleksander
- FOCS 2009: Manuel Blum – Can (Theoretical Computer) Science come to grips with Consciousness?
- www.Conscious-Robots.com, Machine Consciousness and Conscious Robots Portal.
- Artificial consciousness, artificial consciousness article in everything2.
- Multiple drafts model in Scholarpedia, Daniel Dennett's multiple drafts model.
- Generality in Artificial Intelligence, Generality in Artificial Intelligence by John McCarthy.
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, would instantiate the same mental states, including consciousness.

Thought experiments

David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.

The "fading qualia" is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice this change, which causes a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.

Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip that causes the perception of
blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).

Critics of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.

Controversies

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience. Lemoine's claim was widely derided for being ridiculous. However, while philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of what grounds a person would have for being sure about it. One would have to have access to unpublished information about LaMDA's architecture, and also would have to understand how consciousness works, and then figure out how to map the philosophy onto the machine: "In the absence of these steps, it seems like one should be maybe a little bit uncertain... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."

Testing

Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI,
there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.

A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or central computer within a larger machine is a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it
has often been a theme in fiction. Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes. Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would raise a new group of difficult ethical challenges, with the potential for new forms of injustice. Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids.

Aspects of consciousness

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function,
metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Subjective experience

Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Some authors, however, use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness. AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of experiments of neuroscanning on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined, and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be
conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it. Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred, or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.

Learning

Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable
actions by other entities. Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future, and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also for novel environments that may change, to be executed only when appropriate to simulate and control the real world.

Functionalist theories of consciousness

Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as they instantiate the right functional relationships. Functionalism is particularly popular among philosophers.

A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.
theory
This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of which is unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information, all coordinated through the global workspace.
Higher-order theories of consciousness
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.
Attention schema theory
In 2011, Michael Graziano and Sabine Kastner published a paper named "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema. Graziano went on to publish an expanded discussion of this theory in his book Consciousness and the Social Brain. This "attention schema theory of consciousness", as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body. This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience
and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is, in effect, modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
Implementation proposals
Symbolic or hybrid
Learning Intelligent Distribution Agent
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness, called the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread". Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.
CLARION cognitive architecture
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study in psychological experiments how consciousness might work.
OpenCog
Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.
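The competition-and-broadcast idea shared by LIDA and global workspace theory can be sketched in a few lines of Python. This is only a toy illustration under stated assumptions, not Franklin's LIDA code: the Module, Coalition, and GlobalWorkspace classes and their fields are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Coalition:
    """A chunk of content bidding for conscious broadcast (fields are illustrative)."""
    source: str
    content: str
    activation: float


class Module:
    """A specialized, independently running process, loosely like LIDA's codelets."""

    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts this module has seen

    def propose(self, stimulus):
        """Bid for the workspace if this module has something to contribute."""
        if self.name in stimulus:
            item = stimulus[self.name]
            return Coalition(self.name, item["content"], item["salience"])
        return None

    def receive(self, coalition):
        self.received.append(coalition.content)


class GlobalWorkspace:
    """Hub that selects the most active coalition and broadcasts it to all modules."""

    def __init__(self, modules):
        self.modules = modules

    def cycle(self, stimulus):
        """One simplified cognitive cycle: collect bids, attend, broadcast."""
        bids = [b for m in self.modules if (b := m.propose(stimulus)) is not None]
        if not bids:
            return None
        winner = max(bids, key=lambda b: b.activation)  # attention as competition
        for m in self.modules:                          # global broadcast
            m.receive(winner)
        return winner
```

With vision, language, and memory modules, a highly salient visual input wins the competition and its content becomes available to every module, mirroring the word-reading example given for the global workspace above.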
Connectionist
Haikonen's cognitive architecture
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these". This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs. Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection". Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these views are shared by many. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.
Shanahan's cognitive architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").
Creativity Machine
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into
neural nets so as to induce false memories, or confabulations, that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.
Self-modeling
Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. Self-modeling consists of a robot running an internal model or simulation of itself.
In fiction
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.
In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.
In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape, or murder; the hosts are normally designed not to harm humans.
In Greg Egan's short story "Learning to be me", a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains
it to produce the same outputs. To prevent the mind from deteriorating with age, and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel and remove the brain. The main character, before the surgery, endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.
See also
General fields and theories
Artificial intelligence – Intelligence of machines
Artificial general intelligence (AGI) – some consider AC a subfield of AGI research
Intelligence explosion – what may happen when an AGI redesigns itself in iterative cycles
Brain–computer interface – Direct communication pathway between an enhanced or wired brain and an external device
Cognitive architecture – Blueprint for intelligent agents
Computational philosophy – the area of philosophy in which AI ponder their own place in the world
Computational theory of mind – Family of views in the philosophy of mind
Consciousness in animals – Quality or state of self-awareness within an animal
Simulated consciousness (science fiction) – Science fiction theme
Hardware for artificial intelligence – Hardware specially designed and optimized for artificial intelligence
Identity of indiscernibles – Impossibility for separate objects to have all their properties in common
Mind uploading – Hypothetical process of digitally emulating a brain
Neurotechnology – Technology that interfaces with the nervous system to monitor or modify neural function
Philosophy of mind – Branch of philosophy
Quantum cognition – Application of quantum theory mathematics to cognitive phenomena
Simulated reality – Concept of a false version of reality
Proposed concepts and implementations
Attention schema theory – Theory of consciousness and subjective awareness
Brain waves and Turtle robot by William Grey
Walter
Conceptual space – conceptual prototype
Copycat (cognitive architecture) – AI software
Global workspace theory – Model of consciousness
Greedy reductionism – avoid oversimplifying anything essential
Hallucination (artificial intelligence) – Erroneous material generated by AI
Image schema – spatial patterns
Kismet (robot) – Robot head built by Cynthia Breazeal
LIDA (cognitive architecture) – Artificial model of cognition
Memory-prediction framework – Theory of brain function
Omniscience – Capacity to know everything
Psi-theory – Psychology theory
Quantum mind – Fringe hypothesis
Self-awareness – Capacity for introspection and individuation as a subject
References
Citations
Thaler, S. L. (1998). "The emerging intelligence and its critical look at us". Journal of Near-Death Studies. 17 (1): 21–29. doi:10.1023/A:1022990118714.
Gamez 2008.
Reggia 2013.
Smith, David Harris; Schillaci, Guido (2021). "Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness". Frontiers in Psychology. 12: 530560. doi:10.3389/fpsyg.2021.530560. ISSN 1664-1078. PMC 8096926. PMID 33967869.
Elvidge, Jim (2018). Digital Consciousness: A Transformative Vision. John Hunt Publishing Limited. ISBN 978-1-78535-760-2.
Chrisley, Ron (October 2008). "Philosophical foundations of artificial consciousness". Artificial Intelligence in Medicine. 44 (2): 119–137. doi:10.1016/j.artmed.2008.07.011. PMID 18818062.
"The Terminology of Artificial Sentience". Sentience Institute. Retrieved 2023-08-19.
Kateman, Brian (2023-07-24). "AI Should Be Terrified of Humans". TIME. Retrieved 2024-09-05.
Graziano 2013.
Block, Ned (2010). "On a confusion about a function of consciousness". Behavioral and Brain Sciences. 18 (2): 227–247. doi:10.1017/S0140525X00038188. ISSN 1469-1825.
Block, Ned (1978). "Troubles
for Functionalism". Minnesota Studies in the Philosophy of Science: 261–325.
Bickle, John (2003). Philosophy and Neuroscience. Dordrecht: Springer Netherlands. doi:10.1007/978-94-010-0237-0. ISBN 978-1-4020-1302-7.
Schlagel, R. H. (1999). "Why not artificial consciousness or thought?". Minds and Machines. 9 (1): 3–28. doi:10.1023/a:1008374714117.
Searle, J. R. (1980). "Minds, brains and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/s0140525x00005756.
Buttazzo, G. (2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500.
Putnam, Hilary (1967). "The nature of mental states". In Capitan and Merrill (eds.). Art, Mind and Religion. University of Pittsburgh Press.
Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". Conscious Experience.
Chalmers, David J. (2011). "A Computational Foundation for the Study of Cognition" (PDF). Journal of Cognitive Science. 12 (4): 325–359. doi:10.17791/JCS.2011.12.4.325.
"An Introduction to the Problems of AI Consciousness". The Gradient. 2023-09-30. Retrieved 2024-10-05.
"'I am, in fact, a person': can artificial intelligence ever be sentient?". The Guardian. 14 August 2022. Retrieved 5 January 2023.
Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Retrieved 5 January 2023.
Véliz, Carissa (2016-04-14). "The Challenge of Determining Whether an A.I. Is Sentient". Slate. ISSN 1091-2339. Retrieved 2024-10-05.
Birch, Jonathan (July 2024). "Large Language Models and the Gaming Problem". The Edge of Sentience. Oxford University Press. pp. 313–322. doi:10.1093/9780191966729.003.0017. ISBN 978-0-19-196672-9.
Agüera y Arcas, Blaise; Norvig, Peter (October 10,
2023). "Artificial General Intelligence Is Already Here". Noema.
Kirk-Giannini, Cameron Domenico; Goldstein, Simon (2023-10-16). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Retrieved 2024-08-18.
Argonov, Victor (2014). "Experimental Methods for Unraveling the Mind–body Problem: The Phenomenal Judgment Approach". Journal of Mind and Behavior. 35: 51–70.
"Should Robots With Artificial Intelligence Have Moral or Legal Rights?". The Wall Street Journal. April 10, 2023.
Bostrom, Nick (2024). Deep Utopia: Life and Meaning in a Solved World. Washington, DC: Ideapress Publishing. p. 82. ISBN 978-1-64687-164-3.
Sebo, Jeff; Long, Robert (11 December 2023). "Moral Consideration for AI Systems by 2030" (PDF). AI and Ethics. doi:10.1007/s43681-023-00379-1.
Metzinger, Thomas (2021). "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness. 8: 43–66. doi:10.1142/S270507852150003X.
Chalmers, David J. (August 9, 2023). "Could a Large Language Model Be Conscious?". Boston Review.
Tkachenko, Yegor (2024). "Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI". Proceedings of the 41st International Conference on Machine Learning. PMLR.
Baars 1995.
Aleksander, Igor (1995a). "Artificial neuroconsciousness: an update". In Mira, José; Sandoval, Francisco (eds.). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224. ISBN 978-3-540-49288-7.
Seth, Anil. "Consciousness". New Scientist. Retrieved 2024-09-05.
Nosta, John (December 18, 2023). "Should Artificial Intelligence Have Rights?". Psychology Today. Archived from the original on
2024-09-25. Retrieved 2024-09-05.
Proust, Joëlle. In Neural Correlates of Consciousness. Thomas Metzinger (ed.), 2000, MIT, pages 307–324.
Koch, Christof. The Quest for Consciousness. 2004, page 2, footnote 2.
Tulving, E. (1985). "Memory and consciousness". Canadian Psychology. 26: 1–12.
Franklin, Stan, et al. "The role of consciousness in memory". Brains, Minds and Media. 1 (1) (2005): 38.
Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating". Proc. Developmental Robotics AAAI Spring Symp. 2005.
Shastri, L. (2002). "Episodic memory and cortico-hippocampal interactions". Trends in Cognitive Sciences.
Kanerva, Pentti. Sparse Distributed Memory. MIT Press, 1988.
"Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making". Routledge & CRC Press. Retrieved 2023-06-22.
Aleksander 1995.
"Functionalism". Stanford Encyclopedia of Philosophy. Retrieved 2024-09-08.
"Survey Results: Consciousness: identity theory, panpsychism, eliminativism, dualism, or functionalism?". PhilPapers. 2020.
Butlin, Patrick; Long, Robert; Elmoznino, Eric; Bengio, Yoshua; Birch, Jonathan; Constant, Axel; Deane, George; Fleming, Stephen M.; Frith, Chris; Ji, Xu; Kanai, Ryota; Klein, Colin; Lindsay, Grace; Michel, Matthias; Mudrik, Liad; Peters, Megan A. K.; Schwitzgebel, Eric; Simon, Jonathan; VanRullen, Rufin (2023). "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness". arXiv:2308.08708 [cs.AI].
Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press. p. 345. ISBN 0521427436.
Travers, Mark (October 11, 2023). "Are We Ditching the Most Popular Theory of Consciousness?". Psychology Today. Retrieved 2024-09-05.
"Higher-Order Theories of Consciousness". Stanford Encyclopedia of Philosophy. 15 Aug 2011. Retrieved 31 August 2014.
Graziano, Michael (1 January 2011). "Human consciousness and its
relationship to social neuroscience: A novel hypothesis". Cognitive Neuroscience. 2 (2): 98–113. doi:10.1080/17588928.2011.565121. PMC 3223025. PMID 22121395.
Franklin, Stan (January 2003). "IDA: A conscious artifact?". Journal of Consciousness Studies.
Baars, Bernard J.; Franklin, Stan (2009). "Consciousness is computational: The LIDA model of global workspace theory". International Journal of Machine Consciousness. 1: 23–32. doi:10.1142/S1793843009000050.
Sun 2002.
Haikonen, Pentti O. (2003). The Cognitive Approach to Conscious Machines. Exeter: Imprint Academic. ISBN 978-0-907845-42-3.
"Pentti Haikonen's architecture for conscious machines". Raúl Arrabales Moreno. 2019-09-08. Retrieved 2023-06-24.
Freeman, Walter J. (2000). How Brains Make Up Their Minds. Maps of the Mind. New York; Chichester, West Sussex: Columbia University Press. ISBN 978-0-231-12008-1.
Cotterill, Rodney M. J. (2003). "CyberChild: A simulation test-bed for consciousness studies". Journal of Consciousness Studies. 10 (4–5): 31–45. ISSN 1355-8250.
Haikonen, Pentti O. (2012). Consciousness and Robot Sentience. Series on Machine Consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
Haikonen, Pentti O. (2019). Consciousness and Robot Sentience. Series on Machine Consciousness (2nd ed.). Singapore; Hackensack, NJ; London: World Scientific. ISBN 978-981-12-0504-0.
Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition. 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. ISSN 1053-8100. PMID 16384715.
Haikonen, Pentti O. (2012). Chapter 20. Consciousness and Robot Sentience. Series on Machine Consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
Thaler, S. L. "Device for the autonomous generation of useful information".
Marupaka, N.; Iyer, L.; Minai, A. (2012). "Connectivity and thought: The influence of
semantic network structure in a neurodynamical model of thinking" (PDF). Neural Networks. 32: 147–158. doi:10.1016/j.neunet.2012.02.004. PMID 22397950.
Roque, R. and Barreira, A. (2011). "O Paradigma da Máquina de Criatividade e a Geração de Novidades em um Espaço Conceitual". 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.
Minati, Gianfranco; Vitiello, Giuseppe (2006). "Mistake Making Machines". Systemics of Emergence: Research and Development. pp. 67–78. doi:10.1007/0-387-28898-8_4. ISBN 978-0-387-28899-4.
Thaler, S. L. (2013). "The Creativity Machine Paradigm". Encyclopedia of Creativity, Invention, Innovation and Entrepreneurship. Ed. E. G. Carayannis. Springer Science+Business Media.
Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness". APA Newsletter on Philosophy and Computers.
Thaler, S. L. (2014). "Synaptic Perturbation and Consciousness". Int. J. Mach. Conscious. 6 (2): 75–107. doi:10.1142/S1793843014400137.
Thaler, S. L. (1995). "'Virtual Input' Phenomena Within the Death of a Simple Pattern Associator". Neural Networks. 8 (1): 55–65. doi:10.1016/0893-6080(94)00065-t.
Thaler, S. L. (1995). "Death of a gedanken creature". Journal of Near-Death Studies. 13 (3), Spring 1995.
Thaler, S. L. (1996). "Is Neuronal Chaos the Source of Stream of Consciousness?". In Proceedings of the World Congress on Neural Networks (WCNN'96). Lawrence Erlbaum, Mahwah, NJ.
Mayer, H. A. (2004). "A modular neurocontroller for creative mobile autonomous robots learning by temporal difference". Systems, Man and Cybernetics, 2004 IEEE International Conference. Volume 6.
Pavlus, John (11 July 2019). "Curious About Consciousness? Ask the Self-Aware Machines". Quanta Magazine. Retrieved 2021-01-06.
Bongard, Josh; Zykov, Victor; Lipson, Hod. "Resilient machines through continuous self-modeling". Science 314.5802 (2006): 1118–1121.
Wodinsky, Shoshana (2022-06-18). "The 11 Best and Worst Sentient Robots
From Sci-Fi". Gizmodo. Retrieved 2024-08-17.
Sokolowski, Rachael (2024-05-01). "Star Gazing". Scotsman Guide. Retrieved 2024-08-17.
Bloom, Paul; Harris, Sam (2018-04-23). "Opinion: It's Westworld. What's Wrong With Cruelty to Robots?". The New York Times. ISSN 0362-4331. Retrieved 2024-08-17.
Egan, Greg (July 1990). "Learning to Be Me". TTA Press.
Shah, Salik (2020-04-08). "Why Greg Egan Is Science Fiction's Next Superstar". Reactor. Retrieved 2024-08-17.
Bibliography
Aleksander, Igor (1995). "Artificial Neuroconsciousness: An Update". IWANN.
Armstrong, David (1968). A Materialist Theory of Mind. Routledge.
Arrabales, Raúl (2009). "Establishing a Roadmap and Metrics for Conscious Machines Development" (PDF). Proceedings of the 8th IEEE International Conference on Cognitive Informatics. Hong Kong: 94–101.
Baars, Bernard J. (1995). A Cognitive Theory of Consciousness (Reprinted ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-30133-6.
Baars, Bernard J. (1997). In the Theater of Consciousness. New York, NY: Oxford University Press. ISBN 978-0-19-510265-9.
Bickle, John (2003). Philosophy and Neuroscience: A Ruthless Reductive Account. New York, NY: Springer-Verlag.
Block, Ned (1978). "Troubles for Functionalism". Minnesota Studies in the Philosophy of Science. 9: 261–325.
Block, Ned (1997). "On a confusion about a function of consciousness". In Block, Flanagan and Güzeldere (eds.). The Nature of Consciousness: Philosophical Debates. MIT Press.
Boyles, Robert James M. (2012). "Artificial Qualia, Intentional Systems and Machine Consciousness" (PDF). Proceedings of the Research@DLSU Congress 2012: Science and Technology Conference. ISSN 2012-3477.
Chalmers, David (1996). The Conscious Mind. Oxford University Press. ISBN 978-0-19-510553-7.
Chalmers, David (2011). "A Computational Foundation for the
Study of Cognition". Journal of Cognitive Science. Seoul, Republic of Korea: 323–357.
Cleeremans, Axel (2001). "Implicit learning and consciousness" (PDF).
Cotterill, Rodney (2003). Holland, Owen (ed.). "Cyberchild: a Simulation Test-Bed for Consciousness Studies". Journal of Consciousness Studies. 10 (4–5). Exeter, UK: Imprint Academic: 31–45.
Doan, Trung (2009). "Pentti Haikonen's architecture for conscious machines".
Ericsson-Zenith, Steven (2010). Explaining Experience in Nature. Sunnyvale, CA: Institute for Advanced Science & Engineering.
Franklin, Stan (1995). Artificial Minds. Boston, MA: MIT Press. ISBN 978-0-262-06178-0.
Franklin, Stan (2003). "IDA: A Conscious Artefact?". In Holland, Owen (ed.). Machine Consciousness. Exeter, UK: Imprint Academic.
Freeman, Walter (1999). How Brains Make Up Their Minds. London, UK: Phoenix. ISBN 978-0-231-12008-1.
Gamez, David (2008). "Progress in machine consciousness". Consciousness and Cognition. 17 (3): 887–910. doi:10.1016/j.concog.2007.04.005. PMID 17572107.
Graziano, Michael (2013). Consciousness and the Social Brain. Oxford University Press. ISBN 978-0199928644.
Haikonen, Pentti (2003). The Cognitive Approach to Conscious Machines. Exeter, UK: Imprint Academic. ISBN 978-0-907845-42-3.
Haikonen, Pentti (2012). Consciousness and Robot Sentience. Singapore: World Scientific. ISBN 978-981-4407-15-1.
Haikonen, Pentti (2019). Consciousness and Robot Sentience (2nd ed.). Singapore: World Scientific. ISBN 978-981-120-504-0.
Koch, Christof (2004). The Quest for Consciousness: A Neurobiological Approach. Pasadena, CA: Roberts & Company Publishers. ISBN 978-0-9747077-0-9.
Lewis, David (1972). "Psychophysical and theoretical identifications". Australasian Journal of Philosophy. 50 (3): 249–258. doi:10.1080/00048407212341301.
Putnam, Hilary (1967). "The nature of mental states". In Capitan and Merrill
(eds.). Art, Mind and Religion. University of Pittsburgh Press.
Reggia, James (2013). "The rise of machine consciousness: Studying consciousness with computational models". Neural Networks. 44: 112–131. doi:10.1016/j.neunet.2013.03.011. PMID 23597599.
Rushby, John; Sanchez, Daniel (2017). Technology and Consciousness Workshops Report (PDF). Menlo Park, CA: SRI International.
Sanz, Ricardo; López, I.; Rodríguez, M.; Hernández, C. (2007). "Principles for consciousness in integrated cognitive control" (PDF). Neural Networks. 20 (9): 938–946. doi:10.1016/j.neunet.2007.09.012. PMID 17936581.
Searle, John (2004). Mind: A Brief Introduction. Oxford University Press.
Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition. 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. PMID 16384715.
Sun, Ron (December 1999). "Accounting for the computational basis of consciousness: A connectionist approach". Consciousness and Cognition. 8 (4): 529–565. doi:10.1006/ccog.1999.0405. PMID 10600249.
Sun, Ron (2001). "Computation, reduction, and teleology of consciousness". Cognitive Systems Research. 1 (4): 241–249. doi:10.1016/S1389-0417(00)00013-9.
Sun, Ron (2002). Duality of the Mind: A Bottom-up Approach Toward Cognition. Psychology Press. ISBN 978-1-135-64695-0.
Takeno, Junichi; Inaba, K.; Suzuki, T. (June 27–30, 2005). "Experiments and examination of mirror image cognition using a small robot". 2005 International Symposium on Computational Intelligence in Robotics and Automation. Espoo, Finland: CIRA 2005. pp. 493–498. doi:10.1109/CIRA.2005.1554325. ISBN 978-0-7803-9355-4.
Further reading
Aleksander, Igor (2017). "Machine Consciousness". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 93–105. doi:10.1002/9781119132363.ch7. ISBN 978-0-470-67406-2.
Baars,
Bernard; Franklin, Stan (2003). "How conscious experience and working memory interact" (PDF). Trends in Cognitive Sciences. 7 (4): 166–172. doi:10.1016/s1364-6613(03)00056-1. PMID 12691765.
Casti, John L. The Cambridge Quintet: A Work of Scientific Speculation. Perseus Books Group, 1998.
Franklin, S.; Baars, B. J.; Ramamurthy, U.; Ventura, Matthew (2005). "The role of consciousness in memory". Brains, Minds and Media. 1 (1): 38 (PDF).
Haikonen, Pentti (2004). "Conscious Machines and Machine Emotions". Presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
McCarthy, John (1971–1987). "Generality in Artificial Intelligence". Stanford University.
Penrose, Roger. The Emperor's New Mind. 1989.
Sternberg, Eliezer J. (2007). Are You a Machine? The Brain, the Mind, and What It Means to Be Human. Amherst, NY: Prometheus Books.
Suzuki, T.; Inaba, K.; Takeno, Junichi (2005). "Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior". Best Paper of IEA/AIE2005. Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. pp. 101–110. IEA/AIE 2005, Bari, Italy, June 22–24, 2005.
Takeno, Junichi (2006). The Self-Aware Robot: A Response to Reactions to Discovery News. HRI Press, August 2006.
Zagal, J. C.; Lipson, H. (2009). "Self-Reflection in Evolutionary Robotics". Proceedings of the Genetic and Evolutionary Computation Conference. pp. 2179–2188. GECCO 2009.
External links
"Artefactual consciousness" depiction by Professor Igor Aleksander
FOCS 2009: Manuel Blum – "Can (Theoretical Computer) Science come to grips with Consciousness?"
www.Conscious-Robots.com – Machine Consciousness and Conscious Robots Portal
"Artificial consciousness" article in Everything2
"Multiple drafts model" in Scholarpedia – Daniel Dennett's multiple drafts model
"Generality in Artificial Intelligence" by John McCarthy