
Multiple Draft Machines

Permanent Link: http://ncf.sobek.ufl.edu/NCFE004515/00001

Material Information

Title: Multiple Draft Machines: An Investigation of the Implications of Dennett's Multiple Drafts Model of Consciousness for Artificial Intelligence Research
Physical Description: Book
Language: English
Creator: Julius, Niklaus
Publisher: New College of Florida
Place of Publication: Sarasota, Fla.
Creation Date: 2011
Publication Date: 2011

Subjects

Subjects / Keywords: Artificial Intelligence
Consciousness
Multiple Drafts
Genre: bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: This thesis discusses Dennett's Multiple Drafts model of consciousness with regard to the implications for Artificial Intelligence research. The current state of AI research is discussed, and possible future questions are suggested. Possibilities for achieving an Artificial Consciousness are considered, and an attempt is made to precisely define consciousness as opposed to intelligence. Finally, the possibility of an Artificial Consciousness is established, and unanswered questions are aired for consideration.
Statement of Responsibility: by Niklaus Julius
Thesis: Thesis (B.A.) -- New College of Florida, 2011
Electronic Access: RESTRICTED TO NCF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE
Bibliography: Includes bibliographical references.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The New College of Florida, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Local: Faculty Sponsor: Edidin, Aron

Record Information

Source Institution: New College of Florida
Holding Location: New College of Florida
Rights Management: Applicable rights reserved.
Classification: local - S.T. 2011 J94
System ID: NCFE004515:00001

Full Text

MULTIPLE DRAFT MACHINES
An Investigation of the Implications of Dennett's Multiple Drafts model of Consciousness for Artificial Intelligence research.

BY NIKLAUS C. JULIUS

A Thesis
Submitted to the Division of Humanities
New College of Florida
in partial fulfillment of the requirements for the degree
Bachelor of Arts
Under the sponsorship of Dr. Aron Edidin

Sarasota, Florida
May, 2010

Table of Contents

Abstract ... p. iii

Chapter 1: Introduction ... p. 1
Dennett's Multiple Drafts model of consciousness differs importantly from previous models of consciousness, and these differences render it a model in whose light the discussion of strong AI makes the most sense.

Chapter 2: The History of AI ... p. 5
The current state of AI research is mostly the result of the pursuit of weak AI, rather than strong AI, and so does not clearly bear on the possibility of strong AI – but the progress made has itself illuminated models of consciousness and particularly fruitful avenues of further research.

Chapter 3: Towards Artificial Consciousness ... p. 18
Consciousness and Intelligence, as terms, overlap a fair bit, somewhat as a result of the successes of AI in fields like game playing and formal reasoning, but there remain crucial distinctions to be made between an Artificial Intelligence and an Artificial Consciousness.

Chapter 4: Consciousness vs Intelligence ... p. 28
These differences also hold in discussions of a more philosophical bent, mostly turning on the issue of subjective conscious experience (and hence, qualia). Dennett makes some fairly radical moves to clear up the confusion, in the process illuminating what is left over as the really important difference.

Chapter 5: Human Consciousness – The Multiple Drafts Model ... p. 40
Given Dennett's explanation of consciousness mostly in terms of intelligence, the Multiple Drafts model has a great deal to offer to the possibility of a strong AI – mostly because Dennett is convinced that explaining consciousness consists only of explaining the behaviors we call conscious.

Chapter 6: Conclusions ... p. 53
Due to Dennett's functionalist approach, the Multiple Drafts model is quite friendly to AI in general – but it leaves unanswered a number of important questions as to what kind of consciousness a strong AI would be while answering the root question of possibility.

Bibliography/Works Cited ... p. 73

MULTIPLE DRAFT MACHINES
An Investigation of the Implications of Dennett's Multiple Drafts Model of Consciousness for Artificial Intelligence research.

Niklaus C. Julius
New College of Florida, 2010

ABSTRACT

This thesis discusses Dennett's Multiple Drafts model of consciousness with regard to the implications for Artificial Intelligence research. The current state of AI research is discussed, and possible future questions are suggested. Possibilities for achieving an Artificial Consciousness are considered, and an attempt is made to precisely define consciousness as opposed to intelligence. Finally, the possibility of an Artificial Consciousness is established, and unanswered questions are aired for consideration.

Approved by Thesis Adviser: ______________________________
Division of Humanities

Introduction

This thesis aims to ask, and answer, whether there could be a machine consciousness which worked along the lines of Dennett's Multiple Drafts model, and to ask what the answer to that question means with regard to human and artificial consciousness.

Dennett's Multiple Drafts model is a relatively recently proposed physicalist theory of how consciousness works, explained in great detail in Consciousness Explained. Based largely upon cognitivist views, the central point of the Multiple Drafts model is that there is no "central point of consciousness" – what Dennett calls a Cartesian Theater, where conscious experience occurs [1]. Dennett proposes, rather, that there are "various events of content-fixation occurring in various places at various times in the brain" [2], and suggests consciousness may be a virtual serial process within the brain's parallelism, similar to a serial von Neumann machine being simulated on a parallel machine [3].

The von Neumann architecture is the "traditional" architecture for computing. All current personal computers are examples of von Neumann machines – they possess a Central Processing Unit which can only perform one task at a time, but it can perform many millions of tasks per second. The tasks it performs are, at root, extremely simple, but with complex combinations of these tasks it can perform any algorithmic computing task (in theory).

[1] Daniel Dennett, Consciousness Explained, Black Bay Books, 1991, p. 106
[2] Ibid., p. 365
[3] Ibid., pp. 210-211

More recent PCs have "multi-core" CPUs, which technically makes the whole machine no longer serial, but the "cores" are essentially individual CPUs in themselves, so the architecture remains the same. Parallel machines, by contrast, have many "processing units", all of which operate in parallel to perform computations. The brain is the prototypical example of a parallel architecture, since each neuron is essentially a "processing unit" of limited capacity, but the combination of all the neurons operating in parallel results in a tremendous amount of computing power.

Because Dennett denies the existence of a Cartesian Theater, he must explain consciousness entirely in terms of unconscious events. To use an example he uses in the book, for a particular thought to reach consciousness is akin to a particular person becoming famous – it leaves behind consequences by which it is remembered, and can thus be reported on later. Critics argue that Dennett, in making this argument, ignores the subjective aspect of consciousness, while Dennett argues that the subjective aspect of consciousness as it has been argued by many philosophers does not truly exist. This certainly seems to support the belief that strong Artificial Intelligence (henceforth AI), AI which meet or exceed the capacities of human intelligence, is possible, but is also more honest about the difficulties that still stand in the way of such an achievement.

The lack of a central point of consciousness in the Multiple Drafts model also leads to questions about the strong/weak AI distinction that is used today. Strong AI is defined as AI which matches or exceeds the capabilities of human intelligence as a whole [4] – presumably being conscious, and capable of all the tasks we perform on a daily basis, from the routine to the complex.

[4] Raymond Kurzweil, The Singularity is Near, Viking Press, 2005, p. 92

Weak (or Narrow) AI is AI which matches or exceeds human intelligence in a particular area, such as creating logical proofs, facial recognition, language translation, and so forth [5]. The strong/weak distinction clearly stems from a view of human consciousness that includes a "central point of consciousness" which would be required for an AI to be truly strong, but the Multiple Drafts model points towards what one might think of as a collection of semi-independent weak intelligences amounting to more than the sum of their parts, and thus being a strong intelligence.

The answer to the question of whether a machine implementation of the Multiple Drafts model of consciousness is possible would shed light on the possibility of consciousnesses that are different from our own, at the mechanical level [6] and at the level we see every day – or, indeed, different at both levels. That possibility would shed light on issues of ethics surrounding AI, robotics, and so forth, as well as on the workings of our own consciousness. More directly, the possibility of a "Multiple Drafts machine" would point towards fruitful avenues of AI research (should they exist), and identify avenues unlikely to bear much on successful implementations of AI.

More specific questions within the larger question of the possibility of a Multiple Drafts machine exist as well. Dennett provides an evolutionary account of how the Multiple Drafts model of consciousness might have come to be what it is today, which raises the question of whether a Multiple Drafts machine would have to be evolved from more basic component parts, just as our consciousness was (according to Dennett).

[5] Ibid.
[6] By which I mean the level at which Dennett explains consciousness with the Multiple Drafts model.

In answering that question, one must ask what (if anything) is imparted to our consciousness by the evolutionary process that we cannot merely simulate in AI.

Dennett's model of consciousness is certainly not the only model, nor is it even the most accepted. There exists a great deal of criticism of the model, from nearly all sectors. That said, Dennett has provided us with what is likely the most friendly model to empirical verification and study (indeed, he proposes a number of experiments that could confirm or negate particular parts of his model), has supporting evidence from a number of such studies completed both before and after the publishing of Consciousness Explained, and is as such the most friendly towards development of artificial consciousness. If one decides that, even along Dennett's model, strong AI is impossible, it is highly unlikely that it would be possible along any competing models.

The History of Artificial Intelligence

Artificial Intelligence, as a field of research, essentially began with Alan Turing [7], but the groundwork for it was laid by thinkers as ancient as Aristotle, with his system of formal logic [8]. While there is a long way to go before the field reaches something like the science fiction idea of AI, significant progress has been made – although as much of that progress has been illuminating in that it demonstrates mistakes and misconceptions, as it has by making actual progress towards the end goal.

Early work that bears on AI is mostly in the domain of formalizing reasoning – systems of logic, the idea of algorithms, and so forth. Algorithms are probably the most important feature of this domain – they are by definition finite methods for solving a problem, and are essentially the only way computers can be programmed to solve problems (given current limitations/paradigms of programming). Algorithms are guaranteed, when followed rigorously, to deliver an answer (and always the right one), to finish in a finite number of steps, and to work for all instances of the problem for which they are designed. Such algorithms do not require ingenuity to execute, and so to some extent by definition cannot be the solution to the problem of artificial intelligence (unless, of course, there is an algorithm for ingenuity).

[7] Ben Coppin, Artificial Intelligence Illuminated, Jones and Bartlett Publishers, p. 7
[8] Ibid., p. 6
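The properties just listed (a finite number of steps, guaranteed termination, and a guaranteed correct answer, with no ingenuity needed to execute) can be seen in even the oldest textbook procedures. The sketch below uses Euclid's greatest-common-divisor algorithm, a stock example not drawn from this thesis, purely as an illustration of those properties.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, as an illustration of what makes something an algorithm.

    Each step is mechanical and requires no ingenuity, the loop is guaranteed
    to finish (the second value strictly shrinks towards zero), and the result
    is always the right one for every pair of non-negative integers.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```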

After Aristotle, the field of logic did not see much in the way of notable developments until Leibniz [9], who proposed the use of a formal mathematical language in logical reasoning. While he did not succeed in the creation of such a language, his work laid the foundation for the development of propositional and predicate logic, both of which are extremely important in AI research today.

The other important domain in AI is the development of computing devices, a category into which today's personal computers fall. The most important achievement of these devices so far has been the realization of effective machines to operate algorithms, rigorously and quickly, with more success than human thinkers. Babbage's Analytic Engine was the first of these devices [10] (at least in theory), and as advances in computing progressed so did the scope of the problems which could be solved by computers, but more recently the field of AI has begun to recognize some problems that cannot be solved by a true algorithm, and thus some exploration of other methods of computing has begun. One of the earliest problems of this sort was the problem of optimizing searches [11], which led to the development of heuristics, which can best be described as rules of thumb which will generally (although not always) cut out inefficiencies in search patterns.

Charles Babbage was the first person to design a true computer, which he called the Analytic Engine. In terms of its logical design, it did not differ significantly from the first general-purpose computers, although the technological limitations of its time meant it was a purely mechanical (as opposed to electronic) device.

[9] Ibid., p. 7
[10] Ibid.
[11] Arguably spearheaded by the focus on chess-playing computer programs.

The entire machine has never been built. At roughly the same time, George Boole was inventing Boolean algebra, which provided a way of expressing concepts such as "x is false" or "y is true" in a sufficiently precise way that they can be used in the construction of logic gates, which form the foundation of modern microchips.

Using this foundation, Alan Turing wrote one of the first papers on the topic of AI, in which he proposed the now famous Turing Test, as a way of determining if Artificial Intelligence had truly been achieved. The test proposed a functional definition of AI, rather than one rooted in the characteristics of the intelligence itself – if a prospective AI could fool an observer into believing it was a human, it clearly possessed all the functional qualities of humanity. This led to a significant amount of work in the field, aimed at creating an intelligence that could pass the Turing test, and tremendous optimism about the possibility and proximity of true artificial intelligence, although with hindsight that optimism was somewhat rash.

As a field of research, AI has since matured, and much of the focus today is on solving the problems still present in weak AI [12], before moving on to the rather more massive undertaking that is strong AI. Currently, weak AI has succeeded in some areas rather spectacularly, but in a number of other areas it has run into difficulties, especially when the task in question is difficult to precisely define or turn into strict algorithms. A number of domains have arisen as the main fields of AI research, including game-playing (dominated so far by discussion of chess, and more recently Go), natural language processing (most progress in this domain has been made in machine translation), expert systems, genetic algorithms, and neural networks.

[12] Ibid., p. 9

The obvious example of an AI success is chess. Humans, granted, have not had much competition (except from other humans) in terms of "general game-playing ability", but the kind of "grand thinking" that is required in chess was pointed to as a deeper example of true intelligence than, say, the ability to reason (which had been formalized and therefore, in a sense, trivialized). The Human/Computer chess competition gained notoriety after Deep Blue beat Kasparov, and in 2006 Deep Fritz beat the reigning World Champion convincingly. Prior to this point, computers were already significantly better than human players under timing rules that biased in favor of quick decision making, but since 2006 they have essentially surpassed the best human chess players, even using the orthodox rules.

That said, chess computers do not go about the playing of chess in a way that is particularly similar to the way that human players do (or, at least, that is how it appears on the surface). Human chess players play, to some extent, on the basis of broad rules of strategy [13]. Orthodox chess strategy, for example, holds that control of the center of the board is paramount, and so players will make decisions with that in mind.

[13] There is not, unfortunately, significant understanding of how the very best chess players think about the game – certainly they think ahead, and look to gain advantages much later, and most grandmasters have a tremendous repertoire of opening games memorized, but the differences in where humans and computers excel suggest a difference in the process of picking moves, at some level. It is possible, however, that the difference is really only a difference in the strength of the heuristics used by computers and human players.

Chess-playing computers, on the other hand, examine all legal possibilities for the next set of moves extensively, and use an algorithm to attach a "win value" of sorts to the game states that arise after a given move. While these evaluation algorithms are in a sense functionally identical to the way human players think about chess states, the strictly calculating approach does not share surface similarities with human thought processes. It was a coincidence that the brute force approach (that is, a computationally expensive but systematically simple approach) of game tree examination turned out to be better for chess than the more heuristically refined human method.
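The exhaustive search-and-evaluate approach described above is commonly implemented as a minimax search over the game tree. The sketch below is a minimal illustration of that idea rather than anything from the thesis; it assumes a hypothetical game engine supplying moves (legal-move generation), apply (playing a move), and evaluate (the "win value" heuristic attached to a position once the depth limit is reached).

```python
def minimax(state, depth, maximizing, moves, apply, evaluate):
    """Exhaustively search the game tree to a fixed depth and pick the best move.

    `moves`, `apply`, and `evaluate` are stand-ins for a real game engine.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            value, _ = minimax(apply(state, m), depth - 1, False, moves, apply, evaluate)
            if value > best:
                best, best_move = value, m
    else:
        best = float("inf")
        for m in legal:
            value, _ = minimax(apply(state, m), depth - 1, True, moves, apply, evaluate)
            if value < best:
                best, best_move = value, m
    return best, best_move

# Usage with a real engine would look roughly like:
# value, move = minimax(start_state, depth=4, maximizing=True,
#                       moves=legal_moves, apply=play_move, evaluate=win_value)
```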

As far as game-playing goes, the focus has shifted somewhat to the game of Go, as a game that is completely impossible, even in theory, to solve [14]. Most estimates of the number of possible unique Go games exceed the number of atoms in the known universe, so creating an artificial player of Go requires the emulation of the same techniques that the best human players use to make game decisions, which is more difficult than the brute force method used in chess. This is because Go's large game board, and extremely simple rules, result in brute force searches being limited by computational power at extremely shallow levels, relative to chess.

Early efforts in artificial Go players managed to get to relatively high levels on the much smaller game board (9x9), but on the orthodox game board (which is 19x19), the best programs today are equivalent to players ranked somewhere between intermediate and advanced amateurs. Recent results still have the best human players consistently defeating the best artificial players despite fairly significant starting handicaps on the 19x19 board, and maintaining a winning record (although losing some games) on the 9x9 board.

The most recent advance in artificial Go programs has been the development of "Monte-Carlo search methods", which are (to vastly oversimplify) search methods that make use of random sampling to deal with calculations where there is significant uncertainty. These methods have improved artificial Go play significantly, but they have a tendency to miss very strong, yet isolated, single moves (presumably because it is very easy for a random sampling to miss them). This weakness is illuminating – it suggests that the Monte-Carlo methods share some similarities with how human players think about strategic decisions (where Monte-Carlo is strongest), but it suggests that something is crucially different when dealing with tactical decisions (where humans almost never miss the isolated, but strong, moves). The most obvious conjecture would be that humans are using some kind of pattern recognition heuristic (or something that functionally works the same) to direct their sampling, rather than sampling completely at random. It is predominantly this weakness in the smaller, tactical game that is exploitable by good human players, as games of Go, like chess, can be lost with a single bad move.

[14] In strict terms, even checkers has yet to be solved. For a game to be truly solved, all positions that could result from legal play must be analyzed. Checkers has been solved in the sense that, with perfect play on both sides, the game will always end in a draw. Chess has yet to be solved even in this weak form, but it is vaguely accepted that such a solution is, in theory, possible. The sheer simplicity of Go's rules, however, renders the game subject to tremendous combinatorial explosion – suggesting that even a solution to perfect play (which has yet to be defined rigorously in itself) is impossible in theory.
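As an illustration of the random-sampling idea described above, the sketch below shows the flat, simplest form of Monte-Carlo move selection; it is not a full Monte-Carlo tree search and is not drawn from any real Go engine. The hooks legal_moves, apply_move, and random_playout (returning 1 for a win, 0 for a loss) are hypothetical stand-ins for a real game implementation.

```python
import random

def monte_carlo_move(state, legal_moves, apply_move, random_playout, n_samples=200):
    """Pick the move whose random playouts win most often.

    For each candidate move, many games are played to completion with random
    moves, and the move with the best observed win rate is chosen. A rarely
    sampled but decisive move can easily be missed, which is exactly the
    tactical weakness noted in the text.
    """
    best_move, best_rate = None, -1.0
    for move in legal_moves(state):
        wins = sum(random_playout(apply_move(state, move)) for _ in range(n_samples))
        rate = wins / n_samples
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

# Usage with a real engine would look roughly like:
# move = monte_carlo_move(position, legal_moves, apply_move, random_playout)
```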

In other fields, the successes and failures of AI have been more ambiguous. Natural language processing (NLP) is one such field, concerned with the processing of natural languages (English, French, and so forth). Language is a faculty with which humans have peculiar ease, given the complexities and ambiguities natural languages play host to – the ability to continue reasoning effectively in the presence of such ambiguity is a demonstration of the resilience of human intelligence, far removed from the intelligence usually displayed by computers. Like the fields of chess and Go, NLP divides into quite a few sub-problems, each of which is rather interesting in itself. The first is segmentation, which is a problem that applies to both spoken language and text (although only to languages with no textual end-of-word denotations). This is a non-trivial task for computers – in natural speech, letter sounds blend together, there are almost no pauses between words, and those pauses that do exist can have grammatical and semantic meanings in themselves. The conversion of an analog signal (speech) to discrete characters is thus very difficult, as is the conversion of a full string of words into individual words, and the latter becomes more difficult when it attempts to preserve the meanings conveyed by pauses.

One of the earliest goals in NLP was automatic translation, which at least avoids the problems inherent in processing natural speech, but still proved to be quite difficult. Programs intending to translate had to be able to deal with morphological analysis, syntax, semantics, and pragmatics [15] of two languages.

[15] Morphology concerns itself with the way words break down into components – for instance, the appending of 's' to a word can indicate that it is a plural noun or a present tense verb. Syntax deals with, mostly, grammar, determining the role of each word in the whole sentence. Semantics, then, deals with the meaning of the sentence, and has proven to be one of the most difficult areas of NLP. Finally, pragmatics is concerned with the application of language in 'real use'. For instance, while a technically correct answer to 'Are you talking about me or him?' is 'Yes', most humans realize that the sentence is really asking for a concrete one-way answer.

Pragmatics and semantics especially force NLP systems to have a good deal of real-world knowledge, and this requirement on its own is still a fair distance from being met to such an extent that human-esque facility with language is possible.

The most popular current method for dealing with morphology is to have a set of rules that apply to the majority of the language (such as "the suffix -ly makes the word an adverb"), and then to include a set of words that do not follow the rules (the verb "to be" is guilty of this). Even this requires a tremendously large set of words for the latter part if the system is intended to deal with language in all possible domains.
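As a concrete illustration of the rules-plus-exceptions approach just described, here is a toy analyzer with a handful of suffix rules and an exception list. The particular rules, words, and labels are invented for illustration and are not drawn from any real NLP system.

```python
# Words that do not follow the general rules are listed explicitly.
EXCEPTIONS = {
    "is": ("be", "verb, 3rd person singular present"),
    "was": ("be", "verb, past"),
    "children": ("child", "plural noun"),
}

# General rules covering the majority of the language; note the ambiguity of "s".
SUFFIX_RULES = [
    ("ly", "adverb"),
    ("ing", "verb, present participle"),
    ("ed", "verb, past tense"),
    ("s", "plural noun or 3rd person singular present verb"),
]

def analyze(word: str) -> str:
    """Check the exception list first, then fall back to the suffix rules."""
    if word in EXCEPTIONS:
        root, description = EXCEPTIONS[word]
        return f"{word}: {description} (root '{root}')"
    for suffix, description in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return f"{word}: {description} (stem '{word[:-len(suffix)]}')"
    return f"{word}: no rule applies"

for w in ["slowly", "walked", "boxes", "children", "is"]:
    print(analyze(w))
```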

Syntax parsing is difficult mostly in that there exist a number of possibilities in natural language, and thus the computational complexity of parsing any given sentence makes it an expensive task – but, essentially, the method involves trying to convert a sentence into a more ordered sentence where the role of each word is clearly known and understood (which makes it more similar to a programming language).

Semantic analysis is mostly concerned with disambiguating sentences – something even humans can have some trouble with in English. Lexical ambiguity (which role is a word taking) is often the simplest, and can sometimes even be solved by the syntax parser. Referential ambiguity is an ambiguity that troubles humans as well, and we are taught early to avoid sentences in which there is referential ambiguity, but systems must be able to at least recognize it to prevent hanging on the ambiguity. Syntactic ambiguity and semantic ambiguity can both only be solved with world knowledge, and are as such the most difficult to solve. World knowledge presents a storage problem, and so to some extent limits NLP programs to certain domains, where their world knowledge is sufficient, and renders them utterly incapable of disambiguating sentences from outside that domain.

In some ways, the more difficult problems of NLP become less important when dealing with machine translation – after all, the simplest form of machine translation simply translates word-by-word from a dictionary, which is a trivial task. Translations of this sort tend to fall apart once they have to deal with sentences, as rules of grammar are not identical across languages, and in some cases even disambiguation is still needed (although in some cases the ambiguity carries over with the translation without requiring a solution). Currently, disambiguation poses such a problem that, as far as translation goes, machine translation is most useful as a way of easing the labor burden on human translators – the NLP program generates a first rough pass at the translation, and then the human translator finalizes it, mostly by dealing with incorrect disambiguation.

The methodology used by NLP programs is more similar to the human methodology than in most other fields of AI research – the dominant difference is the difference in world knowledge, where humans still hold a tremendous advantage. We still process language semantically, syntactically, and pragmatically, and at least with regard to syntax we don't have access to much more than computers do as far as information to aid the process.

Expert systems were a reaction, of sorts, to the realization of the importance of world knowledge. The basic idea is that, if you combined a formal reasoning system with a tremendous amount of world knowledge, you could ask it questions about its domain, and it would reason with its stores of knowledge to determine the answer – presumably quicker than another human, given the facility with which computers handle formalized reasoning. Applications viewed as clear uses for this kind of technology were, for example, diagnosis systems to aid doctors in making an accurate diagnosis [16]. While expert systems have proven to be more successful than most AI technologies at finding practical everyday application, their importance in continued AI research has proven merely to be an example of a fruitless path – expert systems are essentially more elaborate versions of decision logic, with large databases attached. They mirror the way humans reason only when humans choose to reason in strict and formalized settings, not in the more difficult (and interesting) settings more prevalent in the real world.

Genetic algorithms are less a type of AI developed than a way of programming solutions to problems that are not clearly finite. They work by generating random solutions to a given problem, and then evaluating the success of each solution. These solutions are described by a "chromosome" of genes, coding for particular features of the solution – the most successful chromosome in a particular generation of solutions is saved, and used as the baseline for the next. There exist forms of genetic algorithms that use breeding functions as well, crossing, for instance, the two most recent baseline chromosomes to create the next one, instead of merely selecting from the previous options. Such algorithms are applicable anywhere that a given solution to a problem can be described in terms of a chromosome of genes, although they are most effective in solving problems with numerous good answers, rather than one clear best answer, and other features which render the more traditional solution methods ineffective. Genetic algorithms generate approximately optimal solutions, rather than the true optimal, but they do so efficiently – examples of such arenas are timetabling and scheduling problems. It is worth noting, however, that humans have no particular aptitude for such problems either, so genetic algorithms are in the fairly unique position of doing something neither traditional computers nor humans can do well.

[16] Some market penetration was in fact seen for this technology, although not to the extent originally predicted in the early throes of AI optimism.
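A minimal sketch of a genetic algorithm in the spirit of the description above, using the breeding (crossover) variant: bit-string chromosomes, a fitness score, selection of the most successful individuals, crossover, and occasional mutation. The toy objective (maximize the number of 1s in the chromosome) and all parameter values are invented for illustration.

```python
import random

def genetic_search(fitness, length=20, population_size=30, generations=100):
    """Evolve bit-string chromosomes towards higher fitness scores."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]   # keep the most successful half
        children = []
        while len(children) < population_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.05:                 # occasional mutation
                child[random.randrange(length)] ^= 1
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy objective: as many 1s as possible; the result is approximately optimal.
best = genetic_search(fitness=sum)
print(best, sum(best))
```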

Neural networks are perhaps one of the most promising new technologies that resulted from AI research. They operate by attempting to mimic the architecture of the human brain, creating artificial neurons and massing them together, then using them to solve problems. Such networks have proven to have surprising prowess in areas not normally considered strengths for computers, especially in learning to deal with problems that have elements of pattern recognition within them. It is interesting to note that because of the way neural networks generate output, there is the possibility of strong interplay between them and genetic algorithms – neural nets show particular strength in identifying "relevant" features of a solution [17], which can then be identified and codified in a chromosome, for further optimization by a genetic algorithm.

[17] It has been argued that this is due to neural networks providing a way of 'theorizing' about a problem and then an objective way of testing that theory for effectiveness. "Normal" computers must have a theory, of sorts, programmed into them from the beginning, and if that theory is faulty the program will always be faulty.

Kurzweil has argued that the major roadblock to strong AI is that of processing power [18] – the human brain, by all accounts, has vastly more raw processing power than any known computer, although that power is distributed in such a way as to render some tasks much more difficult than they would otherwise be (complex math, for instance). This claim is somewhat dubious, however – the continued controversy surrounding theories of consciousness clearly illuminates that there exists limited understanding of exactly how consciousness works, and what its components really are. In a sense, it is the software side that has truly eluded us – we cannot code for something we do not already understand (that is, in a sense, precisely the point of programming languages). Without an understanding of consciousness, without a way of programming it that is known to work, the only way that Kurzweil could be correct is if consciousness is merely an emergent feature achieved by a critical mass of processing power, and one that changes in features (presumably) as that mass continues to increase. While this is, at least at first, a plausible argument, it misses the key point that has directed much of the discussion of what consciousness is, as opposed to intelligence – the issue of the "doer", which will be dealt with in much more detail later. The existence of such a doer, even having done away with Dennett's Cartesian Theater, suggests that consciousness requires some kind of overarching architecture, or at the very least something which guides the processes that compose a consciousness to produce results beneficial to the organism/system in question. Unless one believes in the theory of consciousness as merely emergent, the conclusion that must be drawn is that there remain significant strides required in both hardware and software sectors to make progress towards truly strong AI [19].

[18] Raymond Kurzweil, The Singularity is Near, Viking Press, 2005
[19] It bears mentioning, however, that this assumes the best path towards AI is to emulate human consciousness. It is possible, although unlikely, that some kind of consciousness might be stumbled upon from another path of thinking entirely, resulting in a consciousness that was significantly different, internally (and presumably externally), from ours.

Towards Artificial Consciousness

The terms 'artificial intelligence' and 'artificial consciousness' overlap by a fair amount, but it is difficult to truly extricate them from each other. When one says "Human intelligence", consciousness is often a fairly integral part of what is being referred to, and vice-versa. In some ways, the Strong/Weak AI distinction gives us instead a distinction between merely artificial intelligence and a true artificial consciousness, but we could hardly claim that the artificial consciousness wasn't also intelligent.

That said, the field of AI research has quite a distance to go before one needs to worry about the overlap between artificial consciousness and intelligence – by the admission of its best researchers, AI hasn't even come close to figuring out how to make a strong AI, let alone actually making one. Even so, the progress made so far is useful in both what it has shown about how humans do some of the things they do, how they don't do it (perhaps more importantly), and how best to do it artificially.

Dennett makes much use of a particular example from the field of AI research – Shakey the robot. Shakey was, to quote Dennett, "a box on wheels with a television eye, and instead of carrying his brain around with him, he was linked to it … by radio" [20]. Shakey was kept carefully isolated from the complexity of the real world, and lived in a few rooms whose features were quite simple – boxes of different colors and a few different shapes, well lit to make the task of vision easier for the robot.

[20] Daniel Dennett, Consciousness Explained, Black Bay Books, 1991, p. 86

Shakey communicated with the researchers by way of a very restricted English vocabulary, with which researchers could direct Shakey to, say, "push the red box off the blue one" – which Shakey would interpret as best he could, and proceed to attempt.

Shakey was one of the more illuminating forays into weak AI, demonstrating as he did a number of our faulty ways of thinking about the task of vision. When one really gets down to it, the task of vision is very complex, and it seems nearly impossible to explain Shakey's abilities without resorting to analogs to Cartesian theories of mind – but, Dennett assures us, it is possible to do. Shakey's method of seeing, to simplify a fair amount, was to sharpen the images received by the television eye successively, until the "brain" could reduce the image to a line drawing. Once this was accomplished, Shakey could see various vertices, each identifiable as either an L, T, X, Arrow, or a Y vertex. An ingenious scheme of vertex semantics allowed Shakey to then determine which objects could possibly fit the vertices observed.

Importantly, however, Shakey never really looked at a line drawing. A monitor on which the process actually did take place was provided for the benefit of observers, but there was no internal monitor at which Shakey looked (with, presumably, another television eye [21]). Instead, Shakey could create an internal, albeit virtual, line drawing by analyzing the binary data delivered by the television eye, utilizing a set of vertex identification rules that could identify vertices in sequences of such binary data.

[21] And so the regress looms.

This process of vision is not, Dennett thinks, at all like the process of vision in a human, or, indeed, in any living creature [22]. Indeed, the process could only work (at least, as described) with a field of vision that had straight edges, and right angled corners, and was purely black and white. The processes needed to perform an analogous process of vision on, say, the visual data collected by a single human eye would have to be significantly more complex, and significantly more numerous. But Shakey is useful to us even so, because it shows at least one way in which a process that seems irreducibly complex at first glance can be reduced to a series of processes, each of which is most assuredly stupid.

Another important foray into Artificial Intelligence was SHRDLU, a computer program developed by Terry Winograd, which abstractly explored the information-handling tasks that any interlocutor faces. While not a robot, SHRDLU had an imaginary world that was quite similar to Shakey's – a very simplistic world populated by blocks and cones, with a finite number of possible colors and combinations, and a limited vocabulary consisting of some verbs and adjectives. SHRDLU's world, while imaginary, had basic physics and obeyed the rules it would have obeyed were it implemented in reality. SHRDLU, then, could be instructed to perform a number of tasks in its imaginary world (such as placing cones on top of blocks, and so forth), and could then be asked about what it had just done. SHRDLU could also be taught things within its world (for instance, that a cone placed on top of a box was a steeple), and could then reason about the new objects within the scope of its imaginary world. SHRDLU was not concerned at conception with emulating the way humans perform the same tasks when taking part in a conversation or reasoning about their actions, and so did not mirror the human process at all well (indeed, SHRDLU could probably be explained as having a central meaner that determined what should be said).

[22] Ibid., p. 92

SHRDLU was a tremendously successful example of very basic reasoning and language tasks, although it has yet to be improved upon with the same level of success. Dennett makes use of SHRDLU to create an imaginary robot, a Shakey-SHRDLU hybrid, which he then asks about the process of seeing. By proposing possible answers to the question "How do you tell the boxes from the pyramids?", Dennett gives us a situation in which there are a number of answers that are all very different in what they say, but all of which match, at some level, what actually went on when Shakey was discriminating boxes from pyramids. Dennett then proposes that we could even set Shakey up to give us an answer that didn't match, in any way, what actually went on – the answer would be merely confabulated, but Shakey would not know it was confabulating it. This has already been shown to happen in experiments on people – in the absence of sufficient data to explain what happened, they will confabulate a story that makes sense to them. They don't lie about what they experienced – what they say was what it was like to experience what they did, but it is not a factual account of what actually went on inside the brain.

While Shakey and SHRDLU are both hugely successful examples of AI research shedding light on human intelligence, AI has yet to enjoy much success in mirroring closely the abilities of human intelligence.

The vast majority of everyday tasks humans complete with little to no difficulty continue to pose problems for AI researchers. The success seen with chess-playing AI and similar implementations is as much a result of the nature of the games humans play as they are results of AI progress [23].

One of the more common examples given of a persistent failure of AI is the task of facial recognition. While significant advances have been made in very recent years, artificial facial recognition programs perform worse than the average human in the majority of cases, although in a minority of cases the result is the opposite. When faced with well lit, unobstructed, full frontal access to a face, the most modern facial recognition systems work fairly well – but, for instance, as the viewing angle approaches profile, significant problems emerge [24]. The same is true for poor lighting conditions, obstructions (sunglasses, for instance, or long hair), or even variable facial expressions. In particular, the difficulties encountered with profile viewing angles seem to indicate that, despite their successes, facial recognition systems don't recognize faces in the same way as humans do, as one of the more apparent features of the human facility with facial recognition is the ease with which we recognize a familiar face from an entirely novel angle.

[23] Games are, almost by definition, taking place in a limited subset of 'reality'. Programming a computer to play chess is significantly simpler than programming it to react to, say, a novel sentence – every possible chess state is (in some sense) known, and rules can be given to the computer that will make sense when faced with every possible chess state – but a novel sentence need not necessarily be composed of familiar words, or composed in the same way as another sentence with identical meaning.
[24] Mark Williams, "Better Face-Recognition Software" (http://www.technologyreview.com/Infotech/18796/?a=f), retrieved 04/07/2010

It is possible that the difficulties encountered by AI researchers once a certain level of progress was made indicate a systematic weakness in the direction AI research has been taking. The majority of AI research to this day has taken place on traditional von Neumann architectures, where the computer is only capable of performing one task at a time before moving on to the next, while the human brain is much more similar to connectivist networks, with massively parallel computing power. This has already manifested in the differences noted between the average personal computer and a human – where humans encounter difficulty with complex mathematics, the same tasks are trivial for the weakest of personal computers – because the raw computing power available to a personal computer is more suited for such tasks. However, a different kind of computing architecture exists, which has enjoyed more success in some of the areas where "traditional" computers experience difficulty. These connectivist systems (of which neural networks are an example) are better at extracting the method from the madness in very complex scenarios, as is manifested by successes encountered in training neural networks to perform tasks like driving, merely by demonstrating to them how to drive properly and allowing the network itself to learn what the important features of driving properly were.

The predominant feature of connectivist systems is that their computing architecture is much more similar to the human brain than traditional von Neumann machines [25] – instead of a single powerful processing unit, where everything gets done, these networks are made of layers and layers of "neurons", programming constructs that simulate the behavior of real neurons (to a limited degree).

[25] Von Neumann machines are the 'ordinary' personal computer – a single processor can only do one thing at a time, although it does things so fast it can do many thousands of things per second.

These neurons are interconnected, and each individual neuron's behavior is extremely simple to explain and predict. Because of the close relation to the "neural networks" that appear naturally, these systems are often called neural networks.

Each neuron can take many inputs, and can produce an output. In the brain, these inputs are electrical impulses from other neurons, while in artificial neural networks, these inputs are, essentially, numbers from other neurons (or, in the case of the input layer, numbers from whatever provides the first input). In both cases, the neurons measure the input in some way, and if the input reaches a threshold level, the neuron fires and produces its own output (which can either be fixed, or be a function of the value of the input). This output then goes to whichever neurons on the next layer take that output as their input, and the process continues until it reaches the output layer, where the outputs are finally displayed.

This architecture can do pretty much anything a traditional von Neumann machine can – although the strengths of the neural network architecture differ, and so the difficulty with which each architecture can perform a given task is not the same. What makes neural networks special, however, is that, with the advent of back-propagation algorithms, neural networks can be taught how to do things. In essence, as long as there exists an actual method, and as long as the teacher has a set of correct outputs, a neural network should be able to learn the method of producing correct outputs from inputs.
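A minimal sketch of the neuron behavior just described: weighted inputs, a threshold, a fixed output when the neuron fires, and outputs passed from one layer to the next. The weights and threshold values are invented purely for illustration.

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1.0) when the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def layer(inputs, neurons):
    """Feed the same inputs to every neuron in a layer; collect their outputs."""
    return [neuron(inputs, weights, threshold) for (weights, threshold) in neurons]

# Two-layer pass with invented weights/thresholds: input -> hidden -> output.
hidden = layer([0.8, 0.3], [([0.5, 1.0], 0.6), ([1.2, -0.4], 0.5)])
output = layer(hidden, [([1.0, 1.0], 1.5)])
print(hidden, output)
```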

For instance, with the example of the neural network learning to drive, one might present the network with a particular road situation. In that particular road situation, a good human driver could be expected to take a particular action, consistently, in response to various features of the situation. To teach a neural network how to drive, then, one simply gives the input and lets the neural network give the first output (which is essentially random) – which is then compared to the desired output, and, through the use of the back-propagation algorithms, the neural network can change features in the hidden layers between input and output, which would have made it give the correct answer in response to that particular input. A different input is then given, and the process repeats. With sufficient time, the neural network will discover the important features of any given input, and render its hidden layers in such a way that those features are reacted to properly, and irrelevant features are not.

This process must be repeated multiple times – the back-propagation algorithms do not instantly hit upon the correct set of weightings based on only a single example of error. In a sense, the back-propagation algorithms allow neural networks to experiment when they realize that they have made a mistake, but they do not offer much guidance in that experimentation beyond a rather vague "too heavy" or "too light" with regards to only the weightings that led to a particular output vector.
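A toy version of the training loop just described: present an input, compare the network's output with the desired output, and nudge the weightings by back-propagation, repeated many times. XOR stands in for the "road situation to correct action" pairs; nothing here models driving, and the network size, learning rate, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weightings
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weightings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)                # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                           # how wrong each output was
    # Back-propagate: each layer's weightings move against its share of the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

# Typically ends up close to [[0], [1], [1], [0]] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```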

Neural networks, while modeled on the brain, are not particularly accurate models. There are a number of differences between real neurons and the artificial neurons, to start with. Real neurons are somewhat plastic, capable of changing their own threshold values or output formulas, and the connections between the neurons are also plastic, and rapidly changing (Churchland, 1988). The use of layers in artificial neural networks is not analogous to anything in the brain, but rather a way of working around difficulties inherent with trying to compute in a fashion that isn't temporally organized in some way. It is theoretically possible, of course, to create an artificial neural network that can do everything the brain can, just as well, but current understanding of neurons is itself incomplete, and neural networks as researched in the field of AI are not made with the goal of accurately reproducing the kind of computation that goes on in the human brain, especially given the tremendous amount that has been learned merely by copying the architecture.

It is remarkable that neural networks can do what they do, based mostly on the architecture of the network itself. They are, in theory, capable of discerning a pattern as long as a pattern exists, and sufficient training data is present, which is itself impressive – but even more impressive in the light of the tremendous difficulties traditional von Neumann machines experience with pattern recognition tasks. Why is there such a difference, when the major difference between the two machines is architectural, rather than methodological?

Churchland suggests one possible explanation, focusing on the addition of hidden layers (the very first neural networks were composed of simply input and output layers, and were very quickly found to be extremely limited). The problem, as Churchland sees it, with the first neural networks, was that they determined, by the very structure of the network, what features of any given input were important, before any computation had been done. This, rather obviously, precludes any true pattern recognition from even taking place. With the advent of hidden layers, however, the system is capable of theorizing about patterns – in the process of changing its various weightings in the hidden layers, it can "stumble onto regularities that lie behind or underneath the superficial regularities that connect the features that are explicit to the input vectors" [26]. This, especially when combined with back-propagation algorithms, is in large part responsible for the success neural networks have with pattern recognition.

One of the questions this success has raised with regard to human consciousness is related to the success of the back-propagation algorithm. Does the brain have such a thing, or an analog? There did not, initially, appear to be any evidence to support the existence of channels for the back-propagation of errors in the brain. More recently, some discoveries of new input systems in the brain have at least given us a place to look for these analogs to back-propagation, but they have yet to be found. Perhaps, therefore, the brain uses some other learning procedure, or perhaps the back-propagation takes place on a much more abstract level than in the case of neural networks [27].

[26] Paul Churchland, Matter and Consciousness, The MIT Press, 1988, p. 164
[27] We are certainly capable, for example, of consciously looking at a problem (say, a math problem) on which we got the wrong answer, and looking through our process to find the place where it all went wrong – this is a large part of debugging in computer programming. While this seems unlikely to be the mechanism by which we recognize patterns (since we don't consciously theorize about patterns and discard faulty ones, at least most of the time), it is certainly an option worth exploring for pattern recognition more generally.

Consciousness vs Intelligence

As the previous chapter indicates, there exists some tension between the terms "consciousness" and "intelligence". At times, these terms seem to be almost interchangeable, and at others, they are strongly differentiated. Consciousness has been used as an umbrella term that clearly incorporates significant intelligence, and at the same time as a term that denotes something very different from intelligence. For the purposes of this chapter, they will be taken to mean different but possibly interdependent things.

Consciousness, while not a term with a clear consensus definition, can be described as the subjective experience each person has [28]. It is difficult to be more precise without stumbling into at least one major philosophical argument, but most everyone has some general idea of what consciousness is that does not immediately appear to be interchangeable with intelligence. Also important in the description of consciousness is the reflective aspect – the ability we have to access internal data and report it to ourselves or to outsiders, to "watch over" processes going on within us, and so forth. This distinction comes out in the distinctions we make between, for instance, being awake and being in a dreamless sleep (with dreaming sleeps being something of a middle ground).

[28] Although the term subjective will cause some problems. Dennett has denied qualia, and so exactly what comprises the subjective experience is less clear, but Dennett still thinks it exists.

Intelligence, while still a rather vast umbrella term, refers more to a set of functions within us (and in other animals, as well, in different forms). Common examples of intelligence include the capacity to reason, to solve problems, to use language, and so forth. There is, again, little consensus on what functions, exactly, comprise intelligence, but the general idea is there.

Intelligence does not, immediately, seem to require consciousness in any way. One can make the argument that reasoning, problem solving, the use of language, and all the other functions are reducible to algorithms, the execution of which could be left to a traditional computer, to which almost none of us would ascribe consciousness. That said, almost none of us ascribe intelligence (as opposed to consciousness) to such machines either, often on the basis of something that points to a lack of a "doer" who's actually executing the algorithms [29]. This objection points to some kind of conflation of consciousness and intelligence.

This objection has lost some of the popularity it used to have, as a result of a historical shift in the view of consciousness. Developments in the field of formal logic, and the development of the field of cognitive psychology, began to bring out these unconscious processes that were clearly not conscious while resulting in an end result that we would typically attribute to intelligent processes. Cognitive psychology, as a term, was coined by Ulric Neisser to cover a psychology that characterizes people as dynamic information processing systems, and that attempts to describe their mental operations in computational terms.

[29] This, to some extent, relies on the Bureaucratic model of the brain, with an executive in control of everything, which Dennett dismisses. With current personal computers, the difference is not there (in the case of both brain models, the computer fails to have a "doer"), but it will become an important difference as computers continue to approach human levels of capability.

PAGE 33

by Ulric Neisser to cover a psychology that charact erizes people as dynamic information processing systems, and that attempts to describe t heir mental operations in computational terms. Formal logic, too, seeks to fi t questions into a rigid structure that allows their solutions to be calculated algorithmic ally, in a way that can be accomplished without the intervention of ingenuity (which is oft en considered a part of consciousness). Numerous experiments have demonstrated that, when f aced with a particular task, we execute the task using a methodology which we do no t realize we are using. In a 2007 paper, pupil size was found to modulate perceptions of sadness (and other emotions)30. Pupil dilation never entered into the subject's own explanations, but was clearly part of the decision-making process, even if it was unconsc ious. This is a clear example of a nonconscious process that results in something we tend to call intelligence, or at least a manifestation of a particular kind of intelligence. What seems to make it an intelligent process is that it is a process within a larger one and that larger one is directed at a clear goal – a goal which is, importantly, internal to th e agent activating the process. The same type of system in computers, currently, cannot be d irected to an internal goal of the computer, only towards (at the most charitable) the external goal of the programmer, which reflects what might be described metaphorical ly as the computer's best interests. Historical shifts had been occurring before such de velopments, as well. At one point, calculation was looked at as a clear example of som ething unique humans could do, and30 Harrison N.A., Wilson C.E., Critchley H.D. “Inciden tal processing of pupil size modulates perception of sadness and predicts empathy.” Emotion vol. VII, 4, November 2007, pp. 724-930

PAGE 34

attributed to consciousness/intelligence (the mix b etween the two was even greater then than it is now). But it was discovered that additio n, and indeed all of arithmetic, could be done purely mechanically. Calculation, then, was pr oven to be mechanically possible, if only in theory, and so ceased to be sufficient as a distinguishing feature. “Great Thinking” then became that feature, as exemplified by the thi nking demonstrated by the Grandmasters of chess – until that, too, “fell” to the machines, and today we are left with merely a tension between “intelligence” as it is as cribed to such machines and a somehow different “intelligence” to which we believe we hol d exclusive rights. Intelligence in the sense that is often conflated s omewhat with consciousness might be described as a “goal-directed manipulation of menta l content” it is the goal-directed clause which tends to leave current weak AI out of this definition, and allows for it to be applied to animals that have less complex conscious nesses but still demonstrate intelligence. This brings in the issue of intention ality fairly rapidly – our goals which direct intelligent are analogous to desires, and th e mental content manipulated in pursuit of these goals/desires are analogous to beliefs.Dennett, in Brainstorms takes an intentional tack to answer this dilemma. This answer depends on his theory of the intentional stance, a stance/method we adopt in order to explain and predict the behavior of what Dennett ca lls "intentional systems"31 (Dennett believes humans are one instance of such systems). 31Dennett, Brainstorms. The MIT Press, 1981, p. 631

PAGE 35

Intentionality is a term coined first by Jeremy Ben tham, but perhaps most famous for its use by Dennett in The Intentional Stance a book in which Dennett sets out a theory of mental content. The intentional stance is, for Denn ett, a level of abstraction which we use to view the behavior of a given thing in terms of i ts mental properties. To use the stance is to attribute to the thing in q uestion the status of a rational agent, to attribute to it beliefs and desires (given its plac e in the world and its purpose), and to assume that said rational agent will act only in wa ys which will further its goals, in light of its beliefs. This leaves us with the ability to predict what the thing will do, based on what it ought to do, as determined by use of the intentional sta nce. For Dennett, to be a subject who has beliefs and desires is precisely to be an object fit for extensive intentional-stance prediction.However, Dennett's intentional stance has taken a f air amount of criticism for what some have called “explaining away” the interesting parts of intentionality. It seems to matter to us that an object has an inner life or does not. Th e intentional stance can, without much difficulty, be applied to a thermometer to predict and “explain” its behavior, but none of us would consider a thermometer to be intentional i n the sense of having mental content that was about something in the world. Indeed, acco rding to Dennett's account of intentionality there is no intentional difference b etween a philosophical zombie and a truly conscious human32. Philosophical zombies are hypothetical entities w ho are, by definition, unconscious, but display all the functi onal attributes of consciousness –32Dennett. Consciousness Explained Black Bay Books, 1991, pp. 405-632

PAGE 36

obviously, for a functionalist like Dennett, this d efinition itself is where the problems such zombies pose for his theories reside.Dennett's answer, then, to the problem of the inten tional thermometer is that the entirety of its behavior can be predicted and explained on t he basis of a stance well below the intentional stance. Moving from what Dennett calls the design stance (which concerns itself with the biological and engineered features of things) to the intentional stance grants us no additional traction on the problem of explaining or predicting the behavior of a thermometer. There is no reason to attribute to t he thermostat desires and beliefs, so we can conclude in some sense that it has none.This, in some ways, is analogous to what Dennett do es with the center of narrative gravity (to be explained in more detail later). Int entionality can be construed as a theory that takes the attribution of mental content at the intentional level to be abstractions, useful and operationally valid but abstractions all the same. This can be seen as both a criticism of the theory (it certainly doesn't seem to us that our beliefs or desires are merely “a theorist's fiction”), or an empirical val idation (in some sense, every level of stance, except the physical stance33, is an abstraction of the stance below it, and thu s all stances except the physical are abstractions of wha ts “really” happening). Intentionality, playing as it does on the “aboutnes s” of mental content, is in some sense33The physical stance attempts to predict and explain things on the basis of physics and chemistry, concerning itself with such things as mass, energy, velocity, and chemical composition. Predictions of for instance, the end-point of a ballistic trajecto ry are made with the use of the physical stance.33

PAGE 37

required for intelligence, in the case of definitio ns of intelligence that turn on what we will term the gainful employment of intelligence – the “gainfulness” of a particular employment of intelligence requires that the agent gain from the employment, and “gain” only makes sense when considered in light of an age nt that has goals, and acts to further them, which it must do in light of some beliefs. In this sense, the intentional stance homes in on what is crucial about “real” intelligence, an d leverages that to gain its predictive and explanatory power.Intentionality is important in all of consciousness in the sense that all consciousness takes place in (or on?) mental content, and Intentionalit y deals with mental content specifically. That said, not everything about consciousness can b e straightforwardly reduced to a discussion of intentionality, or an “intentional st ance machine”. Certainly, Dennett would argue that a strong AI would be an agent with which we would use the intentional stance, and would gain significant traction in doing so. In deed, Dennett already argues that we have to use the intentional stance with some weak A I programs – notably chess playing programs, against which the only successful strateg y has been to treat them as a highly competent human player (albeit with some peculiar s trengths and weaknesses)34 applying as a part of that strategy higher-order intentional stance predictions to it. It is less clear what Dennett would argue in the case of weak AI in general – the attribution of beliefs and desires to a machine that, quite clearly, does not have them (although this might be34The traditional view of how to play against chess c omputers is that they get progressively weaker at evaluating moves further into the future – which is of course true of human players to an extent, but less so. Thus, the 'anti-computer' chess strategy focuse s on extremely conservative early play with a focus on developing an overpowering advantage very late i n the game, such that the computer does not realize what it needs to counter until much too late.34

PAGE 38

where Dennett would begin his argument), does not i mmediately seem as if it would grant traction on the problem, but the gray area cr eated by our own unconscious processes that result in “intelligent output” as it were creates a gray area for weak AI as well.But chess machines have never been described as con scious, nor would we have any qualms about recycling an old one for parts – chess machines demonstrate no aspects of what we call consciousness. There is no obvious “do er” to which we can ascribe “real” beliefs and desires, except the program as a whole35 although chess machines are considerably closer to having a clear “doer” than m ost examples of artificial intelligence. Again, this issue turns to some degree on the issue of internal purpose as well – chess machines can best be described as limited intellige nces that possess no internal purpose of their own, but are rather purpose-built for an e xternal purpose. The question in its most trivial form is whether or not a chess-playing mach ine has “conscious experience” – it is clear that, if it does have such experience, the ex perience must differ from that of humans in ways related to the ways in which chess-playing machines differ from chess-playing humans, within that domain. Dennett encounters thi s problem in attempting to differentiate between “persons” (a category that is apparently ethical in scope) and the wider group of intentional systems, and puts forth one possible method: higher-order intentionality36.35This is, of course, also true of a consciousness th at follows Dennett's Multiple Drafts model of consciousness.36Dennett, D. Brainstorms The MIT Press, 1981, pp. 273-27535

PAGE 39

One of the features we as intelligent beings have i s the ability to take the intentional stance in order to explain and predict behavior. It is thus necessary, in some sense, for other people to attribute to us this kind of second-order intentionality to predic t and explain our behavior. This attribution of the intentional stan ce to another (as opposed to just the use of that stance with regard to them), D ennett believes, is the best route to take in looking for “necessary conditions of personhood” (Dennett, 1981), and thus is clearly a route he believes locates the important features of consciousness that themselves lead to “personhood”37. Crucially, this is at the very least trivially true of chess-playing machines. Chess is characterized by players who look forward ten or mo re moves, attempting to make a move now that will result in an advantage significantly lat er. Predicting a computer's next move thus requires that one have some idea of the c omputer's “overall strategy”, in order to predict which of many possible responses (for th e sake of argument, all equally rational in the space of just 2 moves) will be chos en. Indeed, partially as a function of the fact the game in question is chess, we have to attr ibute to the computer the same attribution to ourselves. This, perhaps, is more im portant than that we have to attribute to them the intentional stance – once we have to assum e that they, too, are capable of using the intentional stance, we have to assume as a part of that that they understand (at least) or have (at most) beliefs and desires themselves – which then further closes the gap37Personhood is importantly different from consciousn ess, at least as Dennett argues it. Personhood is a category Dennett uses to deal with explicitly ethic al issues, and bears to some extent on issues of an imal ethics. Consciousness is a technical term that does not necessarily correlate exactly with personhood, but the attribution of personhood tends to carry wi th it an inclination to attribute some form of consciousness as well.36

PAGE 40

between them and us. If we made the stronger versio n of the assumption, assuming that they actually have beliefs and desires, we would ha ve, in a sense, made the “doer” we were looking for – what else could that thing that has beliefs and desires be? AI researchers have neatly sidestepped the entire a rgument by defining Weak AI as precisely that kind of intelligence we hesitantly a scribe to computers – an intelligence that is focused on a particular task, and can execu te it, but does not have a more general intelligence that indicates when to execute it, or something along those lines. This kind of intelligence does not require consciousness in the sense that it is “pre-focused” by the programmer, but it is also unclear how much it dese rves the more general title of intelligence. Strong AI more clearly deserves the t itle, but also seems to deserve the title of consciousness, because it is by definition capab le of focusing itself on particular tasks, a task which itself tends to be conflated with cons ciousness rather than intelligence. At this point, one might question whether conscious ness is required even for the focusing task, as some philosophers have questioned. This qu estion typically calls into discussion the philosophical zombie, a hypothetical “zombie” t hat is indistinguishable from a normal human being, but lacking in conscious experience, o r sentience. These hypothetical beings are capable (or, at least, appear to be) of the focusing task to which we ascribe consciousness, but by definition do not possess con sciousness. On the strong end, they are indistinguishable in any way from humans, but a re stipulated to be unconscious. Obviously, this kind of question poses problems for materialist accounts of 37

PAGE 41

consciousness, and Dennett's account is no exceptio n. Dennett argues that the very idea of (strong) philosophical zombies is incoherent, ba sed as it is on a notion of qualia (namely, the character of a mental state that makes a mental state have a “what it is like” status) which he dismisses. Dennett argues that it would be impossible for a being to lack the kinds of conscious experience we have without d emonstrating that lack behaviorally38. One possible answer to this conundrum of general in telligence is to bring in the idea of gainful employment of intelligence. This approaches the idea of a “doer” raised before, by asking if the intelligence is being employed gai nfully, and by whom. In the case of weak AI, the intelligence might be employed gainful ly, but certainly not by the machine itself – it is employed by various people to perfor m a task that they are worse at, for their benefit. We would hesitate at the very idea of doin g something for the benefit of the AI – is that even possible? In the case of animals to wh ich we ascribe some intelligence, there is a clear benefit to the employment of the intelli gence. Indeed, most tests of animal intelligence attempt to test the ability of the ani mal to perform a particular task in order to benefit themselves39. This seems like a problem that might be present eve n for a Strong AI – might we hesitate to ascribe a self-benefit to anything which is, in essence, completely programmed by38Daniel Dennett. Consciousness Explained The MIT Press, 1991, pp. 405-40639Almost every test of tool-use I have encountered ha s followed a formula like this. Recently, crows wer e tested by being given a stick, which was too short to directly acquire them food, but was long enough for them to get a longer stick, which they could th en use to get food. This kind of meta-tool use is o ften pointed at as a clear indication of intelligence, b ut the most important feature of it seems to be tha t it is intelligence gainfully employed rather than just a problem-solving ability in the abstract.38

PAGE 42

someone else (presumably for their benefit)? There are a number of possible ways we might approach that hesitation, including programmi ng in some kind of self-preservation system, which seems as if it would get over the ini tial hurdle, at least. This, however, might only ascribe gainful employment of intelligen ce to acts that were directly required for self-preservation, and might not cover acts for which the AI was programmed in the first place (the AI's “job”, as it were)40. It is questionable, also, whether one could accurately describe a strong AI as being “completel y programmed”. It is in some sense the task of strong AI research to program something that cannot, in fact, be completely programmed, anymore than we as humans can be comple tely programmed. Any strong AI would, by definition, be capable of the same kinds of self-modification we are capable of, for the same reasons we have.40This kind of issue obviously overlaps strongly with ethics in general, and especially ethical issues surrounding indoctrination and “brain washing” (eve n more especially in instances involving children).39

PAGE 43

Human Consciousness – the Multiple Drafts modelAs our exploration of the history of AI has made cl ear, AI researchers have focused most explicitly on intelligence, rather than consciousne ss – while we are concerned with consciousness more than intelligence. It is a stron g selling point of Dennett's model that it explains consciousness in terms of intelligence, ra ther than appealing to something else entirely.Dennett argues that our modern conception of consci ousness is influenced, still, by Cartesian dualism41, and argues that many of those who claim to be mat erialist still cling to the idea of a central point of consciousness, a Cartesian theater, which leads immediately to an infinite regress, because it must explain the consciousness that it locates in that central point – which it can only d o by positing another central point, and so forth. Instead, Dennett proposes that consciousn ess consists of “a bundle of semiindependent agencies”42 (a very similar argument to that put forth by Marv in Minsky in Society of Mind ), and that the operation of all these agencies tog ether results in the phenomenon we term consciousness. It is unclear whe ther, by this, Dennett means to conflate the idea of consciousness with the concept of a self – consciousness which, under the Multiple Drafts model, in some ways provi des the single unity that we speak of when we talk of a self, and keeps track of the self -narrative that Dennett views as41Dennett. Consciousness Explained The MIT Press, 1991, p. 10642Ibid. p. 26040

PAGE 44

particularly important. In any case, the two are mo re closely linked under the Multiple Drafts model than elsewhere. Conventional philosoph ical wisdom conflates the subjective aspect of consciousness with qualia – a conflation which Dennett resists. For Dennett, the literature on qualia has become so imp ossibly snarled that it would be better to simply start over43. While Dennett readily grants that there seems to be qualia, and a subjective aspect of consciousness, this is as far as he goes – under the Multiple Drafts model, the processes by which a machine and a human do things that conventionally would require the use of qualia are analogous. The subjective aspect (what it feels like to be conscious) arises from the larger context of the self-directed processes, rather than being an inherent aspect in a particular process (o r processes) or ingredient. The Multiple Drafts name arises from the way sensor y experience works in the model – there are a variety of sensory inputs from a given event, and those inputs can be interpreted in a variety of different ways. As the inputs are received by the various agencies that make up the brain, the interpretation s of those inputs are made available for the eliciting of behavior. The different interpreta tions constitute multiple drafts of the same real event, and the only way we can determine which one rose to the level of conscious experience is by the consequences it leav es behind – particularly behaviors elicited, and the memory of that particular draft. Clearly, this renders memory an extremely important part of consciousness in the Mu ltiple Drafts model, but it also gives Dennett his strongest argument against a “central p oint of consciousness” – the43Ibid. p. 36941

PAGE 45

Orwellian/Stalinesque revision argument.Dennett proposes an example44 in which our memory of a particular event has been altered (in his example, a memory of a particular w oman has been revised with the addition of glasses that were not there in reality) This alteration could have come about in one of two ways. The Orwellian revision is a rev ision after the fact of conscious experience – we consciously experience seeing a bar efaced woman walk by, and after that experience has been planted in our memory it i s altered to include the glasses. As long as that alteration occurs before the first att empt to access that memory, all our memories of that experience will include the glasse s. The alternative is the Stalinesque revision, one that takes place before the fact, as it were. Instead of an accurate experiencing of the event later altered, the Stalin esque revision renders our initial experience inaccurate – we consciously experience t hat the woman is wearing glasses from the very moment we see her, even though she is not. These revisions seem to be very different, and we could certainly make the dis tinction between them in the arena their names came from (disinformation campaigns), b ut Dennett argues we cannot meaningfully differentiate between the two revision s at the level of consciousness, regardless of the theory of consciousness we choose to use. The Stalinesque and Orwellian models of revision both account for all r elevant data, and are indistinguishable from each other even under models that include a Ca rtesian Theater. To make this argument, Dennett uses the color phi p henomenon – a phenomenon in44Ibid. pp. 116-12542

PAGE 46

which subjects are shown a red dot and then a green dot, with a time between in which no dot is present – but in which subjects report seeing a red dot that moves towards the eventual location of the green dot, and changes col or while moving45. This report is puzzling because it appears to show that the subjec t knows both the direction and eventual color of the second dot before they experi ence it. Dennett proposes both Orwellian and Stalinesque explanations for how this report could come to be believed by the subjects – to quote Dennett, “one [model] posit s a Stalinesque “filling in” on the upward, pre-experiential path, and the other posits an Orwellian “memory revision” on the downward, post-experiential path, and both of them are consistent with whatever the subject says or thinks he remembers”. Any theory of mind that had a finish line within the brain would, in theory, have a principled way of de termining which explanation was correct – one would have to determine what the cont ent of the experience was as it passed the finishing line, and thus became conscious. In p ractice, further experiments investigating the phenomenon render the explanation s under both models almost amusingly complex and outlandish. Dennett prefers t o dispense with the idea of a finish line of consciousness, and so he must dispense with any possibility of a principled discrimination between Orwellian and Stalinesque ex planations. But, if the person whose memory was in question could tell the difference be tween an Orwellian and Stalinesque revision, there would have to be a principled way o f distinguishing between the two that would be relevant to the question of what was consc ious to that person and thus a principled way of determining a finish line in cons ciousness. Neither the subject, nor outside observers, can distinguish between the two models – so, for Dennett, it is a45Ibid. p. 12043

PAGE 47

difference that makes no difference. This is difficult for one to accept, initially, and Dennett understands. In the arena that spawned Dennett's chosen terms (misinformation camp aigns), the difference is clearly a real and meaningful one, and there exist obvious pr incipled ways to distinguish between the two – but this is because of the time frames. W e can certainly tell in other instances when we are merely inferring, for example, motion, and when we have consciously experienced it. Dennett uses the example of seeing someone in two different places in two different flashes of lightning, and inferring t hat he or she moved in the intermediate time – but we certainly wouldn't claim to have perceived that motion. Dennett thinks that the reason we cannot make the distinction between O rwellian and Stalinesque revisions with regards to the color phi phenomenon is related to the timescale on which the important stuff takes place. In the lightning examp le, the flashes could be as much as 3 or 4 seconds apart – or as close as a second or so – a nd we would know we had merely inferred motion. When we attempt to make a distinct ion between Orwellian and Stalinesque models of consciousness-altering, howev er, we are dealing with timescales of 200 to 500 milliseconds. This causes a problem beca use, at such small timescales, “the brain” does not suffice as a finishing line – it is possible for different stimuli to beat each other to different points within the brain, so we a re forced to find a more fine-grained finish line, which does not, in fact, exist. Consider an example with a larger time frame. The s ame color phi phenomenon is 44

PAGE 48

displayed, but the subject of the experiment first reports two separate dots, and after an hour reports instead having seen a single moving do t. In a case like this, we could obviously determine whether an Orwellian revision t ook place. This time frame allows for extremely rough-grained differences – but on th e time frame Dennett is dealing with, we would need such a fine-grained finish line to di stinguish between Orwellian and Stalinesque revisions that any such finish line we posited would merely create another “difference that makes no difference”.This leads Dennett to determine which events became conscious not by positing a finish line but by a more functional definition – an event becomes conscious if it leaves behind consequences, elicits behaviors, and can therefore be remembered as being conscious. Events which never reach consciousness never leave behind these consequences, and so cannot be remembered or reported.While the memory is extremely important in the Mult iple Drafts model, Dennett does not treat it as gospel – his method of heterophenomenology explicitly seeks to treat the reports of experiment subjects as the data themselv es in explaining what is really going on, rather than treating the content of the reports as the data. This method is defended from charges of irrelevance by phenomena such as th e phi phenomena, where the reports of conscious experience are physically impossible, but are nonetheless consistent and strongly held accounts of what happened. 45

PAGE 49

As well as forming the foundation of the process by which events become conscious in the model, the memory also provides the location of the self – that unified entity to which we refer when we call someone by name, or refer to ourselves. Where the memory maintains a self-narrative as that particular consc iousness goes through life, Dennett sees a “center of narrative gravity” – an abstraction an alogous to the physical abstraction of a center of gravity – as what we mean by the self46. Dennett sees humans as needing, without necessarily understanding why, to represent themselves to the outside world (and, indeed, to themselves) – and to do this, we employ stories, and weave a self-narrative that tells both us and others who we really are. The cen ter of narrative gravity – the self – is an abstraction, not to be found at some particular location in the brain, but an enormously helpful one, aiding us with the tasks of both selfinterpretation and other-interpretation. One might ask, what need is there to self-interpret ? I already know who I am! But, on the contrary, Dennett would say we do not. Dennett sees people going through life in a way that is quite similar to the life of a fictional ch aracter – as we go through life, we are made more determinate in the same way a fictional c haracter is made more determinate when things are written about them. Of course, ther e are differences – we can think about the past and in some senses edit who we are by edit ing our memory of the past, an option not available to writers of fiction, but the simila rity is surprisingly strong. Positing of the “center of narrative gravity” (the self) is necessa ry both to explain and predict the behavior of others, and to explain and justify the behavior of ourselves. One of the questions the model raises is, if there is no center of consciousness except in46Ibid. p. 41846

PAGE 50

the abstract, who or what is directing the various agencies in the brain to do whatever it is they end up doing? If every agency is self-directed then aren't we merely escaping the question of how does consciousness work by shifting it one level “downwards” in the hierarchy?. If every agency is self-directed, we ne ed to explain that director somehow, which is precisely the task Dennett is undertaking in the first place. Instead, Dennett resists the question itself, positing instead that the director is an abstraction similar to, or perhaps even identical to, the self – not to be fou nd in reality, but useful nonetheless. Thus, Dennett sees a sort of structural and unified self-direction as a feature of the total system of semi-independent agents. Dennett explicit ly states that, while there is a virtual captain of consciousness, none of the “semi-independent ag encies” is ever elevated to a status that is actually above that of its companion s. When asked “who is in charge?” Dennett's answer is “first one coalition and then a nother, shifting in ways that are not chaotic thanks to good meta-habits that tend to ent rain coherent, purposeful sequences rather than an interminable helter-skelter power gr ab”47. This, Dennett terms the “Pandemonium model”, which is placed in opposition to the more traditional “Bureaucratic model” the model in which there is a central executive who makes all the decisions, and who answers the question “who is in charge?” much more neatly. To compare these models, Dennett turns to the still unsolved question of how utterances are made. The Bureaucratic model, complete with a “ Central Meaner” that directs the language production process in what to produce, imm ediately leans towards an infinite47Ibid. p. 22847

PAGE 51

regress – how does the Central Meaner give his inst ructions to the rest of the language production process? It must be in some kind of lang uage, and thus there must be some kind of language production process entirely contai ned in the Central Meaner already – and thus, a new Central Meaner. The alternative is something like what happens in computers today – carefully programmed selection of canned sentences, insertion of the correct values for the variables, and uttering the end-product – but even this has an outside programmer who basically writes the rules o f language for the process, and who merely moves the regress to another self. The Pande monium model is difficult for us to imagine – but this is to be expected, as it is a ca ricature as much as the Bureaucratic model is. The real model is likely to lie somewhere between the two models, although Dennett clearly believes it lies much closer to the Pandemonium model than the Bureaucratic. Dennett proposes that, rather than co mpletely random “why don't we say X” questions being judged by a battery of judges, s omewhat targeted “why don't we say X” questions arise from the semi-independent agenci es within the brain, as a result of their semi-independent curiosity about some feature of the world-data they have access to through our faculties of perception. This would res ult, to use a modified version of Dennett's example, in a process where, at first, a “why don't we say X” questioner being prodded into action by, perhaps, some other agent w hose responsibility it is to detect when interlocution is required. This first question er proposes some nonsensical gibberish, which is then modified into a gibberish sentence co mposed of real words by another agent, which is then modified further to a real sen tence by another agent, who is interested in a particular feature of the target of the sentence (in the example, the size of 48

PAGE 52

his feet), and so on and so forth until the panel o f judges deem it an acceptable sentence with regard to both the world-data in question and the communicative goal in question. This, of course, would be happening in multiple pla ces in the brain simultaneously, as various agencies within the brain noticed something about the sense data available to them and suggested possible interlocutions. At some point, one of these suggestions would (to continue the metaphor) become “popular” e nough within the agencies that it reached consciousness, and as a result was actually uttered. The Pandemonium model might include a function that looks suspiciously like the backpropagation algorithms that drove neural network re search into something of a frenzy – the utterance-generators propose possibilities, and the utterance-judges give some kind of feedback as to the suitability of the proposed utte rances, and the utterance-generators then respond to that feedback by altering their pro posed utterances in ways that were, in some sense, suggested. An artificial intelligence that operated on the Pan demonium model is an interesting idea, and brings to the forefront an important feature of the human brain that is rather important for the Pandemonium model, but is not oft en mentioned – that of plasticity. The difficulty we would face, today, if we tried to make a computer that operated on a Pandemonium model would be that we would be forced to set in stone some crucial features about the utterance generators and the jud ges, or some crucial feature in general for another kind of program. This would rigidify th e system somewhat, and would make 49

PAGE 53

it rather predictable after enough observation – th e system would be unable to change the features that were programmed in, even if it “knew” that those features were the ones that needed changing, while humans clearly do learn. Giv en the plasticity of neurons in the brain, however, that is not necessarily what happen s in a real Pandemonium. Indeed, the process of learning to speak, and the much longer p rocess of becoming good at it, suggests that new utterance-generators and judges a re being created (or, at the very least, old ones are being modified) over time, as, at some level, we judge the judges, as it were. Plasticity is certainly possible to code into any a rtificial system, although coding it for a system that was supposed to be generally intelligen t (as opposed to intelligent in a particular area) would be a fairly difficult propos ition even in theory – a better solution would be to, in some way, code a system so as to ma ke it able to code new plasticity into itself (although this, too, is more difficult than it sounds). Plasticity in artificial systems, as it is now, will always be limited by the imagina tion/forethought of the programmers48. It is, of course, possible that this is true of pla sticity in real neural networks as well – we do not yet understand neurons sufficiently well to determine where the limits to their plasticity lie. The point is that, at this point in time, plasticity is always limited by how much the programmer is willing to code. Moreover, s ome kinds of plasticity demonstrated in the brain would be impossible to co de into anything but a neural network (specifically, the plasticity displayed when the br ain, in response to damage, maps functions that were located in the damaged area to other areas of the brain, which were48This is not as damning a limitation as it might sou nd, as in some sense the plasticity of the brain is also limited by its “programmer” certain brain injurie s are never recovered from fully, which indicates some kind of limitation on what, exactly, is capabl e of changing in the brain.50

PAGE 54

not “intended” to perform those functions). The Pandemonium model brings to mind the oft-encoun tered description of groups of people as having a “group consciousness” – even in the absence of a clear leader. For instance, one might say that the “group consciousne ss” of the United States includes a degree of “pull yourself up by your bootstraps” har diness, strong independence, a healthy (or perhaps not) dose of hubris, and so forth. Whil e the United States has a clear leader (the Bureaucratic Executive) in the President, he h as certainly never dictated that every American must be fiercely independent, or prideful, or what have you. The group consciousness arises as a result of certain ideas ( or, perhaps, memes) becoming famous in the United States in a way that leaves a lasting im pression – which then allows for a center of narrative gravity to appear for the Unite d States, and thus, for the United States to have a self of sorts. One of the questions this analogy raises is whether or not the analogy to group consciousness is a condition of co nsciousness itself – does true consciousness require a gaggle of semi-conscious, s emi-independent agents? Could there be any true consciousness that was not in some sens e analogous to group consciousness? Dennett does not foray into this argument, but I am inclined to say that there could not be a a consciousness along the Multiple Drafts model t hat did not have strong similarities with “collective consciousness”, as it is termed in psychology. One of the important features of collective consciousness as it is defin ed in sociology and psychology is that it arises from “shared beliefs and moral attitudes” wh ich operate “as a unifying force in society” and this gives us a possible answer to t he persistent question of how to avoid 51

PAGE 55

what Dennett called “an interminable helter-skelter power grab”. Every semi-independent agent in a mind is, at the very least, concerned wi th its continued survival – certainly, if Dawkins is correct, that concern is ingrained at th e cellular level – and this shared concern would select away from “meta-habits” that d id not tend away from the power grab, because a consciousness that was more a power -grab than a unified organism would be significantly slower to act and, probably, more likely to act in the wrong way. Dennett stresses over and over, in Consciousness Ex plained and elsewhere, that the crucial point of his MD model is that there is no medium of consciousness Mental contents achieve consciousness on the basis of achi eving “typical and “symptomatic” effects – on memory, on the control of behavior and so forth”49. When we identify a mental content as conscious, there is nothing about that content we can point to that denotes consciousness -it is its effect on the rest of the system within which it lives that gives it consciousness (and can take it away). Cons ciousness is thus a purely emergent result of the underlying organization and activity of the brain, which is indeed the only way it could be given Dennett's Orwellian/Stalinesq ue example proving the nonsensical nature of a “finish line” to consciousness.49 Philosophy and Phenomenological Research vol. LIII, 4, December 1993, p. 929. 52

PAGE 56

ConclusionThe question we set out to address is whether or no t a conscious Artificial Intelligence is possible, operating along the lines of Dennett's Mu ltiple Drafts model of consciousness. Dennett's model is quite well suited to the questio n of AI, as it explains consciousness with a great deal of reference to the faculties tha t comprise what we would call intelligence. I intend to answer this question in t wo parts. First, I will demonstrate that strong AI is in principle possible, and then proceed to discuss the current state of AI and its ability to progress towards a strong AI as woul d be suggested by the Multiple Drafts model.Is it, then, possible to create an artificial intel ligence that is conscious? The question can be answered at a number of levels – levels which di stinguish themselves on the basis of the meaning of “artificial”. I will attempt to argu e from the most trivial definition of “artificial” to the one that is most relevant to ou r discussion, in small steps – illustrating, hopefully, at each level that nothing important has changed that should give us pause in assigning consciousness to that hypothetical entity At the most trivial level, it is in principle obvio us that if we were to artificially grow a brain along the same process that it develops in na tural processes, it would be intelligent and conscious. Indeed, one might argue that this is an accurate description of in-vitro 53

PAGE 57

fertilization, but this is not really what we are a fter. A less trivial reading of artificial might suggest that we could (with sufficient processing power) simulate all the neurons of a brain, mirrori ng the architecture of a brain, starting from a point in tremendously early brain developmen t and “growing” the brain50, except in a purely artificial sense. This is both a trivia l example and a non-trivial one – if one is truly dualist, the conclusion is that this process would not, in fact, result in consciousness – but it is difficult to think of a functionalist p osition that would make the same claim. Even Searle, it seems, would agree that such a syst em would be conscious51 it is, essentially, a natural brain running on different h ardware. In all important respects, it is identical (if the process is properly carried out) to the end result of a piece-by-piece replacement of every single brain cell by an artifi cial cell that exactly replicates the function of its predecessor.It must be noted at this point that this is not as easy as it sounds – there are capacities that are arguably present at or before birth that still elude us. The most obvious example of such a capacity would be the ease with which childr en learn their first language. Chomsky in particular suggests that this is due at least in part to a prepared internal language framework that is merely filled by what ch ildren hear in their first years of life, but the programming of such a thing would be rather more complicated than that, and is50By which we mean allowing the simulated brain to de velop in a way analogous to the way a natural brain develops. It would, presumably, learn languag es and learn to generate utterances, and so forth.51This is, however, somewhat unclear. Searle has voic ed objection to the idea of strong AI on the ground s that consciousness is a physical property, and cann ot thus be “simulated” under any circumstances. Whether he would view a process like the one descri bed as a simulation is difficult to predict.54

PAGE 58

probably beyond our ability currently.A deceptively minor difference from the previous ex ample becomes decidedly less trivial, however – what if the artificial system were starte d from a point of some development? What if the artificial brain came into existence as a mirror of a fully developed, 20 year old brain? It is not nearly as clear that it would be conscious – and here the question turns rather more strongly on software concerns. This que stion itself can be taken in two importantly different ways – one assumes only that there exists sufficient processor power to do everything that the cells do, and the o ther assumes, on top of the power requirements, that our understanding of consciousne ss is such that we can effectively program it. The second assumption makes this, again a fairly trivial example – if we can program consciousness into a simulated brain, it is tautologically true that it is conscious. But if that assumption is not made, it seems that t here exists at least a thread of argument to suggest that consciousness is not guaranteed. Th e argument would effectively claim that the process of development is critical to the development of consciousness – certainly, this too is trivially true, as the consc iousness of a newborn child is different from that of a fully grown adult. This argument doe s not, however, rule out the possibility of an artificial consciousness being started at a c ertain point down the development timeline – it merely makes that prospect somewhat h arder. Such a consciousness would need, for instance, memories of its “life” from bir th to age 20, but this can be artificially reproduced. It is difficult to think of development al process that does not result in a state, at a point in time, which can be enumerated precise ly and thus reproduced – even if the 55

PAGE 59

state at that point in time had some sort of callba ck to a previous state it held52. Consciousness must be capable of continuing forward at any arbitrarily picked point in time, and it must be possible to describe that cons ciousness entirely with reference to something within that slice of time, at least for a functionalist. The argument that consciousness must develop, or grow, must either ce de the major point it makes (namely, that true development is absolutely necessary for c onsciousness to exist) or fall straight into dualism, which is already largely discounted.We must also, however, grant that the above takes a significant step away from the examples it follows in terms of difficulty. While i t is not a large step away from an artificial replacement of a brain, piece-by-piece, such a process would be simple in theory (as each piece would be extremely simple to program ). Starting after a significant amount of development time requires that a great deal of m emories are coded for, and inclinations, and so forth and so on. Our understan ding of such things is not yet at the point where such coding is within our grasp.Let us assume, therefore, that we can program consc iousness at any point along its development timeline. Given that there are no parti cularly convincing arguments against the possibility of this in principle and given that it has already been accomplished i n nature53, it is presumable that, at some point in the futur e, this will be the case. Given that52It is presumably true that one could 'compress' the data required to fully describe a given neuron-sta te, for instance. Such a compressed description could b e held in memory, and given an 'address' that could be called by another function. This would allow pas t neuron-states to influence current/future neuronstates without needing anything except that which w as present at the time directly prior to the latter neuron-state.53By which I mean that, for instance, my consciousnes s is capable of continuing on from one moment to56

PAGE 60

assumption, and assuming agreement with all of the prior examples, can one really disagree with the claim that, regardless of the arc hitecture on which the program existed, a properly coded program for consciousness would re sult in consciousness? Such a program would, by definition, code for all the comp onents of consciousness. It would, by definition, code for the function, and reproduce th e results, of, say, the “panel of judges” Dennett envisions as used to judge an utterance's s uitability to a given conversation. Given sufficient processing power, then, and given the assumption above, it seems one must grant the status of consciousness to such a pr ogram even if the architecture looked nothing like that of the human brain. Of course, on e might make the argument that programs capable of fully emulating consciousness c an only run on certain architectures – but this is an argument that must bring with it t he claim that consciousness entails more than what might loosely be called “computation”. Tu ring proved54 that any computer can simulate any other computer55 – the only result of doing such a simulation is lo ss of speed that roughly scales with the dissimilarity of the s imulator, and the simulated. In principle, then, it does not seem impossible to create strong AI56, given that certain limitations are surpassed, both on hardware and sof tware sides. Having proven the possibility of strong AI in principle, we can move on to the question of practice – butthe next without requiring (I hope) time travel to function.54This is strictly only true of computers with access to infinite storage space, but holds for the vast majority of computers and computer architectures. A von Neumann machine can perfectly simulate a neural network, with sufficient memory and time, an d vice-versa.55Raymond Kurzweil. The Singularity is Near Viking Press, 200556It remains, however, an open question as to whether or not such an AI would necessarily be similar to human consciousnesses, however. It is worth wonderi ng what would happen if we created an exact replica of a brain, but used, say, silicon and copp er instead of their biological counterparts. This w ould vastly increase the speed at which signals travel t hrough the “metal brain” and it is not clear how this would impact the functional result of consciousness Such a speed differential could well change what we consider 'familiar results' of consciousness, re sulting in fairly significant differences.57

PAGE 61

first, there remain possible objections as to the s trong AI in the most recent example's consciousness. I have established that an AI progra mmed to start, as it were, at the age of 20 – with pre-programmed memories and inclinations, likes and dislikes, beliefs and desires is not unconscious on the basis of being fu nctionally capable of everything required of consciousness, but it is not a trivial question to ask if the programmed nature of these features detracts from their “conscious re ality”. One way that might offer insight into this problem is to consider a hypothetical human, who has been brainwashed at the age of 20 to have c ertain memories and inclinations, likes and dislikes, and so on. It does not seem as if the mere fact of his or her brainwashing would cause us to retract from them th e label of consciousness – but it does seem as if something is importantly different, when compared to a human who endured no brainwashing, whose beliefs and desires are his own. This can to some extent be thought of as an extension of the idea of a “doer”, because in the case of brainwashing or programming it is easy to assume that the programme d content is programmed for purposes that further the programmers ends, rather than the programmee's ends. This is not necessarily the case, though, and even if it is the difference is not clearly a difference that makes a difference, as Dennett would say.When parents raise their children, at least in some cases, they make a concerted effort to identify the child's aptitudes, weaknesses, passion s, interests, and so forth57. In some cases the effort is highly structured, in others it is quite fluid – but in all cases the57Malcolm Gladwell. Outliers Little, Brown and Company, 2008, pp. 102-10558

PAGE 62

possibility exists that the parents make an error. Perhaps they latch onto an early expression of interest in sport X, instead of notic ing an expression of interest in musical instrument Y. This leads to the parents nudging the ir child to pursue X, rather than Y, all the while believing they are encouraging the “right ” interest over the “wrong” one. Over time, it is not absurd to suggest that the eff ect this might have is not dissimilar to brainwashing in results. A child who encounters thi s kind of nudging will continue to develop an interest in X, while neglecting Y, and i n the end may perhaps be an accomplished competitor in X. It is of course possi ble, however, that he or she was capable of being a world-class soloist on instrumen t Y instead – but because of an environment fostered by his or her parents, he or s he ends up something else entirely. Nobody would suggest that such a person was not con scious, or even that there was a difference between them and someone who had pursued interests with no nudging from parents – but how, in results, is this different fr om brainwashing? A set of likes and dislikes that is “wrong” is achieved, as a result o f outside influences58. Consciousness exists in a succession of moments – e verything necessary to operate consciously in one moment must be there in order fo r the operation to occur. Certainly, this is not to suggest that Dennett is wrong to tak e issue with thin timeframes – it is evidently impossible to make a principled distincti on between Orwellian and Stalinesque58Of course, the real conclusion this example support s is that the idea of “right” and “wrong” sets of likes/dislikes is itself questionable.59

PAGE 63

stories of how something becomes conscious. However consciousness exists as an emergent process – none of the processes from which it emerges are themselves necessarily resistant to precise measurement in tim e. At each moment, every sub-process in consciousness must have the input necessary to g enerate the output for the next moment – and thus everything that is necessary for consciousness to operate across time is present, must be present, at any given time. Eve n if consciousness made consistent and explicit use of long-term memory in its normal oper ations, those memories are present at the time of use. As such, they could be programmed if they were known. Likes must be known, interests must be expressed, and capabilitie s must be ready for deployment. Dennett brings in the idea of a “center of narrativ e gravity” precisely because there is no ongoing string of consciousness that can be located physically in the brain – it is merely a set of memories. All of this must exist as a whole – consciousness cannot travel back in time to get a memory, it has to call a memory that is stored in consciousness but is present at precisely the time it is required. As su ch, it is all abstractly reducible to a set of information, stored in any given way. Information i s programmable, and given Turing's proofs, the computer on which it is stored is in so me ways irrelevant. The example of the child who is nudged in the “wron g” way illustrates how meaningless the difference between programmed and natural devel opment is – they both must exist as a record that is present in full at all points of t ime, and thus they are both in principle exactly replicable. Perhaps there exist practical d ifficulties with producing an exact replication of a given natural development, but in principle there exists no reason to 60

PAGE 64

suppose that conscious experiences arising from pro grammed beliefs, desires, and so forth are any less conscious than those rising from naturally developed beliefs, desires, and so forth.It is in some ways, however, less interesting to as k how one might make an AI mirror a human than to ask how an AI might develop if allowe d to develop naturally, as free from programmed content as is possible. Granted, we are concerned here with the question of an AI developed along the lines of the Multiple Dra fts model of consciousness – so, presumably, a great deal would be similar in both c ases. The MD model has as its major strength the feature of explaining consciousness mo stly by reference to intelligence, which suggests that were we to create an AI that ha d all of the various sub-processes and subsystems that the MD model suggests consciousness has, it would develop naturally in much the same way that a human consciousness would. Differences would likely manifest as a result of hardware differences, diffe rences in the capabilities of the body in which the AI found itself, and perhaps some differe nces rooted in different methods used in the subsystems to accomplish the same ends.Having established the in principle possibility of a strong AI is consistent with the Multiple Drafts model, we can now turn to the quest ion of what, in practice, stands in the way of strong AI. Dennett, by virtue of not writing with specific regard to AI, forces a rather disjointed approach here – without specific examples of tasks a conscious AI would need to be capable of, we cannot undertake a particularly detailed examination of 61


Much of the thrust of the MD model could indeed be characterized as a shift away from a collection of large tasks (like "language", "reasoning", or "grand thinking") to much smaller tasks, which can be accomplished easily by "stupid" means.

The MD model suggests, however, that certain tasks are, if not necessary, at least rather important in human consciousness. The most obvious is the utterance-generation system, with its panel of judges, but other fairly obvious needs include systems to deal with sensory perception (whatever form it takes), and perhaps a "narrative generator". An ability to robustly handle uncertainty also seems necessary, but much of that ability, I think, comes from the nature of the MD model itself rather than from a particular agent or set of agencies within it.

Dennett says relatively little about how his utterance generator, with its panel of judges, would work – but what he does say bears some limited similarities to what is known as the blackboard architecture in AI. In this architecture, various processes can put their results on a "blackboard", from which any other process can take input, do whatever it is that it does with it, and then place the new result on the blackboard, and so forth.[59] Dennett's pandemonium model of how utterances are shaped strongly suggests such a blackboard exists, to which all the various agents who can do something useful to utterances have access. The use of agents in this context is in some senses synonymous with our previous talk of sub-processes – these agents are semi-independent in consciousness, capable of doing their tasks without complete direction from another authority.

[59] Ben Coppin, Artificial Intelligence Illuminated, Jones and Bartlett, p. 469.
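To make the idea concrete, the sketch below shows a minimal blackboard in Python. It is purely illustrative: the Blackboard class, the two toy agents, and the single revision pass are my own assumptions for the sake of the example, not anything specified by Dennett or by Coppin's presentation of the architecture.

```python
# A minimal blackboard sketch (illustrative only): independent "agents" look at
# what is currently on the shared blackboard, transform it if they can, and
# post their result back for any other agent to pick up.

class Blackboard:
    def __init__(self):
        self.entries = []                 # everything any agent has posted so far

    def post(self, item):
        self.entries.append(item)

    def latest(self):
        return self.entries[-1] if self.entries else None


def capitalize_agent(draft):
    # One semi-independent agent: all it knows how to do is fix capitalization.
    return draft[0].upper() + draft[1:] if draft else draft

def punctuation_agent(draft):
    # Another agent: all it knows how to do is terminate a sentence.
    return draft if draft.endswith(".") else draft + "."


board = Blackboard()
board.post("the box is behind the pyramid")      # some initial, rough draft

# Each agent reads the current draft and, if it can improve it, posts the
# revision back to the blackboard for everyone else to see.
for agent in (capitalize_agent, punctuation_agent):
    revised = agent(board.latest())
    if revised != board.latest():
        board.post(revised)

print(board.latest())     # "The box is behind the pyramid."
```

None of the agents needs to know anything about the others, only about what is currently on the board; that is the feature that makes the architecture a tempting analogue for Dennett's pandemonium of semi-independent sub-processes.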


The blackboard architecture also gives us one possible way that the problem of judgment criteria might be solved – this was a problem that faced AI as well. There does not seem to be a strong algorithm for finding the perfect utterance for a given situation, so the goal instead turns out to be to find something that is vaguely optimal for the situation, given the strengths and limitations of the utterer. Such optimums can often be found rather quickly by systems that, to oversimplify, measure how much "progress" was made in the last given timeframe; when that progress falls below a threshold, the system takes whatever it has achieved as a vaguely optimal result. In the case of an utterance generator, then, we might finally utter a sentence when the only thing that happened to it recently was a modification in punctuation – but while it was still being modulated from an incoherent noise into a nonsensical string of words, the amount of activity would be such that the utterance would not be considered ready yet.
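That stopping rule can be sketched directly, again under my own simplifying assumptions: revision continues while the agents are still changing the draft substantially, and the current draft is accepted as "vaguely optimal" once a full pass alters very little. The progress measure used here (characters changed per pass) is an arbitrary stand-in for whatever the real criterion would be.

```python
# A hedged sketch of the "progress below threshold" stopping rule.  Agents keep
# revising a draft; once a full pass of revisions changes very little, whatever
# we have is accepted as "vaguely optimal" and uttered.

def chars_changed(old, new):
    # Crude measure of how much a revision pass altered the draft.
    return abs(len(new) - len(old)) + sum(a != b for a, b in zip(old, new))

def capitalize(draft):
    return draft[0].upper() + draft[1:] if draft else draft

def punctuate(draft):
    return draft if draft.endswith(".") else draft + "."

def settle(draft, agents, threshold=2, max_passes=20):
    for _ in range(max_passes):
        before = draft
        for agent in agents:
            draft = agent(draft)
        if chars_changed(before, draft) < threshold:
            break            # progress has stalled; take what we have and utter it
    return draft

print(settle("the box is behind the pyramid", [capitalize, punctuate]))
# The first pass changes the draft; the next pass changes nothing, so it settles.
```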


One issue with this idea is that the panel of judges Dennett describes is not, in its entirety, a conscious process – indeed, the conscious part of it is quite small (the "real" utterance, and perhaps some highly evolved alternatives). What makes those final utterances conscious ones, as opposed to the early ones? It is in answering this question that the blackboard architecture fails us – its only possible answer is the evaluation criterion[60], whatever it is, but that criterion is applied at every point in the process and so presumably would make either everything conscious or nothing conscious until after the speech act occurred (at which point, of course, it would be in memory and would thus in some sense have been conscious by definition[61]). This is, of course, more obviously true of Dennett's particular definition of consciousness than of the more common one – as a speech act is a consequence of consciousness, everything which directed it is, in some sense, conscious by association. This is one of the first ways a problem of localized power manifests itself – the evaluation criterion holds too much power in the system, and thus operates in some sense as the Cartesian theater's observer, a system which is untenable as a theory of consciousness. Dennett touches broadly on his answer to the question of when something becomes conscious with his metaphorical talk of celebrity – a "consciousness candidate" becomes conscious when it becomes famous enough within the population of agents that comprises consciousness, at which point it (likely) enters memory, which cements its place in consciousness. Because of what Dennett describes as, essentially, evolved good habits in consciousness, "good" candidates are more likely to become conscious than bad ones.

[60] Such as the "less than X progress" criterion.

[61] It is perhaps not this simple. Dennett distinguishes between stable and unstable networks of connections in the brain – the one for remembering, say, how to do a math problem and the other for reacting to new information. It is not entirely clear whether he thinks either of these immediately confers consciousness onto its content, but certainly it is true that the stable network is open to reporting, and is thus in some sense conscious insofar as it is part of the conscious narrative we create.

Cognitive celebrity, as a model of how the unconscious becomes conscious, requires rather more than simply an evaluation function to determine when to stop. Consider a population of semi-intelligent agents, each of which is quite similar to the others in how it operates and what it operates on. Such a population would create "celebrities" of sorts, but those celebrities would be extremely predictable, as what one agent liked would be liked for meeting the same criteria that guide the rest of the agent population. For cognitive celebrity to work as a way for unconscious things to become conscious, and to do so in a way that made sense, the cognitive community of agents would have to be large and diverse, to ensure that the method was capable of bringing to consciousness things suited to all the possible situations where consciousness generates a benefit (essentially, all of our lives).
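A toy version of that contrast can be written down, with the caveat that the voting scheme, the candidate strings, and the "fame" threshold below are my own stand-ins rather than anything Dennett commits to. Each agent boosts the candidates it happens to care about, and a candidate counts as "famous" once enough of the population has boosted it; a population of clones makes the celebrities entirely predictable, while a more diverse population does not.

```python
# Illustrative sketch of cognitive celebrity (my assumptions, not Dennett's
# mechanism): a candidate becomes "famous" once enough agents boost it.

from collections import Counter

def famous(candidates, agents, share=0.5):
    votes = Counter()
    for agent in agents:
        for candidate in candidates:
            if agent(candidate):              # does this agent care about it?
                votes[candidate] += 1
    cutoff = share * len(agents)
    return [c for c in candidates if votes[c] >= cutoff]

candidates = ["loud noise", "familiar word", "faint smell"]

# A homogeneous population: every agent applies the same criterion, so the
# outcome is fixed in advance.
clones = [lambda c: "noise" in c] * 5
print(famous(candidates, clones))        # ['loud noise'], every time

# A more diverse population: different agents care about different features,
# and fame has to be earned across that diversity.
diverse = [lambda c: "noise" in c,
           lambda c: "word" in c,
           lambda c: "familiar" in c,
           lambda c: len(c) > 11]
print(famous(candidates, diverse))       # ['familiar word']
```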


To some extent, the blackboard architecture is somewhat like the Multiple Drafts model as a whole – although the evaluation criterion used in any blackboard architecture holds too much localized power for it to be to Dennett's liking as a theory of consciousness. Most of the inner workings of consciousness could be based on a blackboard-esque architecture, however. Dennett writes early on about the phi phenomena as evidence for a multiple drafts model of consciousness – suggesting that he thinks a system that is on the surface similar to his utterance-generating system operates to generate what we think we saw. The blackboard architecture gives us a starting point, of sorts, in attempting to develop the kind of utterance-generating sub-system that Dennett has in mind, but the problem of localized power in the evaluation function is the hardest problem, in general, that will face the design of any sub-system.

In broad strokes, the blackboard architecture could be used in a basically identical fashion for sense perception, and this brings us to the question of how we might get a computer to have sensory perception in the first place.


When Dennett deals with this question he turns quite rapidly to the example of Shakey – effectively a television camera on wheels, with a "computer brain" capable of performing severely limited tasks, but tasks that were nevertheless representative of a great deal of progress in AI. The discussion is interesting because Shakey never actually sees anything (or at least, we don't think it did) – it has fairly simple systems whereby it analyzes vertices to identify boxes or pyramids or other shapes (Dennett calls this "line semantics").

The question of how Shakey identifies boxes and pyramids is itself misleading, Dennett suggests. Were we to ask Shakey that question, Dennett argues that there are at least three answers that are all in some way right – all of which focus on entirely different "hows".[62] Whether or not Shakey's "method of vision" correlates strongly, or at all, with the way humans see (which is an open and interesting question in itself) is not really important – as long as, functionally, Shakey can do with vision what we can, its answer might as well be "X looks like a box, and Y looks like a pyramid". We could certainly answer the same question that way, although we could of course talk about vertices and lines as well. Creating functionally identical systems is much easier than creating internally identical systems – Dennett sketches an oversimplified version of Shakey's "method of vision" that relies on locating vertices in sequences of 1's and 0's. From a computer science perspective, the question of how to program a "method of vision" is interesting, but rather too complex to answer here – and it is likely the same is true for all of our senses.

[62] Dennett, D. Consciousness Explained, Back Bay Books, 1991, pp. 92-94.
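Dennett's oversimplification can be pushed one step further with a toy of my own devising, which is emphatically not Shakey's actual vision code: treat the "image" as a grid of 1's and 0's and call any filled cell with few filled neighbours a vertex. The point is only that "seeing a box" can bottom out in very dumb operations on binary data.

```python
# A toy vertex-finder (not Shakey's algorithm): the "image" is a grid of 1s and
# 0s, and a filled cell with at most three filled neighbours is reported as a
# vertex, which picks out the corners of a filled rectangle.

image = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]

def filled_neighbours(grid, r, c):
    # Count filled cells among the eight surrounding positions.
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0]):
                count += grid[r + dr][c + dc]
    return count

vertices = [(r, c)
            for r, row in enumerate(image)
            for c, cell in enumerate(row)
            if cell and filled_neighbours(image, r, c) <= 3]

print(vertices)    # [(1, 1), (1, 4), (3, 1), (3, 4)] -- the four corners
```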


All of the senses, at an abstract level, are merely information-gathering tools. Information can be represented in myriad ways (binary, for example, or hexadecimal, or written English, and so forth) – but as long as the system which makes use of the information can in fact make use of it, the form in which it is represented is irrelevant. Whether Shakey distinguishes boxes from pyramids by use of an algorithm that locates vertices in binary code, or by some other method, the fact is that it distinguishes boxes from pyramids by use of sensory data.

The final uncontroversial "need" in a conscious AI seems to be a system that would generate the self-narrative that Dennett essentially equates with the "self". Again, Dennett is quite light on the detail of how he thinks we generate our narratives – indeed, he goes as far as to claim that "Our tales are spun, but for the most part we don't spin them; they spin us".[63] In some ways, Dennett could be read as suggesting that the "self-spinner" is an emergent feature of consciousness rather than a feature integral to consciousness, as he seems to argue that it is a matter of biological necessity that any being can discriminate between "itself" and others. Dennett says almost nothing more to suggest how a narrative might be generated, beyond that it is generated in the ways we generate them, which at least suggests that the generation must be unconscious (at least in part); and from the way humans generate their narratives it does seem more like an emergent phenomenon than an internal one.

[63] Ibid., p. 418.

This makes it similar in that respect to the remarkable ability we demonstrate to operate under conditions of uncertainty. AI research has begun to broach the topic of how to do this, and some ideas have been advanced – the most familiar is probably "fuzzy logic", which in simple terms attempts to derive "degrees of truth" rather than a binary truth value.
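As a small illustration only (the min/max operators below are the textbook-standard choice, not a claim about how any particular AI system does it), degrees of truth can be combined with operators that generalize Boolean AND, OR, and NOT:

```python
# Degrees of truth instead of binary truth values, combined with the standard
# min/max fuzzy operators.  Values and propositions are made up for the example.

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

looks_like_box = 0.75     # "it looks like a box" is mostly true
is_graspable = 0.5        # "it is graspable" is genuinely uncertain

print(f_and(looks_like_box, is_graspable))    # 0.5   -- both together
print(f_or(looks_like_box, is_graspable))     # 0.75  -- at least one of them
print(f_not(looks_like_box))                  # 0.25  -- "it is not a box"
```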


Most of the methods suggested for operation under uncertainty in AI are aimed quite clearly at weak AI – they are themselves still algorithmic processes with determinate failings and strengths, which does not seem to be the case for human operation under uncertainty. This suggests that our strength in operating under uncertainty rests with the architecture of our consciousness rather than with a particular agent – and, indeed, it is easy to see how the Multiple Drafts model might produce such a result. The "draft generation" process will proceed regardless of uncertainty – this much is evident from the phi phenomena. The process will generate a number of competing drafts, all of which had to work with the same uncertainty, and then we choose the best of the competing drafts. The differences between people's choices are likely to be found in different learned ways of determining the "best" draft for a given situation, rather than in a strictly followed heuristic.

As Dennett stresses, the Multiple Drafts model makes the task of explaining consciousness a task of explaining the behaviors that we call conscious, in unconscious terms. In a strict sense, Dennett truly is guilty of "explaining away" consciousness – but, if one accepts his argument, what has been explained away has no value. Thus Dennett's Shakey-SHRDLU hybrid is, in fact, a conscious machine in one area – it generates behavior that would normally be called conscious when demonstrated by humans.


Much of the MD model turns on Dennett's rejection of the concept of qualia (or, at least, his redefinition of it). When qualia are removed, much of the question "what is consciousness?" is itself removed – the question becomes one of explaining how the various acts that we take to be conscious are chosen, or how they occur. Speech acts are one such occurrence, and Dennett paints a plausible, if rather difficult to accept, picture of how utterances might be generated by processes that are themselves not conscious.

It bears mentioning, however, that an AI could organize its consciousness broadly in line with the MD model and still be wildly different from a human intelligence – the MD model focuses mostly on things becoming conscious on the basis of fame, or cognitive celebrity, which could be fairly easily programmed (although perhaps not perfectly), but it says relatively little about what processes need be present, or how they work. To take an obvious example, an AI might have no analog for visual perception, or indeed perception analogous to any of the human senses, but could still organize itself based on cognitive celebrity. The output, too, could be completely different – with such a limited understanding of our own consciousness, we cannot honestly claim that the overarching architecture is the reason we arrive at the conclusions we do, or how we do.[64] A computer consciousness might well be just what it is imagined to be in its worst science-fiction representations – a cold, unfeeling, rational monster. We simply do not know, at this point. This motivated some of Joseph Weizenbaum's critiques of AI – the possibility for an AI to be conscious, yet to be hugely different from a human consciousness.

[64] Or, at least, we cannot prove this. Much of Dennett's point is that this claim is perfectly supportable, even if we cannot prove it to our satisfaction without simply doing it.


In the final chapter of Computer Power and Human Reason, Weizenbaum makes the argument that, as computers have progressed (and possibly because of this progress), humans have begun to lament that their lives turn them more and more into robots, and that they cease to make meaningful choices.[65] Within this dogmatic polemic, however, Weizenbaum undermines his own argument – nothing he puts forward as a "choice" (as opposed to a "decision", which is algorithmic) is something that cannot, in principle, be done by an artificial consciousness. The final line of his book asks what we would even mean when we talked (to paraphrase) of choices for machines – ignoring that the characterization as machines is precisely what would disappear were AI to achieve success.

It bears asking as well what the distribution of sub-processes must look like for consciousness to emerge from their operation. One could imagine a group of sub-processes that all do the same thing (to take a trivial example), in which case the idea of celebrity makes little sense – what one sub-process liked would be what all of them liked. Less diverse groupings of sub-processes would naturally be capable of a less diverse group of tasks, and would be incapable of making any kind of sense of consciousness-candidates that did not reflect something about which the sub-processes "cared" – and thus we might be justified in claiming that the consciousness that emerges from a distribution of sub-processes will scale in complexity with the diversity of that distribution in terms of capability.

[65] Joseph Weizenbaum, Computer Power and Human Reason, W.H. Freeman and Company, 1976, pp. 258-260.


An importantly linked question is whether or not there is a clear borderline between a population of agents that is unconscious and one that is conscious, or whether certain agents are an absolute necessity for consciousness while others are not. If we could not generate utterances, but could do everything else, would we still be conscious? Does consciousness instead, perhaps, depend on the capacities that might deserve the "meta-" prefix, in that they often require the projection of consciousness onto others? Or is it merely an emergent phenomenon that looks a certain way because of the agent population of which it is comprised, but one which emerges from any sufficiently complex such population? It is not at all clear what Dennett thinks the answer to this is, except for the suggestion that perhaps language use is a necessary condition.

Having shown that an artificial intelligence along the lines of Dennett's Multiple Drafts model is in principle possible, and that, in practice, it is possible precisely because it is explained in terms that do not themselves regress to "consciousness", our conclusion must be that the limiting factors in the creation of a strong AI lie on both the "software" and "hardware" sides – we lack sufficient understanding of the unconscious processes that result in consciousness to program them effectively, such that they too would generate emergent consciousness, and we lack sufficiently advanced hardware architectures (and, for that matter, the raw processing power) to run such a massively parallel computer efficiently.


That said, both of these limitations are limitations that cannot reasonably be expected to endure forever (or even for a particularly long period of time), and the root answer to the question of a conscious AI's possibility must be an emphatic "yes".


Bibliography/Works Cited

1. Carter, M. Minds and Computers. Edinburgh University Press, 2007.
2. Churchland, P. Matter and Consciousness. The MIT Press, 1988.
3. Coppin, B. Artificial Intelligence Illuminated. Jones and Bartlett, 2004.
4. Dennett, D. & Hofstadter, D. The Mind's I. Basic Books, 2000.
5. Dennett, D. "Quining Qualia" in Consciousness in Modern Science. Oxford University Press, 1988.
6. Dennett, D. "Real Patterns" in Brainchildren: Essays on Designing Minds. The MIT Press and Penguin, 1998.
7. Dennett, D. "The Message is: There is no Medium" in Philosophy and Phenomenological Research, vol. LIII, no. 4, December 1993, p. 929.
8. Dennett, D. "The Self as a Center of Narrative Gravity" in Self and Consciousness: Multiple Perspectives. Erlbaum, 1992.
9. Dennett, D. "The Unimagined Preposterousness of Zombies" in Brainchildren: Essays on Designing Minds. The MIT Press and Penguin, 1998.
10. Dennett, D. "True Believers: The Intentional Strategy and Why It Works" in The Intentional Stance. The MIT Press, 1987.
11. Dennett, D. Brainstorms. The MIT Press, 1981.
12. Dennett, D. Consciousness Explained. Back Bay Books, 1991.
13. Dennett, D. Darwin's Dangerous Idea. Simon & Schuster, 1995.


14. Dennett, D. The Intentional Stance. The MIT Press, 1987.
15. Franklin, S. Artificial Minds. The MIT Press, 2001.
16. Gladwell, M. Outliers. Little, Brown and Company, 2008.
17. Haugeland, J. Mind Design II. The MIT Press, 1997.
18. Heil, J. Philosophy of Mind (2nd Edition). Routledge Contemporary Introductions to Philosophy, 2004.
19. Hofstadter, D. I Am a Strange Loop. Basic Books, 2007.
20. Kurzweil, R. The Singularity is Near. Viking Press, 2005.
21. Minsky, M. The Society of Mind. Simon & Schuster, 1985-86.
22. Weizenbaum, J. Computer Power and Human Reason. W.H. Freeman and Company, 1976.

