









UNIT I: Introduction to Artificial Intelligence:
According to John McCarthy, the father of AI, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs."
In other words, it is the imparting of intelligence to machines so that they can operate like human beings.
Some of the activities that artificially intelligent machines are designed for are:
Speech recognition, Learning, Planning, Problem-solving
Greek myths of Hephaestus, the blacksmith who manufactured mechanical servants, and the bronze man Talos incorporate the idea of intelligent robots. Many other myths in antiquity involve human-like artifacts. Many mechanical toys and models were actually constructed, e.g., by Archytas of Tarentum, Hero, Daedalus and other real persons.
4th century B.C.: Aristotle invented syllogistic logic, the first formal deductive reasoning system.
13th century: Talking heads were said to have been created, Roger Bacon and Albert the Great reputedly among the owners. Ramon Lull, Spanish theologian, invented machines for discovering nonmathematical truths through combinatorics.
1206 A.D.: Al-Jazari, an Arab inventor, designed what is believed to be the first programmable humanoid robot: a boat carrying four mechanical musicians powered by water flow.
15th century: Invention of printing using moveable type. Gutenberg Bible printed (1456).
15th-16th century: Clocks, the first modern measuring machines, were first produced using lathes.
16th century: Clockmakers extended their craft to creating mechanical animals and other novelties, for example Da Vinci's walking lion (1515). Rabbi Loew of Prague is said to have invented the Golem, a clay man brought to life (1580).
17th century: Early in the century, Descartes proposed that the bodies of animals are nothing more than complex machines. Many other 17th-century thinkers offered variations and elaborations of Cartesian mechanism. Pascal created the first mechanical digital calculating machine (1642). Thomas Hobbes published The Leviathan (1651), containing a mechanistic and combinatorial theory of thinking. Arithmetical machines were devised by Sir Samuel Morland between 1662 and 1666.
18th century: The 18th century saw a profusion of mechanical toys, including the celebrated mechanical duck of Vaucanson and von Kempelen's phony mechanical chess player, The Turk (1769). Edgar Allan Poe wrote (in the Southern Literary Messenger, April 1836) that the Turk could not be a machine because, if it were, it would not lose.
19th century: Joseph-Marie Jacquard invented the Jacquard loom, the first programmable machine, with instructions on punched cards (1801). The Luddites (led by Ned Ludd) destroyed machinery in England (1811-1816); see also "What the Luddites Really Fought Against" by Richard Conniff, Smithsonian magazine (March 2011). Mary Shelley published the story of Frankenstein's monster (1818); the book Frankenstein; or, The Modern Prometheus is available from Project Gutenberg. Charles Babbage and Ada Byron (Lady Lovelace) designed a programmable mechanical calculating machine, the Analytical Engine (1832); a working model was built in 2002. George Boole developed a binary algebra representing (some) "laws of thought," published in The Laws of Thought (1854). Modern formal logic was developed by Gottlob Frege in his 1879 work Begriffsschrift and later clarified and expanded by Russell, Tarski, Gödel, Church and others.
20th century - First Half: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic. Russell, Ludwig Wittgenstein, and Rudolf Carnap led philosophy into logical analysis of knowledge. Torres y Quevedo built his chess machine 'Ajedrecista', using electromagnets under the board to play the endgame of rook and king against the lone king, possibly the first computer game (1912). Karel Capek's play "R.U.R." (Rossum's Universal Robots), first performed in 1921, introduced the word "robot".
1964: Danny Bobrow's dissertation at MIT (tech. report #1 from MIT's AI group, Project MAC) shows that computers can understand natural language well enough to solve algebra word problems correctly. Bert Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965: J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. (See Carl Hewitt's downloadable PDF file Middle History of Logic Programming). Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPA-net when a version that "simulated" the dialogue of a psychotherapist was programmed.
1966: Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU) demonstrated semantic nets. First Machine Intelligence workshop at Edinburgh - the first of an influential annual series organized by Donald Michie and others. Negative report on machine translation kills much work in Natural Language Processing (NLP) for many years.
1967: The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) was demonstrated interpreting mass spectra of organic chemical compounds: the first successful knowledge-based program for scientific reasoning. Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
Late 60s: Doug Engelbart invented the mouse at SRI.
1968: Marvin Minsky & Seymour Papert publish Perceptrons, demonstrating limits of simple neural nets.
1969: SRI robot, Shakey, demonstrated combining locomotion, perception and problem solving. Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. First International Joint Conference on Artificial Intelligence (IJCAI) held in Washington, D.C.
1970: Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-aided instruction based on semantic nets as the representation of knowledge. Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding. Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
Early 70's: Jane Robinson & Don Walker established influential Natural Language Processing group at SRI.
1971: Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1972: Prolog developed by Alain Colmerauer.
1973: The Assembly Robotics group at Edinburgh University builds Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1974: Ted Shortliffe's PhD dissertation on MYCIN (Stanford) demonstrated the power of rule-based systems for knowledge representation and inference in the domain of medical diagnosis and therapy. Sometimes called the first expert system. Earl Sacerdoti developed one of the first planning programs, ABSTRIPS, and developed techniques of hierarchical planning.
1975: Marvin Minsky published his widely-read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid 70's: Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in NLP. Alan Kay and Adele Goldberg (Xerox PARC) developed the Smalltalk language, establishing the power of object-oriented programming and of icon-oriented interfaces.
David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976: Doug Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely-guided search for interesting conjectures). Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
Late 70's: Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
1978: Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program. Herb Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing". The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments.
1979: Mycin program, initially written as Ted Shortliffe's Ph.D. dissertation at Stanford, was demonstrated to perform at the level of experts. Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. Drew McDermott & Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
1980's: Lisp Machines developed and marketed. First expert system shells and commercial applications.
1980: Lee Erman, Rick Hayes-Roth, Victor Lesser and Raj Reddy published the first description of the blackboard model, as the framework for the HEARSAY-II speech understanding system. First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.
1981: Danny Hillis designs the connection machine, a massively parallel architecture that brings new power to AI, and to computation in general. (Later founds Thinking Machines, Inc.)
1983: John Laird & Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on SOAR.
James Allen invents the Interval Calculus, the first widely used formalization of temporal events.
Mid 80's: Neural Networks become widely used with the Backpropagation algorithm (first described by Werbos in 1974).
1985: The autonomous drawing program, Aaron, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1987: Marvin Minsky publishes The Society of Mind, a theoretical description of the mind as a collection of cooperating agents.
1989: Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network), which grew into the system that drove a car coast-to-coast under computer control for all but about 50 of the 2850 miles.
1990's: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. Rod Brooks' COG Project at MIT, with numerous collaborators, makes significant progress in building a humanoid robot. TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players. The EQP theorem prover at Argonne National Labs proves the Robbins Conjecture in mathematics (October-November 1996). The Deep Blue chess program beats the reigning world chess champion, Garry Kasparov, in a widely followed match and rematch (May 11, 1997). NASA's Pathfinder mission made a successful landing and the first autonomous robotics system, Sojourner, was deployed on the surface of Mars (July 4, 1997). The first official RoboCup soccer match (1997) featured table-top matches with 40 teams of interacting robots and over 5,000 spectators. Web crawlers and other AI-based information-extraction programs become essential to widespread use of the world-wide web. Demonstration of an Intelligent Room and Emotional Agents at MIT's AI Lab. Initiation of work on the Oxygen Architecture, which connects mobile and stationary computers in an adaptive network.
2000's
Interactive robot pets (a.k.a. "smart toys") become commercially available, realizing the vision of the 18th-century novelty toy makers. Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing KISMET, a robot with a face that expresses emotions. Stanford's autonomous vehicle, Stanley, wins the DARPA Grand Challenge race (October 2005; see "In a Grueling Desert Race, a Winner, but Not a Driver"). The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
Views of AI fall into four categories: Thinking humanly, Thinking rationally, Acting humanly, Acting rationally.
Historically, all four approaches have been followed. As one might expect, a tension exists between approaches centered around humans and approaches centered around rationality. (We should point out that by distinguishing between human and rational behavior, we are not suggesting that humans are necessarily "irrational" in the sense of "emotionally unstable" or "insane." One merely need note that we often make mistakes; we are not all chess grandmasters even though we may know all the rules of chess; and, unfortunately, not everyone gets an A on the exam. Some systematic errors in human reasoning are cataloged by Kahneman et al.) A human-centered approach must be an empirical science, involving hypothesis and experimental confirmation. A rationalist approach involves a combination of mathematics and engineering. People in each group sometimes cast aspersions on work done in the other groups, but the truth is that each direction has yielded valuable insights. Let us look at each in more detail.
Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (Turing, 1950), was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he proposed is that the computer should be interrogated by a human via a teletype, and passes the test if the interrogator cannot tell if there is a computer or a human at the other end. Chapter 26 discusses the details of the test, and whether or not a computer is really intelligent if it passes. For now, programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English (or some other human language); knowledge representation to store information provided before or during the interrogation; automated reasoning to use the stored information to answer questions and to draw new conclusions; machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need computer vision to perceive objects, and robotics to move them about.
Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a human comes up primarily when AI programs have to interact with people, as when an expert system explains how it came to its diagnosis, or a natural language processing system has a dialogue with a user. These programs must behave according to certain normal conventions of human interaction in order to make themselves understood. The underlying representation and reasoning in such a system may or may not be based on a human model.
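To make the idea of a program that merely acts humanly concrete, here is a minimal, hypothetical sketch in Python in the spirit of ELIZA (mentioned in the timeline above): a handful of hand-written patterns map the interrogator's sentence to a canned reply. The patterns and replies are invented for illustration; this is rule-based dialogue, not Weizenbaum's actual program.

    import re

    # A few ELIZA-like rules: (regex pattern, reply template). Purely illustrative.
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def respond(sentence):
        """Return a canned reply by matching the first rule that fits, else a default."""
        s = sentence.lower().strip().rstrip(".!?")
        for pattern, template in RULES:
            match = re.match(pattern, s)
            if match:
                return template.format(*match.groups())
        return "Please go on."

    print(respond("I feel nervous about the exam."))  # Why do you feel nervous about the exam?
    print(respond("The weather is nice."))            # Please go on.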
Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection--trying to catch our own thoughts as they go by--or through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as
a computer program. If the program's input/output and timing behavior matches human behavior, that is evidence that some of the program's mechanisms may also be operating in humans. For example, Newell and Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content to have their program correctly solve problems. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. This is in contrast to other researchers of the same time (such as Wang (1960)), who were concerned with getting the right answers regardless of how humans might do it. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.
Thinking rationally: The laws of thought approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that always gave correct conclusions given correct premises. For example:
"Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of the mind, and initiated the field of logic.
The development of formal logic in the late nineteenth and early twentieth centuries, which we describe in more detail in Chapter 6, provided a precise notation for statements about all kinds of things in the world and the relations between them. (Contrast this with ordinary arithmetic notation, which provides mainly for equality and inequality statements about numbers.) By 1965, programs existed that could, given enough time and memory, take a description of a problem in logical notation and find the solution to the problem, if one exists. (If there is no solution, the program might never stop looking for it.) The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
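As a toy illustration of this logicist idea (not any particular historical theorem prover), the Socrates syllogism above can be written as one fact and one rule, and the conclusion derived mechanically by forward chaining. A minimal Python sketch follows; to keep it simple it works with ground (propositional) statements, so the rule is written out for Socrates rather than for all men, and the encoding is invented for this example.

    # Minimal forward chaining over ground Horn clauses (illustrative only).
    facts = {"man(Socrates)"}
    rules = [({"man(Socrates)"}, "mortal(Socrates)")]  # if every body fact holds, add the head

    def forward_chain(facts, rules):
        """Repeatedly apply rules until no new fact can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= derived and head not in derived:
                    derived.add(head)
                    changed = True
        return derived

    print(forward_chain(facts, rules))  # {'man(Socrates)', 'mortal(Socrates)'}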
There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Second, there is a big difference between being able to solve a problem "in principle" and doing so in practice. Even problems with just a few dozen facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. Although both of these obstacles apply to any attempt to build computational reasoning systems, they appeared first in the logicist tradition because the power of the representation and reasoning systems are well-defined and fairly well understood.
Acting rationally: The rational agent approach
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just something that perceives and acts. (This may be an unusual use of the word, but you will get used to it.) In this approach, AI is viewed as the study and construction of rational agents.
In the "laws of thought" approach to AI, the whole emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals, and then to act on that conclusion. On the other hand, correct inference is not all of rationality, because there are often situations where there is no provably correct thing to do, yet something must still be done. There are also ways of acting rationally that cannot be reasonably said to involve inference. For example, pulling one's hand off of a hot stove is a reflex action that is more successful than a slower action taken after careful deliberation.
All the "cognitive skills" needed for the Turing Test are there to allow rational actions. Thus, we need the ability to represent knowledge and reason with it because this enables us to reach good decisions in a wide variety of situations. We need to be able to generate comprehensible sentences in natural language because saying those sentences helps us get by in a complex society. We need learning not just for erudition, but because having a better idea of how the world works enables us to generate more effective strategies for dealing with it. We need visual perception not just because seeing is fun, but in order to get a better idea of what an action might achieve--for example, being able to see a tasty morsel helps one to move toward it.
The study of AI as rational agent design therefore has two advantages. First, it is more general than the "laws of thought" approach, because correct inference is only a useful mechanism for achieving rationality, and not a necessary one. Second, it is more amenable to scientific development than approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still may be far from achieving perfection.
The main function of an intelligent machine is decision making. Such machines require software that accepts information as input, understands it, weighs the various options, and comes to a conclusion. They apply reasoning to the situation at hand, and the software provides explanations and advice so that users can make informed decisions.
Visual input is a form of information that is both crucial and difficult to interpret. A system endowed with intelligence must therefore be able to read, interpret, and comprehend visual inputs and make decisions based on this information.
Some examples of these applications are –
A drone, a spying camera, or a reconnaissance aircraft takes photographs and videos that are used to build a map of an area or to extract spatial information. Clinical expert systems use cameras inside the body and are often used by doctors to diagnose patients. Computer software is used in police investigations for facial recognition: such a program can match the portrait produced by a forensic artist from a witness's description against the faces of suspects who have a record in the police system.
Some AI systems are designed to hear the voice and comprehend the language in order to understand the meaning of the words. This comprehension covers not only the words but also sentences, their meanings, and the tone with which humans speak to the system in various languages. The software is built to recognize different accents, dialects, slang words, background noise, changes in voice modulation, and changes in the voice due to pain, cold, etc.
Handwriting-recognition software is programmed to read text written with a pen or pencil on paper, or on a screen with a mouse or stylus. It recognizes the shapes of the letters and numbers and converts them into editable text that can be manipulated, changed, and stored, thus increasing the speed of the process.
Robots are machines programmed to perform the tasks commanded by a master. They are built with various sensors that read physical data from the real world as input: light, heat, temperature, movement, pressure, sound, obstructions, spatial coordinates, and bumps. They are equipped with efficient processors, multiple sensors, and large memory in order to exhibit intelligence. Besides this, they are capable of adapting to a changing environment and of learning from their mistakes.
In 1958, a chess-playing program called NSS (named after its authors, Newell, Shaw, and Simon) was developed for the IBM 704 computer. This program viewed chess in terms of search and was developed in Information Processing Language (IPL), also developed by the authors of NSS. IPL was the first language to be developed for the purpose of creating AI applications.
IPL was quickly replaced by an even higher-level language that is still in use almost 60 years later: LISP. IPL's esoteric syntax gave way to the simpler and more scalable LISP.
LISP (the LISt Processor) was created by John McCarthy in 1958. McCarthy's goal after the Dartmouth Summer Research Project on AI in 1956 was to develop a language for AI work, focused on the IBM 704 platform.
A key language of this period, developed in France, was Prolog (Programming in Logic). Prolog implemented a subset of logic called Horn clauses, allowing information to be represented as facts and rules and queries to be executed over these relations.
Prolog continues to find use in various areas and has many variants that incorporate features such as object orientation, the ability to compile to native machine code, and interfaces to popular languages (such as C).
One of the key applications of Prolog in this time frame was the development of expert systems (also called production systems). These systems support the codification of knowledge into facts, together with rules that are used to reason over this information.
Prolog and LISP weren't the only languages used to develop production systems. In 1985, the C Language Integrated Production System (CLIPS) was developed; it became one of the most widely used tools for building expert systems. CLIPS operates on a knowledge base of rules and facts, but it is written in C and provides an interface to C extensions for performance.
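As a rough, hypothetical sketch of the fact-and-rule style such shells provide (written in Python rather than CLIPS or Prolog syntax, with invented facts and rule names): working memory holds simple attribute-value facts, and a rule fires when all of its conditions are present, asserting a new fact.

    # Toy production system: facts are (attribute, value) pairs; a rule is
    # (name, conditions, conclusion). Illustrative only; not CLIPS or MYCIN.
    facts = {("temperature", "high"), ("rash", "present")}
    rules = [
        ("r1", {("temperature", "high")}, ("fever", "yes")),
        ("r2", {("fever", "yes"), ("rash", "present")}, ("advice", "see a doctor")),
    ]

    def run(facts, rules):
        """Fire rules until no rule can add anything new, then return working memory."""
        memory = set(facts)
        fired = True
        while fired:
            fired = False
            for name, conditions, conclusion in rules:
                if conditions <= memory and conclusion not in memory:
                    memory.add(conclusion)
                    print(f"rule {name} fired -> {conclusion}")
                    fired = True
        return memory

    run(facts, rules)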
The explosion of LISP dialects resulted in a unification of LISP into a new language called Common LISP, which had commonality with the popular dialects of the time.
IBM returned to games later in this period, but this time to something less structured than chess. The IBM Watson question-and-answer system (called DeepQA) was able to answer questions posed in natural language. The IBM Watson knowledge base was filled with 200 million pages of information, including the entire Wikipedia website. To parse the questions into a form that IBM Watson could understand, the IBM team used Prolog to parse natural-language questions into new facts that could be used in the IBM Watson pipeline. In 2011, the system competed in the quiz show Jeopardy! and defeated former winners of the game.
Artificial intelligence has found many applications, and demand for it is growing in fields such as marketing, healthcare, banking, and finance.
AI has generated deep interest in the field of marketing. Earlier, when choosing a product from an online store, it was often difficult to find the right one; now we can find the right product, along with related products, because the search engine seems to read our minds and returns the most appropriate results. Netflix is an example: searching for a comedy-thriller or suspense title brings up the exact movie along with related ones. With continuing development and advancement, more real-time applications will become possible in the field of marketing.
Artificial intelligence has established its presence in the banking sector, providing customer support through conversational chatbot applications that offer solutions directly to end customers. Moreover, AI in the banking sector helps protect us from online and credit card fraud. For example, HDFC, one of India's most trusted banks, has a fully AI-based chatbot called EVA, and IRCTC has launched its AI-based assistant "Ask Disha".
In finance, AI is applied to predict or forecast future trends from past data and other gathered information. Stock market prediction, for example, uses AI to try to increase profit margins.
AI has brought great change to the field of health care. For example, clinical decision-support systems have been developed to help prevent the risk of stroke. Beyond that, AI is used in radiology, imaging, disease diagnosis, etc.
Virtual assistants that use AI technology help with many everyday tasks. Many home appliances, from fans to LED lights, are now AI-enabled. A commonly used virtual assistant is Amazon Echo, which interprets spoken human language. Google Duplex uses machine learning and NLP (Natural Language Processing) to perform tasks such as making bookings, managing a schedule, and making reservations.
Intelligent Agents
The agent function (f) maps percept histories to actions, f : P* → A. The agent program runs on the physical architecture to produce the function f.
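A minimal sketch of this distinction, assuming a hypothetical table-driven design for a toy vacuum world: the lookup table stands in for the agent function (a mapping from percept sequences to actions), while the short Python program below is the agent program that implements it. The percepts and actions are invented for illustration.

    # The table plays the role of the agent function f: percept history -> action.
    TABLE = {
        ("dirty",): "suck",
        ("clean",): "move right",
        ("clean", "dirty"): "suck",
    }

    percept_history = []

    def table_driven_agent(percept):
        """Agent program: record the percept, then look the whole history up in the table."""
        percept_history.append(percept)
        return TABLE.get(tuple(percept_history), "do nothing")

    print(table_driven_agent("clean"))  # move right
    print(table_driven_agent("dirty"))  # suck  (history ('clean', 'dirty') is in the table)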
PEAS description of a medical diagnosis system:
Performance measure: Healthy patient, minimized costs and lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals), Alarms, Lights
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
PEAS description of a part-picking robot (a robot that picks up parts or tools and places them in a new location):
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
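A hedged sketch of what an agent program for this part-picking robot might look like: the percept is a summary of what the camera and joint sensors report, and the action is a command for the arm. The field names, part types, and bin names are invented for illustration.

    def part_picking_agent(percept):
        """Simple reflex agent for the part-picking robot (illustrative only).
        percept is a dict such as {"part_visible": True, "part_type": "bolt"}."""
        if percept.get("part_visible"):
            bin_name = "bolt bin" if percept.get("part_type") == "bolt" else "other bin"
            return f"pick up part and place it in the {bin_name}"
        return "wait for the next part on the conveyor belt"

    print(part_picking_agent({"part_visible": True, "part_type": "bolt"}))
    print(part_picking_agent({"part_visible": False}))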
Model-Based Reflex Agents
These agents maintain an internal state and use a model of the world to choose their actions.
Model − knowledge about "how things happen in the world".
Internal State − a representation of unobserved aspects of the current state, based on the percept history.
Updating the state requires information about −
how the world evolves independently of the agent, and
how the agent's actions affect the world.
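A minimal, hypothetical sketch of a model-based reflex agent for a two-square vacuum world: the internal state summarises the percept history, the update step uses a made-up model of how percepts relate to the world, and a simple rule then chooses the action. Names and rules are invented for illustration.

    class ModelBasedAgent:
        """Keeps an internal state that is updated from each percept (illustrative only)."""

        def __init__(self):
            # Internal state: where we are, and what we believe about each square.
            self.state = {"location": "A", "A_dirty": True, "B_dirty": True}

        def update_state(self, percept):
            # Model: the percept says where we are and whether that square is dirty.
            location, status = percept
            self.state["location"] = location
            self.state[location + "_dirty"] = (status == "dirty")

        def choose_action(self):
            loc = self.state["location"]
            if self.state[loc + "_dirty"]:
                return "suck"
            return "move to B" if loc == "A" else "move to A"

        def step(self, percept):
            self.update_state(percept)
            return self.choose_action()

    agent = ModelBasedAgent()
    print(agent.step(("A", "dirty")))  # suck
    print(agent.step(("A", "clean")))  # move to B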
Goal-Based Agents
These agents choose their actions so as to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is modeled explicitly and can therefore be modified.
Goal − a description of desirable situations.
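A hedged sketch of the goal-based idea: instead of a fixed condition-action rule, the agent searches for a sequence of actions that leads from the current situation to the goal. The tiny map of rooms below is invented for illustration; breadth-first search finds the plan.

    from collections import deque

    # Invented state space: rooms and the rooms reachable from each.
    MOVES = {
        "hall": ["kitchen", "office"],
        "kitchen": ["hall"],
        "office": ["hall", "lab"],
        "lab": ["office"],
    }

    def goal_based_plan(start, goal):
        """Breadth-first search for a sequence of rooms from start to the goal room."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for next_room in MOVES[path[-1]]:
                if next_room not in visited:
                    visited.add(next_room)
                    frontier.append(path + [next_room])
        return None  # the goal is unreachable

    print(goal_based_plan("hall", "lab"))  # ['hall', 'office', 'lab']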
Utility-Based Agents
These agents choose actions based on a preference (utility) for each state. Goals alone are inadequate when −
there are conflicting goals, out of which only a few can be achieved; and
goals have some uncertainty of being achieved, so the likelihood of success must be weighed against the importance of each goal.
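A minimal, hypothetical sketch of how a utility-based agent handles exactly these situations: each candidate action leads to outcomes with assumed probabilities and utilities, and the agent picks the action with the highest expected utility. The actions, probabilities, and utility values are invented for illustration.

    # Candidate actions mapped to (probability, utility) pairs for their outcomes.
    ACTIONS = {
        "take highway": [(0.7, 10), (0.3, -5)],  # usually fast, sometimes jammed
        "take back roads": [(1.0, 4)],           # slower but predictable
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    def choose_action(actions):
        """Pick the action whose expected utility is highest."""
        return max(actions, key=lambda name: expected_utility(actions[name]))

    for name, outcomes in ACTIONS.items():
        print(name, "->", expected_utility(outcomes))
    print("chosen:", choose_action(ACTIONS))  # take highway (5.5 vs 4.0)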