Autonomous Rational Agents:

In many cases, it is not accurate to talk about a particular program or a particular robot, because the combination of software and hardware in some intelligent systems is considerably more complex than either on its own. Instead, we will follow the lead of Norvig and Russell and describe AI in terms of the paradigm of autonomous rational agents. We will apply the definitions from chapter 2 of Norvig and Russell's textbook, starting with these two:

  • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.
  • A rational agent is one that does the right thing.

We see that the word 'agent' covers humans (where the sensors are the senses, such as eyes and ears, and the effectors are the physical body parts, such as hands and legs), robots (where the sensors are devices such as cameras and touch pads and the effectors are the various motors) and software running on computers (where the sensors are the keyboard and mouse and the effectors are the monitor and speakers).
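
As a concrete illustration of the definition above, the sketch below models an agent as something that maps percepts (arriving from its sensors) to actions (carried out by its effectors). The names used here (Agent, ReflexAgent, act, "NoOp") are illustrative choices for this example, not code taken from Norvig and Russell.

from abc import ABC, abstractmethod


class Agent(ABC):
    """Anything that perceives its environment through sensors and acts on it through effectors."""

    @abstractmethod
    def act(self, percept):
        """Map the latest percept (from the sensors) to an action (for the effectors)."""
        ...


class ReflexAgent(Agent):
    """A trivial agent that reacts only to the current percept via condition-action rules."""

    def __init__(self, rules):
        self.rules = rules  # mapping: percept -> action

    def act(self, percept):
        # Do nothing when no rule matches the current percept.
        return self.rules.get(percept, "NoOp")


agent = ReflexAgent({("A", "Dirty"): "Suck"})
print(agent.act(("A", "Dirty")))  # prints "Suck"

The point of the abstraction is that the agent's internals can be anything at all; from the outside we only see the percept-to-action mapping.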

To determine whether an agent has acted rationally, we need an objective measure of how successful it has been, and we need to decide when to apply that measure. When designing an agent, it is essential to think hard about how to evaluate its performance, and this evaluation should be independent of any internal measures the agent itself uses (e.g. as part of a heuristic search - see the next lecture). Performance should be judged in terms of how rationally the agent acted, which depends not only on how well it did at a particular task, but also on what the agent perceived of its environment, what it knew about its environment and what actions it was actually able to carry out.
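
To make the separation between the agent and its evaluation concrete, the sketch below runs a simple reflex agent in a small two-square "vacuum world" in the spirit of chapter 2, and scores it with a performance measure that belongs to the environment, not to the agent. The environment, the scoring rule (one point per clean square per time step) and the function names are assumptions made for this example only.

import random


def reflex_vacuum_agent(percept):
    """Hand-written condition-action rules for a two-square world ('A' and 'B')."""
    location, state = percept
    if state == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"


def run_and_score(agent_fn, steps=10, seed=0):
    """Run the agent, then judge it with an external performance measure."""
    rng = random.Random(seed)
    status = {"A": rng.choice(["Dirty", "Clean"]),
              "B": rng.choice(["Dirty", "Clean"])}
    location = "A"
    score = 0

    for _ in range(steps):
        # The agent sees only its percept: (current location, status of that square).
        action = agent_fn((location, status[location]))

        # The environment carries out the action chosen for the effectors.
        if action == "Suck":
            status[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"

        # External performance measure, unknown to the agent:
        # one point per clean square per time step (an assumption for this example).
        score += sum(1 for s in status.values() if s == "Clean")

    return score


print(run_and_score(reflex_vacuum_agent))  # a score out of a maximum of 20 for 10 steps

Note that the agent never sees the score; the examiner applies the measure from outside, which is exactly the independence between internal heuristics and external evaluation described above.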
