
Friday, July 23, 2010

Robotics - What is Robotics?

Roboticists develop man-made mechanical devices that can move by themselves, whose motion must be modelled, planned, sensed, actuated and controlled, and whose motion behaviour can be influenced by “programming”. Robots are called “intelligent” if they succeed in moving in safe interaction with an unstructured environment, while autonomously achieving their specified tasks.

This definition implies that a device can only be called a “robot” if it contains a movable mechanism, influenced by sensing, planning, actuation and control components. It does not imply that a minimum number of these components must be implemented in software, or be changeable by the “consumer” who uses the device; for example, the motion behaviour can have been hard-wired into the device by the manufacturer.

So, the presented definition, as well as the rest of the material in this part of the WEBook, covers not just “pure” robotics or only “intelligent” robots, but rather the somewhat broader domain of robotics and automation. This includes “dumb” robots such as: metal and woodworking machines, “intelligent” washing machines, dish washers and pool cleaning robots, etc. These examples all have sensing, planning and control, but often not in individually separated components. For example, the sensing and planning behaviour of the pool cleaning robot have been integrated into the mechanical design of the device, by the intelligence of the human developer.

Robotics is, to a very large extent, all about system integration: achieving a task by an actuated mechanical device, via an “intelligent” integration of components, many of which it shares with other domains, such as systems and control, computer science, character animation, machine design, computer vision, artificial intelligence, cognitive science, biomechanics, etc. In addition, the boundaries of robotics cannot be clearly defined, since its “core” ideas, concepts and algorithms are being applied in an ever increasing number of “external” applications, and, vice versa, core technologies from other domains (vision, biology, cognitive science or biomechanics, for example) are becoming crucial components in more and more modern robotic systems.

This part of the WEBook makes an effort to define what exactly is that above-mentioned core material of the robotics domain, and to describe it in a consistent and motivated structure. Nevertheless, this chosen structure is only one of the many possible “views” that one may want to have on the robotics domain.

In the same vein, the above-mentioned “definition” of robotics is not meant to be definitive or final, and it is only used as a rough framework to structure the various chapters of the WEBook. (A later phase in the WEBook development will allow different “semantic views” on the WEBook material.)

Components of robotic systems

This figure depicts the components that are part of all robotic systems. The purpose of this Section is to describe the semantics of the terminology used to classify the chapters in the WEBook: “sensing”, “planning”, “modelling”, “control”, etc.

The real robot is some mechanical device (“mechanism”) that moves around in the environment, and, in doing so, physically interacts with this environment. This interaction involves the exchange of physical energy, in some form or another. Both the robot mechanism and the environment can be the “cause” of the physical interaction through “Actuation”, or experience the “effect” of the interaction, which can be measured through “Sensing”.


Robotics as an integrated system of control interacting with the physical world.

Sensing and actuation are the physical ports through which the “Controller” of the robot determines the interaction of its mechanical body with the physical world. As mentioned before, the controller can, in one extreme, consist of software only; in the other extreme, everything can be implemented in hardware.

Within the Controller component, several sub-activities are often identified:

Modelling. The input-output relationships of all control components can (but need not) be derived from information that is stored in a model. This model can have many forms: analytical formulas, empirical look-up tables, fuzzy rules, neural networks, etc.

The name “model” often gives rise to heated discussions among different research “schools”, and the WEBook is not interested in taking a stance in this debate: within the WEBook, “model” is to be understood with its minimal semantics: “any information that is used to determine or influence the input-output relationships of components in the Controller.”

The other components discussed below can all have models inside. A “System model” can be used to tie multiple components together, but it is clear that not all robots use a System model. The “Sensing model” and “Actuation model” contain the information with which to transform raw physical data into task-dependent information for the controller, and vice versa.
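
As a concrete illustration of this minimal sense of “model”, the sketch below (in Python, with entirely hypothetical calibration numbers) implements a Sensing model as one of the forms mentioned above: an empirical look-up table that maps a raw sensor reading to a task-level quantity by linear interpolation.

```python
# A "model" in the WEBook's minimal sense: any information that determines a
# component's input-output relationship. Here that information is an empirical
# look-up table, one of the forms the text mentions.

def make_lookup_model(table):
    """table: sorted list of (raw_reading, calibrated_value) pairs."""
    def model(raw):
        # Clamp readings outside the calibrated range.
        if raw <= table[0][0]:
            return table[0][1]
        if raw >= table[-1][0]:
            return table[-1][1]
        # Linear interpolation between the two neighbouring entries.
        for (x0, y0), (x1, y1) in zip(table, table[1:]):
            if x0 <= raw <= x1:
                return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
    return model

# Hypothetical calibration of an infrared distance sensor (volts -> metres).
sensing_model = make_lookup_model([(0.4, 1.5), (1.0, 0.6), (2.0, 0.3)])
print(sensing_model(0.7))  # halfway through the 0.4-1.0 segment: about 1.05 m
```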

Planning. This is the activity that predicts the outcome of potential actions, and selects the “best” one. Almost by definition, planning can only be done on the basis of some sort of model.
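
The prediction-and-selection loop just described can be sketched in a few lines; everything here (the 1-D motion model, the cost function, the action set) is illustrative rather than taken from the WEBook.

```python
# Planning as defined above: predict the outcome of each candidate action with
# a model, score the predicted outcome, and select the "best" action.

def plan(state, actions, model, cost):
    """Return the action whose model-predicted outcome has the lowest cost."""
    return min(actions, key=lambda a: cost(model(state, a)))

# Hypothetical 1-D example: state is a position, actions are velocity commands,
# the model predicts the next position, and cost is distance to a goal at x=10.
model = lambda x, v: x + v    # trivial motion model
cost = lambda x: abs(10 - x)  # distance to the goal
print(plan(0, [-1, 0, 1, 2], model, cost))  # -> 2 (moves fastest toward goal)
```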

Regulation. This component processes the outputs of the sensing and planning components, to generate an actuation setpoint. Again, this regulation activity could or could not rely on some sort of (system) model.

The term “control” is often used instead of “regulation”, but it is impossible to clearly identify the domains that use one term or the other. The meaning used in the WEBook will be clear from the context.
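
As a minimal sketch of the regulation activity, the fragment below combines a setpoint from planning with a measurement from sensing into an actuation command, using a proportional controller; the gain value is purely illustrative.

```python
# Regulation as described above: process the outputs of the planning component
# (a desired value) and the sensing component (a measured value) to generate
# an actuation setpoint. A proportional controller is the simplest instance.

def regulate(planned_setpoint, sensed_value, gain=0.8):
    """Proportional regulation: actuate in proportion to the error."""
    error = planned_setpoint - sensed_value
    return gain * error  # actuation setpoint sent to the actuators

# One control cycle: planning wants position 5.0, sensing reports 3.0.
print(regulate(5.0, 3.0))  # -> 1.6
```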

Scales in robotic systems

The above-mentioned “components” description of a robotic system is to be complemented by a “scale” description, i.e., the following system scales have a large influence on the specific content of the planning, sensing, modelling and control components at one particular scale, and hence also on the corresponding sections of the WEBook.

Mechanical scale. The physical volume of the robot determines to a large extent the limits of what can be done with it. Roughly speaking, a large-scale robot (such as an autonomous container crane or a space shuttle) has different capabilities and control problems than a macro robot (such as an industrial robot arm), a desktop robot (such as those “sumo” robots popular with hobbyists), or milli, micro, or nano robots.

Spatial scale. There are large differences between robots that act in 1D, 2D, 3D, or 6D (three positions and three orientations).

Time scale. There are large differences between robots that must react within hours, seconds, milliseconds, or microseconds.

Power density scale. A robot must be actuated in order to move, but actuators need space as well as energy, so the ratio between both determines some capabilities of the robot.

System complexity scale. The complexity of a robot system increases with the number of interactions between independent sub-systems, and the control components must adapt to this complexity.

Computational complexity scale. Robot controllers are inevitably running on real-world computing hardware, so they are constrained by the available number of computations, the available communication bandwidth, and the available memory storage.

Obviously, these scale parameters never apply completely independently to the same system. For example, a system that must react at a microseconds time scale cannot be of macro mechanical scale or involve a high number of communication interactions with subsystems.

Background sensitivity

Finally, no description of even scientific material is ever fully objective or context-free, in the sense that it is very difficult for contributors to the WEBook to “forget” their background when writing their contribution. In this respect, robotics has, roughly speaking, two faces: (i) the mathematical and engineering face, which is quite “standardized” in the sense that a large consensus exists about the tools and theories to use (“systems theory”), and (ii) the AI face, which is rather poorly standardized, not because of a lack of interest or research efforts, but because of the inherent complexity of “intelligent behaviour.” The terminology and systems-thinking of both backgrounds are significantly different, hence the WEBook will accommodate sections on the same material but written from various perspectives. This is not a “bug”, but a “feature”: having the different views in the context of the same WEBook can only lead to a better mutual understanding and respect.

Research in engineering robotics follows the bottom-up approach: existing and working systems are extended and made more versatile. Research in artificial intelligence robotics is top-down: assuming that a set of low-level primitives is available, how could one apply them in order to increase the “intelligence” of a system. The border between both approaches shifts continuously, as more and more “intelligence” is cast into algorithmic, system-theoretic form. For example, the response of a robot to sensor input was considered “intelligent behaviour” in the late seventies and even early eighties. Hence, it belonged to A.I. Later it was shown that many sensor-based tasks such as surface following or visual tracking could be formulated as control problems with algorithmic solutions. From then on, they did not belong to A.I. any more.

Robotics - Robotics Technology

Most industrial robots have at least five basic parts. This section discusses the basic technologies of a robot.

Robotics - Types of Robots

Ask a number of people to describe a robot and most of them will answer that it looks like a human. Interestingly, a robot that looks like a human is probably the most difficult robot to make. It is usually a waste of time, and not the most sensible approach, to model a robot after a human being. A robot needs above all to be functional and designed with qualities that suit its primary tasks. The task at hand determines whether the robot is big or small, able to move or nailed to the ground. Each and every task implies different qualities, form and function; a robot needs to be designed with the task in mind.

Mobile Robots


Mobile robots are able to move; usually they perform tasks such as searching areas. A prime example is the Mars Explorer, specifically designed to roam the Martian surface.

Mobile robots are a great help in searching collapsed buildings for survivors. They are used for tasks where people cannot go, either because it is too dangerous or because people cannot reach the area that needs to be searched.

Mobile robots can be divided into two categories:

Rolling Robots: Rolling robots have wheels to move around. These robots can move around quickly and easily, but they are only useful in flat areas; rocky terrain gives them a hard time. Flat terrain is their territory.

Walking Robots: Robots on legs are usually brought in when the terrain is rocky and difficult to enter with wheels. Legged robots have a hard time shifting their balance and keeping themselves from tumbling. That is why most walking robots have at least four legs, and usually six or more: even when they lift one or more legs they still keep their balance. The development of legged robots is often modeled after insects or crayfish.

Stationary Robots

Robots are not only used to explore areas or imitate a human being. Most robots perform repetitive tasks without ever moving an inch, and most robots ‘work’ in industrial settings. Especially dull and repetitive tasks are suitable for robots: a robot never grows tired, and it will perform its duty day and night without ever complaining. When the tasks at hand are done, the robots can be reprogrammed to perform other tasks.

Autonomous Robots

Autonomous robots are self-supporting, or in other words self-contained. In a way they rely on their own ‘brains’.

Autonomous robots run a program that gives them the opportunity to decide which action to perform depending on their surroundings. At times these robots even learn new behavior: they start out with a short routine and adapt it to be more successful at the task they perform, and the most successful routine is repeated; as such their behavior is shaped. Autonomous robots can learn to walk or to avoid obstacles they find in their way. Think of a six-legged robot: at first the legs move at random, but after a little while the robot adjusts its program and performs a pattern that enables it to move in a direction.
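
The adaptation described above can be sketched as a simple trial-and-error loop: score each candidate routine, keep the best one, and mutate it. The scoring function below is a hypothetical stand-in for real sensor feedback that rewards alternating leg phases.

```python
import random

# Trial-and-error shaping of a routine: start with random leg commands, keep
# the routine that moved the robot furthest, and slightly vary it each trial.

random.seed(0)

def distance_travelled(routine):
    # Hypothetical feedback: alternating left/right leg phases move furthest.
    return sum(1 for a, b in zip(routine, routine[1:]) if a != b)

best, best_score = None, -1
for trial in range(200):
    if best is None:
        # First trial: a completely random routine of 8 leg commands.
        candidate = [random.choice("LR") for _ in range(8)]
    else:
        # Later trials: mutate each command of the best routine with prob. 0.2.
        candidate = [c if random.random() > 0.2 else random.choice("LR")
                     for c in best]
    score = distance_travelled(candidate)
    if score > best_score:
        best, best_score = candidate, score

print("".join(best), best_score)  # converges toward an alternating gait
```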

Remote-control Robots

An autonomous robot is, despite its autonomy, not a very clever or intelligent unit. Its memory and brain capacity are usually limited; in that respect an autonomous robot can be compared to an insect.

When a robot needs to perform more complicated yet undetermined tasks, an autonomous robot is not the right choice.

Complicated tasks are still best performed by human beings with real brainpower, so a person can guide a robot by remote control. That person can then perform difficult and usually dangerous tasks without being at the spot where the tasks are performed: to detonate a bomb, it is safer to send the robot into the danger area.

Dante 2, a NASA robot designed to explore volcanoes via remote control.

Virtual Robots

BEAM Robots

BEAM is short for Biology, Electronics, Aesthetics and Mechanics. BEAM robots are made by hobbyists. BEAM robots can be simple and very suitable for starters.

Biology

Robots are often modeled after nature, and a lot of BEAM robots look remarkably like insects. Insects are easy to build in mechanical form. Not only are the mechanics an inspiration: the limited behavior of an insect can easily be programmed into a limited amount of memory and processing power.

Electronics

Like all robots, BEAM robots contain electronics; without electronic circuits the motors cannot be controlled. Lots of BEAM robots also use solar power as their main source of energy.

Aesthetics

A BEAM robot should look nice and attractive. BEAM robots are not just printed circuit boards with some parts attached; they have an appealing and original appearance.

Mechanics

In contrast with expensive big robots, BEAM robots are cheap and simple, built out of recycled material and running on solar energy.

Robotics - Robotics History

Definition of a 'Robot'

According to the Robot Institute of America (1979) a robot is:
"A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks".

A more inspiring definition can be found in Webster. According to Webster a robot is:
"An automatic device that performs functions normally ascribed to humans or a machine in the form of a human."

First use of the word 'Robot'

The acclaimed Czech playwright Karel Capek (1890-1938) made the first use of the word ‘robot’, from the Czech word for forced labor or serf. Capek was reportedly several times a candidate for the Nobel prize for his works, and was very influential and prolific as a writer and playwright.

The word Robot was introduced in his play R.U.R. (Rossum's Universal Robots), which opened in Prague in January 1921.

In R.U.R., Capek poses a paradise where the machines initially bring many benefits, but in the end bring an equal amount of blight in the form of unemployment and social unrest.

The play was an enormous success and productions soon opened throughout Europe and the U.S. R.U.R's theme, in part, was the dehumanization of man in a technological civilization.

You may find it surprising that the robots were not mechanical in nature but were created through chemical means. In fact, in an essay written in 1935, Capek strongly fought against the idea that this was at all possible and, writing in the third person, said:

"It is with horror, frankly, that he rejects all responsibility for the idea that metal contraptions could ever replace human beings, and that by means of wires they could awaken something like life, love, or rebellion. He would deem this dark prospect to be either an overestimation of machines, or a grave offence against life."
[The Author of Robots Defends Himself - Karel Capek, Lidove noviny, June 9, 1935, translation: Bean Comrada]

There is some evidence that the word robot was actually coined by Karel's brother Josef, a writer in his own right. In a short letter, Capek writes that he asked Josef what he should call the artificial workers in his new play.

Karel suggested Labori, which he thought too 'bookish'; his brother muttered "then call them Robots" and turned back to his work. And so, from a curt response, we have the word robot.

First use of the word 'Robotics'

The word 'robotics' was coined by the science-fiction author Isaac Asimov, who first used it in his short stories of the early 1940s.

Three Laws of Robotics

In those stories Asimov also proposed his three "Laws of Robotics", and he later added a 'zeroth law'.

Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher order law.
Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher order law.
Law Three: A robot must protect its own existence as long as such protection does not conflict with a higher order law.

The First Robot: 'Unimate'

The Unimate Puma 500 manipulator.

After the technology explosion during World War II, a historic meeting occurred in 1956 between George C. Devol, a successful inventor and entrepreneur, and engineer Joseph F. Engelberger. Over cocktails the two discussed the writings of Isaac Asimov.

Together they made a serious and commercially successful effort to develop a real, working robot. They persuaded Norman Schafler of Condec Corporation in Danbury that they had the basis of a commercial success.

Engelberger started a manufacturing company, 'Unimation', which stood for universal automation, and so the first commercial company to make robots was formed. Devol wrote the necessary patents. Their first robot was nicknamed the 'Unimate'. As a result, Engelberger has been called the 'father of robotics.'

The first Unimate was installed at a General Motors plant to work with heated die-casting machines. In fact most Unimates were sold to extract die castings from die casting machines and to perform spot welding on auto bodies, both tasks being particularly hateful jobs for people.

Both applications were commercially successful, i.e., the robots worked reliably and saved money by replacing people. An industry was spawned and a variety of other tasks were also performed by robots, such as loading and unloading machine tools.

Ultimately Westinghouse acquired Unimation and the entrepreneurs' dream of wealth was achieved. Unimation is still in production today, with robots for sale.

The robot idea was hyped to the skies and became high fashion in the boardroom. Presidents of large corporations bought them, for about $100,000 each, just to put into laboratories to "see what they could do"; in fact these sales constituted a large part of the robot market. Some companies even reduced their ROI (return-on-investment) criteria for robots to encourage their use.

Modern Industrial Robots

The image of the "electronic brain" as the principal part of the robot was pervasive. Computer scientists were put in charge of robot departments of robot customers and of factories of robot makers. Many of these people knew little about machinery or manufacturing but assumed that they did.

(There is a common delusion of electrical engineers that mechanical phenomena are simple because they are visible. Variable friction, the effects of burrs, minimum and redundant constraints, nonlinearities, variations in work pieces, accommodation to hostile environments and hostile people, etc. are like the "Purloined Letter" in Poe's story, right in front of the eye, yet unseen.) They also had little training in the industrial engineer's realm of material handling, manufacturing processes, manufacturing economics and human behavior in factories.

As a result, many of the experimental tasks in those laboratories were made to fit their robot's capabilities but had little to do with the real tasks of the factory.

Modern industrial arms have increased in capability and performance through controller and language development, improved mechanisms, sensing, and drive systems. In the early to mid 80's the robot industry grew very fast primarily due to large investments by the automotive industry.

The quick leap into the factory of the future turned into a plunge when the integration and economic viability of these efforts proved disastrous. The robot industry has only recently recovered to mid-80's revenue levels.

In the meantime there has been an enormous shakeout in the robot industry. In the US, for example, only one US company, Adept, remains in the production industrial robot arm business. Most of the rest went under, consolidated, or were sold to European and Japanese companies.

In the research community the first automata were probably Grey Walter's machina (1940s) and the Johns Hopkins Beast. Teleoperated or remote-controlled devices had been built even earlier, with at least the first radio-controlled vehicles built by Nikola Tesla in the 1890s.

Tesla is better known as the inventor of the induction motor, AC power transmission, and numerous other electrical devices. Tesla had also envisioned smart mechanisms that were as capable as humans.

An excellent biography of Tesla is Margaret Cheney's Tesla, Man Out of Time, Published by Prentice-Hall, c1981.

SRI's Shakey navigated highly structured indoor environments in the late 60's and Moravec's Stanford Cart was the first to attempt natural outdoor scenes in the late 70's.

From that time there has been a proliferation of work in autonomous driving machines that cruise at highway speeds and navigate outdoor terrains in commercial applications.

Fully functioning androids (robots that look like human beings) are many years away due to the many problems that must be solved. However, real, working, sophisticated robots are in use today and they are revolutionizing the workplace.

These robots do not resemble the romantic android concept of robots. They are industrial manipulators and are really computer controlled "arms and hands". Industrial robots are so different to the popular image that it would be easy for the average person not to recognize one.

Benefits

Robots offer specific benefits to workers, industries and countries. If introduced correctly, industrial robots can improve the quality of life by freeing workers from dirty, boring, dangerous and heavy labor. It is true that robots can cause unemployment by replacing human workers, but robots also create jobs: robot technicians, salesmen, engineers, programmers and supervisors.

The benefits of robots to industry include improved management control and productivity and consistently high quality products. Industrial robots can work tirelessly night and day on an assembly line without any loss in performance.

Consequently, they can greatly reduce the costs of manufactured goods. As a result of these industrial benefits, countries that effectively use robots in their industries will have an economic advantage on the world market.

Robotics - Current Research

The robots of tomorrow will be the direct result of the robotic research projects of today. The goal of most robotic research projects is the advancement of abilities in one or more of the technological areas covered below: artificial intelligence, effectors and mobility, and sensor detection.

These technological advances will lead to improvements and innovations in the application of robotics to industry, medicine, the military, space exploration, underwater exploration, and personal service. The research projects listed below are only a few of many robotic research projects worldwide.

Artificial Intelligence

Human Behavior and Emotion


Cog, a humanoid robot from MIT.

Two of the many research projects of the MIT Artificial Intelligence department include an artificial humanoid called Cog, and his baby brother, Kismet. What the researchers learn while putting the robots together will be shared to speed up development.

Once finished, Cog will have everything except legs, whereas Kismet has only a 3.6-kilogram head that can display a wide variety of emotions. To do this Kismet has been given movable facial features that can express basic emotional states that resemble those of a human infant. Kismet can thus let its "parents" know whether it needs more or less stimulation--an interactive process that the researchers hope will produce an intelligent robot that has some basic "understanding" of the world.

This approach of creating AI by building on basic behaviors through interactive learning contrasts with older methods, in which a computer is loaded with lots of facts about the world in the hope that intelligence will eventually emerge.

Cog is 2 meters tall, complete with arms, hands and all three senses--including touch-sensitive skin. Its makers will eventually try to use the same sort of social interaction as Kismet to help Cog develop intelligence equivalent to that of a two-year-old child.

Kismet is an autonomous robot designed for social interactions with humans and is part of the larger Cog Project. This project focuses not on robot-robot interactions, but rather on the construction of robots that engage in meaningful social exchanges with humans. By doing so, it is possible to have a socially sophisticated human assist the robot in acquiring more sophisticated communication skills and helping it learn the meaning these acts have for others.

Kismet with its creator, Cynthia Breazeal
of MIT. Breazeal also helped create Cog.

Kismet has a repertoire of responses driven by emotive and behavioral systems. The hope is that Kismet will be able to build upon these basic responses after it is switched on or "born", learn all about the world and become intelligent.

Crucial to its drives are the behaviors that Kismet uses to keep its emotional balance. For example, when there are no visual cues to stimulate it, such as a face or toy, it will become increasingly sad and lonely and look for people to play with.

Any advances made with Kismet will be passed on to its big brother Cog, the robot brainchild of Rodney Brooks, head of MIT's AI department.

Hardware and Software Brains

In mimicking human intelligence, the goal is to give robots a brain capable of reasoning. An important pioneer in the field of AI is Marvin Minsky.

Without a brain capable of processing input, a robot cannot react to its environment. A brain can be simulated in hardware or in software. Most robots at present have software brains, meaning a computer with a program running: these robots are connected to, or equipped with, a computer. A drawback is the limited number of processes that can run on today's computers, and the single-purpose programs running on them. The programs cannot change themselves; in other words, learning is not possible.

A brain made out of hardware, or a number of processors, is closer to reality. Such a brain consists of several chips that act both independently and as a group. The general belief is that the real brain works as a neural network of many independent processing units. Every chip has a small program of its own; it processes information but also passes it on to other chips. The program changes on a continuous basis. The network of chips is quick and adapts, so, in contrast with the software brain, it will learn.
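
The chips-passing-information idea can be illustrated with a toy network of independent units, each running a tiny "program" of its own and handing its result to a neighbour; this sketches the architecture only, not any real chip.

```python
# A toy network of independent processing units. Each unit applies its own
# small "program" (here: scale the signal) and passes the result on.

class Unit:
    def __init__(self, weight):
        self.weight = weight  # the unit's "small program": scale the input
        self.next = None      # neighbour to pass the result to

    def process(self, signal):
        out = self.weight * signal
        return self.next.process(out) if self.next else out

# Chain three units together and feed a signal through the network.
units = [Unit(0.5), Unit(2.0), Unit(3.0)]
units[0].next, units[1].next = units[1], units[2]
print(units[0].process(4.0))  # 4.0 * 0.5 * 2.0 * 3.0 -> 12.0
```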

An example of a hardware brain is Robokoneko, the robocat from Genobyte; its brain comes from a machine, the CAM-Brain Machine.



Effectors and Mobility

Autonomous Flying Vehicle Project

Robot helicopter research began at the University of Southern California in 1991 with the formation of the Autonomous Flying Vehicle Project and continues to the present day. The first robot built was the AFV (Autonomous Flying Vehicle). The AVATAR (Autonomous Vehicle Aerial Tracking And Retrieval) was created in 1994. The current robot, the second generation AVATAR (Autonomous Vehicle Aerial Tracking And Reconnaissance), was developed in 1997. The 'R' in AVATAR changed to reflect a change in robot capabilities.

Fish Robot

Without question, the fish is the best swimmer in the world. That is why the Ship Research Institute of Japan decided to build the Fish Robot. This project hopes to apply what is learned while building and researching with the Fish Robot to the design and construction of ships.

Muscles

Robots use electric motors for movement. Motor parts are relatively cheap and last long. Motors are used to move arms, turn wheels, or move other parts, for instance cameras. Motors are less useful in walking robots: in that particular case they prove to be a weak part, and a jumping robot is a major challenge for motor parts. Human beings use muscles, which contract and expand, to move around. A muscle receives a signal from the brain and contracts, causing a joint, like the knee, to move.

A material that mimics a muscle is still a dream. Nitinol, an alloy of the metals nickel and titanium, will shrink if an electric current travels through it, but it contracts by at most 8%.

The downside: nitinol is very expensive, and the contraction is too small for it to be used to make walking robots. For the time being, walking robots will use not muscles or motors but pneumatic or hydraulic technologies.


Robocup

To demonstrate advances in research and to stimulate scientists to share progress, the Robocup competition is organized a few times a year. Robocup is a competition between robot soccer teams. Movement, pattern recognition, where is the ball, where is the goal, who is on my team: all this and more is needed to score a goal. A simple game becomes a challenge for a robot team. Besides moving and finding the ball and team members, the robots need to define a strategy and take lots of decisions in a short time frame. Robocup has produced many advancements in both robotic effectors and sensors. Who could have imagined that soccer would contribute to robot research, where robots eventually will be smart and capable of cooperating with others to reach a goal?

Sensor Detection

Robotic Vision

Machine vision involves devices that sense images and processes that interpret those images. Thomas Braunl of The University of Western Australia provides an excellent example of robotic vision research in both hardware and software.

EyeBot is a controller for mobile robots with wheels, walking robots or flying robots. It consists of a powerful 32-Bit microcontroller board with a graphics display and a digital grayscale or color camera. The camera is directly connected to the robot board (no frame grabber). This allows programmers to write powerful robot control programs without a big and heavy computer system and without having to sacrifice vision.

  • Ideal basis for programming of real time image processing

  • Integrated digital camera (grayscale or color)

  • Large graphics display (LCD)

  • Can be extended with your own mechanics and sensors into a full mobile robot

  • Programmed from IBM-PC or Unix workstation,
    programs are downloaded via serial line (RS-232) into RAM or Flash-ROM

  • Programming in C or assembly language

Improv is a tool for basic real time image processing with low resolution, e.g. suitable for mobile robots. It has been developed for PCs with Linux operating system. Improv works with a number of inexpensive low-resolution digital cameras (no framegrabber required), and is available from Joker Robotics. Improv displays the live camera image in the first window, while subsequent image operations can be applied to this image in five more windows. For each sub-window, a sequence of image processing routines may be specified.
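
A minimal sketch of the kind of real-time, low-resolution processing such systems target: threshold a tiny grayscale frame to find a bright object and report its centroid. The frame below is a hand-made stand-in for a camera image, not output from EyeBot or Improv.

```python
# Find a bright object in a low-resolution grayscale image: threshold the
# pixels, then compute the centroid of the pixels that passed the threshold.

def bright_centroid(image, threshold=128):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > threshold]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# A hand-made 4x4 "camera frame" with a bright 2x2 blob in the middle.
frame = [[10, 10, 10, 10],
         [10, 200, 220, 10],
         [10, 210, 230, 10],
         [10, 10, 10, 10]]
print(bright_centroid(frame))  # -> (1.5, 1.5)
```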

Sensor Based Motion Planning

Sensor based planning incorporates sensor information, reflecting the current state of the environment, into a robot's planning process, as opposed to classical planning, where full knowledge of the world's geometry is assumed prior to the planning event. Sensor based planning is important because: (1) the robot often has no a priori knowledge of the world; (2) the robot may have only a coarse knowledge of the world because of limited memory; (3) the world model is bound to contain inaccuracies, which can be overcome with sensor based planning strategies; and (4) the world is subject to unexpected occurrences or rapidly changing situations.

A large number of classical path planning methods already exist. However, many of these techniques are not amenable to sensor based interpretation. It is not possible to simply add a step to acquire sensory information and then construct a plan from the acquired model using a classical technique, since the robot needs a path planning strategy in the first place to acquire the world model.

The first principal problem in sensor based motion planning is the find-goal problem. In this problem, the robot seeks to use its on-board sensors to find a collision free path from its current configuration to a goal configuration. In the first variation of the find goal problem, which we term the absolute find-goal problem, the absolute coordinates of the goal configuration are assumed to be known. A second variation on this problem is described below.

The second principal problem in sensor based motion planning is sensor-based exploration, in which a robot is not directed to seek a particular goal in an unknown environment, but is instead directed to explore the a priori unknown environment in such a way as to see all potentially important features. The exploration problem can be motivated by the following application. Imagine that a robot is to explore the interior of a building that has collapsed in an earthquake, in order to search for human survivors. It is clearly impossible to have knowledge of the building's interior geometry prior to the exploration. Thus, the robot must be able to see, with its on-board sensors, all points in the building's interior while following its exploration path. In this way, no potential survivors will be missed by the exploring robot. Algorithms that solve the find-goal problem are not useful for exploration because the location of the "goal" (a human survivor in our example) is not known. A second variation on the find-goal problem, motivated by this scenario and intermediate between the find-goal and exploration problems, is the recognizable find-goal problem. In this case, the absolute coordinates of the goal are not known, but it is assumed that the robot can recognize the goal if it comes within line of sight. The aim of the recognizable find-goal problem is to explore an unknown environment so as to find a recognizable goal. If the goal is reached before the entire environment is searched, the search procedure terminates.
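The find-goal idea can be sketched as a toy "plan, move one step, sense, replan" loop. The grid world, the one-cell sensing range, and the BFS planner below are all invented for illustration; practical sensor based planners (the Bug family, D*) are considerably more refined.

```python
from collections import deque

WORLD = ["....#...",   # '#' = obstacle; the robot does not know the map
         "....#...",   # a priori and must discover the wall by sensing
         "....#...",
         "........"]

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(WORLD) and 0 <= nc < len(WORLD[0]):
            yield (nr, nc)

def plan(start, goal, known_obstacles):
    """BFS shortest path over the belief: unknown cells are assumed free."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in neighbours(cur):
            if nxt not in prev and nxt not in known_obstacles:
                prev[nxt] = cur
                frontier.append(nxt)
    return None

def find_goal(start, goal):
    pos, known, trace = start, set(), [start]
    while pos != goal:
        for n in neighbours(pos):          # sense the four adjacent cells
            if WORLD[n[0]][n[1]] == '#':
                known.add(n)
        path = plan(pos, goal, known)      # replan over current knowledge
        pos = path[1]                      # take a single step
        trace.append(pos)
    return trace

route = find_goal((0, 0), (0, 7))
```

The robot initially plans straight through the (unknown) wall, discovers it cell by cell, and detours through the gap in the bottom row, never stepping on an obstacle.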

Control Systems

Hierarchical Behavior Control

Development of a hierarchical behavior control scheme for rovers and mobile robots is currently underway. It attempts to model and control mobile systems using distinct rule-based controllers and decision-making subsystems that collectively represent a hierarchical decomposition of autonomous vehicle behavior. This research approach employs fuzzy logic, behavior control, and genetic programming as tools for developing autonomous robots. Complex, multi-variable fuzzy rule-based systems are developed in the framework of behavior-based control for autonomous navigation. Genetic programming methods are used to computationally evolve fuzzy coordination rules for low-level motion behaviors. In addition, embedded control applications are being developed for microrover navigation using conventional microprocessors and specialized fuzzy VLSI chips.
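A minimal sketch of fuzzy coordination of motion behaviors follows, assuming two hand-written behaviors (seek goal, avoid obstacle) and invented triangular membership breakpoints; in the research described above such coordination rules would be evolved by genetic programming rather than written by hand.

```python
def mu_near(d):
    """Membership of 'obstacle is near': 1 below 0.5 m, 0 above 2.0 m."""
    if d <= 0.5:
        return 1.0
    if d >= 2.0:
        return 0.0
    return (2.0 - d) / 1.5

def mu_far(d):
    return 1.0 - mu_near(d)

def steer(goal_bearing, obstacle_dist, obstacle_bearing):
    """Defuzzified blend of the two behaviors' steering commands (degrees)."""
    seek = goal_bearing            # seek-goal behavior: turn toward the goal
    avoid = -obstacle_bearing      # avoid behavior: turn away from the obstacle
    w_near, w_far = mu_near(obstacle_dist), mu_far(obstacle_dist)
    return (w_near * avoid + w_far * seek) / (w_near + w_far)
```

With a distant obstacle the goal-seeking command passes through unchanged; as the obstacle closes in, the avoidance command smoothly takes over instead of switching abruptly, which is the usual argument for fuzzy blending over hard rule selection.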

Nano Technology and Medical Applications

The movie Innerspace shows a miniature spaceship travelling through the arteries of a human. It is a nice illustration of the promise of nanotechnology: a technique in which miniature robots go to places humans will never be able to travel. Nanotechnology is a new science in which robotics plays a major part. Because of the very tiny mechanical parts, many questions need to be solved: can a nano robot repair itself? How do you control it, and how does it move? Will it be able to work autonomously? Will it be able to shift in shape? Is a nano robot a mechanical device, or is it more like a microprocessor? Once these questions are answered, nanotechnology will change medical science forever. In many cases, surgery will be performed by one or more nano robots travelling inside the human body.

Intelligent Systems for Communication Networks

Third-generation wireless networks such as the Universal Mobile Telecommunications System (UMTS) are being developed to support wideband services. A major scenario is to support such services for a user roaming between a cellular terrestrial network and a satellite Personal Communication Network (PCN), while maintaining some degree of quality-of-service continuity during the hand-over process. This project will focus on developing new protocols that use artificial-intelligence techniques to support such hand-overs. Seamless roaming and user tracking using intelligent systems will also be investigated.

Active Vibration Control

In recent years, the reduction of undesirable vibrations in dynamic systems such as airplanes, vehicles, tall buildings and off-shore structures has become a crucial issue, due both to increased social awareness of comfort and to the ever-increasing heights of new inner-city buildings. With the advent of new construction materials and methods, buildings and structures are becoming taller and more flexible. With a good design and under normal loading conditions, the response of these structures to vibrations will remain in the safe and comfortable domain. However, there is no guarantee that in-service loads experienced by tall buildings and structures will always be in the allowed range. Undesirable vibration levels can be reached under large environmental loads such as winds and earthquakes, adversely affecting human comfort and even structural safety. It is becoming critically important to suppress the dynamic responses of tall buildings and structures to strong winds and earthquakes, not only for their safety but also for their serviceability. When tall buildings and structures are flexible, design performance may become impossible to achieve by conventional design practice. Hence, additional devices are installed in tall buildings and structures to compensate for the dynamic responses caused by environmental loads. As a result, new concepts and methods of structural protection have been proposed.

Due to the recent development of sensors and digital control techniques, active control methods for the dynamic responses of tall buildings and structures have been developed, and some of them have been implemented in actual buildings. The precondition, however, is that the implementation is simple enough to run in real time. In engineering applications where rule-based systems provide efficient results, the implementation is often easier than its complex conventional counterpart.

Active vibration control in structural engineering has become known as an area of research in which the vibrations and motions of tall buildings and structures can be controlled or modified by a control system acting through some external energy supply. Compared with passive vibration control, active vibration control can more effectively keep tall buildings and structures safe and comfortable under various environmental loads such as strong winds or earthquake hazards. This implies that active vibration control can be effective and adaptive over a much wider frequency range, and also for transient vibration, which is why it attracts the interest of researchers not only in structural engineering but also in control engineering. Among the many methods that have been proposed are active mass drivers (AMDs), active tendon systems (ATS), and active variable stiffness systems (AVSs).
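The effect of an active control force can be illustrated with a toy single-degree-of-freedom simulation: a mass-spring-damper "building" released from an initial displacement, with and without an idealised feedback force standing in for an AMD. All parameters and gains below are invented for illustration, not taken from any real structure.

```python
def simulate(kp=0.0, kd=0.0, x0=0.1, steps=5000, dt=0.001):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = u.

    u = -kp*x - kd*v is an idealised active control force (kp = kd = 0
    gives the purely passive structure). Returns the peak displacement
    after the initial transient.
    """
    m, c, k = 1000.0, 200.0, 100000.0   # mass [kg], damping, stiffness
    x, v, peak = x0, 0.0, 0.0
    for i in range(steps):
        u = -kp * x - kd * v            # active control force
        a = (-c * v - k * x + u) / m
        v += a * dt                     # update velocity first (stable)
        x += v * dt
        if i > 100:                     # track the peak after release
            peak = max(peak, abs(x))
    return peak

passive = simulate()                    # lightly damped structure
active = simulate(kp=50000.0, kd=5000.0)
```

The feedback force effectively adds stiffness (kp) and damping (kd), so the controlled peak displacement comes out well below the passive one, which is the basic promise of active control.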

Hyper-Redundant Robotics Systems

Robot manipulators which have more than the minimum number of degrees-of-freedom are termed "kinematically redundant," or simply "redundant." Redundancy in manipulator design has been recognized as a means to improve manipulator performance in complex and unstructured environments. "Hyper-redundant" robots have a very large degree of kinematic redundancy, and are analogous in morphology and operation to snakes, elephant trunks, and tentacles. There are a number of very important applications where such robots would be advantageous.

While "snake-like" robots have been investigated for nearly 25 years, they have remained a laboratory curiosity. There are a number of reasons for this: (1) previous kinematic modeling techniques have not been particularly efficient or well suited to the needs of hyper-redundant robot task modeling; (2) the mechanical design and implementation of hyper-redundant robots has been perceived as unnecessarily complex; and (3) hyper-redundant robots are not anthropomorphic, and therefore pose interesting programming problems. Our research group has undertaken a broadly based program to overcome the obstacles to practical deployment of hyper-redundant robots.

Robotics - Future of Robotics

Brooks Forecasts Future of Robotic Technology

April 13, 2005

by Maya Rao

Sun Staff Writer

Artificial intelligence and robotics expert Rod Brooks forecasts major changes in the next 50 years. Much in the way that computers have revolutionized society, robots may take on an increasingly significant role in people's lives. As part of the Gerard Salton Lecture Series, Brooks delivered a talk yesterday entitled "Flesh and Machines: Robots and People" to discuss potential applications of intelligent robots.

Brooks, who directs the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, asserted that we have more in common with robots, and machines in general, than we think.

"Mankind has had a long history of retreat from 'special-ness,'" Brooks said.

Centuries ago, humans discovered that Earth was not, in fact, the center of the universe. Later, humans and animals were found to have common ancestors. DNA as the fundamental mechanism of life means that humans and yeast are somewhat similar.

"Over time men have become less special and more like technology," Brooks said. "We only have 25,000 genes -- even potatoes have more than that!"

Brooks showed videos of several robots designed in his lab. In one scene, Brooks's colleague Cynthia "plays" with a robot she designed, Kismet.

"We see her moving that eraser, then the robot moving it. They're taking turns." At least, Brooks added, that's what the average observer would think. "But when we thought about it, she was doing all the work. She was giving the robot motion cues. That set us off on reading literature on child development."

Like Cynthia, mothers give their infants motion cues. They engage in activities with their children that the children cannot do by themselves, but can be trained to do with their caregiver's help.

"What the robot sees drives what it does," Brooks said.

Inside these robots exists a three-dimensional space; the robot's emotions are a point in that space. The robot uses its emotional state to generate how it reacts to certain objects, and can display emotion through facial expressions.

In another experiment suggestive of robots' similarity to children in their earliest stages of development, the lab called in various people to speak with the robot.

"When a mother interacts with her child, she generates messages through her voice: praise, attention, prohibition, and soothing are the four basic messages," Brooks said.

In the video, when one woman said, "Good job, Kismet! Look at my smile!" in an encouraging voice, the robot smiled proudly. When another said, "No, no, that's not appropriate" in a disparaging tone, Kismet lowered his head, his large ears drooping.

Although robots like Kismet don't actually understand the meanings of words, they are able to vocally replicate phonemes. As people teach various words to Kismet and Cog, another of CSAIL's robots, the robots can repeat them and identify them with their corresponding objects.

Brooks acknowledges that the development of intelligent robots is still in beginning stages, although significant progress has been made in areas like navigation. However, he said, "I think beyond navigation, robots have new possibilities which will be important."

As the world's demographics shift in the next half century, robots can be useful in fields such as manufacturing, agriculture and elderly assistance. Brooks imagines being able to roboticize large agriculture machines for the maintenance of individual plants. Such robots could do menial and time-consuming tasks like pruning and picking.

"Europe and the U.S. import low-cost labor now ... But that labor may not be there in 50 years," Brooks said.

Second, robot arms could be used for fixed automation, which is particularly useful in manufacturing. Such robots would require the dexterity of a six-year-old, said Brooks. Third, he hoped that robots could be developed to provide in-home care to the elderly, who will soon comprise a much larger demographic in places like North America, Europe, Korea and Japan.

The future, however, holds many challenges to realizing certain robotic applications. "Will we accept robots?" Brooks asked the audience.

It may be hard, he explained, for humans to come to grips with machines that may equal or surpass their own capabilities. Few people want to admit that their emotions can exist within a machine.

"I'm not saying current robots have real emotions, but if they did, it would be hard for people to accept ... and there would certainly be legislation against it!" Brooks said as the audience laughed.

"I liked the lecture very much," said Hugo Fierro grad. "I already took some courses on robots, but never thought about the philosophical aspect of it. I liked his predictions, although they're very futuristic."

"It was a lot of fun. I heard some very interesting and provocative ideas," said Prof. Graeme Bailey, computer science.

And as for the possibility of Brooks' vision becoming reality someday? "I hope so," Bailey said. "If one was to answer no to that, we have a somewhat dismal future for ourselves."

Robotics - Interesting points about Robotics


Robotics Minicourse: Basics of Robotics

Welcome to this Open University robotics minicourse, which is aimed at anyone who has a general interest in robots and now wishes to learn more about robotics. We hope you'll find the course entertaining, informative and worthwhile!

The minicourse concludes with some multiple-choice questions.

Building robots involves the development of a wide range of skills, including creative thinking, design, mechanics, electronics and programming - all of which are highly valued in industry. Your interest in the subject could lead you into an exciting and fulfilling career at the cutting edge of technology!

You can follow up this minicourse by taking the Open University's short course T184 Robotics and the meaning of life: a practical guide to things that think, which will further develop your knowledge and practical skills in robotics. There is a version of this course offered by some schools. To find out more about this, see the Open University Young Applicants in Schools Scheme.

What are robots?

Nowadays, the word robot is often applied to any device that works automatically or by remote control, especially a machine (automaton) that can be programmed to perform tasks normally done by people.

Before the 1960s, robot usually meant a manlike mechanical device (mechanical man or humanoid) capable of performing human tasks or behaving in a human manner. Today robots come in all shapes and sizes, including small robots made of LEGO, and larger wheeled robots that play robot football with a full-size ball.

What many robots have in common is that they perform tasks that are too dull, dirty, delicate or dangerous for people. Usually, we also expect them to be autonomous, that is, to work using their own sensors and intelligence, without the constant need for a human to control them. Looked at this way, a radio controlled aeroplane is not a robot, nor are the radio controlled combat robots that appear on television. However, there is no clear dividing line between fully autonomous robots and human-controlled machines. For example, the robots that perform space missions on planets like Mars may get instructions from humans on Earth, but since it can take about ten minutes for messages to get back and forth, the robot has to be autonomous during that time.

Where did the word robot originate?

The word robot was introduced in 1920 in a play by Karel Capek called R.U.R., or Rossum's Universal Robots. Robot comes from the Czech word robota, meaning forced labour or drudgery. In the play, human-like mechanical creatures produced in Rossum's factory are docile slaves. Since they are just machines, the robots are badly treated by humans. One day a misguided scientist gives them emotions, and the robots revolt, kill nearly all humans and take over the world. However, because they are unable to reproduce themselves, the robots are doomed to die. In the end, the sole surviving human creates a male and a female robot to perpetuate their species.

Have people always been fascinated by human-like machines?

The roots of robotics can be traced back to Greek mythology and Jewish mysticism. Several myths from Ancient Greece tell of statues being brought to life. According to Aristotle, the legendary Greek inventor Daedalus (whose son Icarus flew too close to the sun) created animated statues that guarded the entrance to the Labyrinth in Crete. The Jewish Talmud describes the making of a golem, a clay model brought to life by the chanting of magical combinations of letters from the Hebrew alphabet. A similar idea can be found in medieval alchemy, in which the philosopher's stone was believed to have a life-giving force. Mary Shelley drew upon such traditions in her 1818 novel Frankenstein.

What are the Laws of Robotics?

The term robotics was coined in the 1940s by science fiction writer Isaac Asimov. In a series of stories and novels, he imagined a world in which mechanical beings were mankind's devoted helpmates. They were constrained to obey what have become known as Asimov's Laws of Robotics:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s book of short stories, I, Robot, investigates the interplay between these laws. In one of the stories, there is a scandal because a candidate for mayor is suspected of being a robot – no-one has ever seen him eat, drink, or sleep.

However, the mayor claims he is not a robot, and the story has many twists and turns before we find out. Is it possible that we would allow robots to run our lives? After all, if they obey the First Law of Robotics, they will never harm us. In this respect they could be better than human politicians! This will not be a practical problem for many years, but who knows what the future holds?

What was the first practical robot?

A prototype industrial robot arm named Unimate (designed by George Devol and Joseph Engelberger) was sold to General Motors in 1959. It plucked hot automobile parts out of a die-casting machine and quenched them in water.

The 1960s and 1970s saw a revolution in manufacturing as robots replaced humans in many repetitive jobs. However, these robots were not intelligent by today's standards. Usually they were programmed by humans guiding them through their movements, and they had very little decision-making capability. There are still many robots like this in factories today, but the trend is towards more intelligent general-purpose robots that can do more than just paint a panel or screw in a bolt.

Are space probes robots?

Space probes hurtling through the solar system may not seem like robots, but they fully merit that name by performing programmed tasks over long periods without direct human supervision. Operating in the vacuum of space and withstanding exposure to radiation and extremes of temperature, they explore places not yet accessible to humans.

On Christmas Day 2003, the Beagle 2 mission to Mars attempted to land on the Martian surface. Had the landing gone smoothly, the robotic shell would have opened and deployed solar panels to collect electricity, as well as a ‘PAW’ to collect rock and soil samples for a small analytic laboratory. The lander also had a ‘mole’ that would have burrowed into the surface to collect samples for analysis. Beagle 2 was the invention of Professor Colin Pillinger of the Open University.

What can't robots do?

It is very difficult to give a robot the ability to perform a wide variety of tasks, move around in cluttered surroundings, recognise objects in the ‘real world’, understand normal speech, and think for itself. These are exciting areas of current research in robotics and artificial intelligence.

For example, the robot shown here has the problem of deciding where to cross the river. How can it make this decision? How would you do it? Perhaps you have come across a similar situation before. Perhaps you could look it up in a guide book. Perhaps you would reason that B is better than C because the water is likely to be shallower? Perhaps you would choose A, because you tried it before. All these ways of making decisions come very naturally to humans, but they are very difficult to program into robots.

Another great problem in robotics is getting robots to understand language. This is very important in problem-solving. For example, the four cards below have a letter on one side and a number on the other. The rule is: if a card has a vowel (a, e, i, o, u) on one side, then it has an even number on the other. Which cards do you have to turn over to see if this is true? Think about your answer, then point to a card to turn it over.

letter E with number 6 on reverse

letter K with number 1 on back

number 4 with letter U on reverse

number 7 with letter A on reverse

Now consider the following cards where the rule is ‘every time I go to Paris I go by plane’. Which cards have to be turned over to test this? Again, think about your answer before turning the card over.

Paris card

Madrid card

Plane card

Train card

The answer to the first question is that you have to turn over the E to see if it has an even number on the back and you have to turn over the 7 to check that it does not have a vowel on the back. In an experiment, only 12% of people got this second part right (did you?).

The answer to the second question is much easier. Of course you have to turn over the Paris card to check that it has the word plane on the back, but now it’s much more obvious that you have to turn over the train card to make sure it does not have Paris on the back. In the experiment mentioned above, 60% of people got the second part right.

These problems are logically the same, so the experimenters drew the conclusion that the meaning of the symbols is an important part of problem solving. Since robots have very poor language capabilities, their ability to use this kind of reasoning is very limited.
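This card puzzle is the classic Wason selection task, and its logic is easy to state in code: under a rule "if P then Q", only the cards showing P (the back must be Q) or showing not-Q (the back must not be P) can falsify it. A small checker for the vowel/even-number rule (the face encoding here is invented for illustration):

```python
def cards_to_turn(visible):
    """Return the visible faces that must be checked for 'vowel -> even'."""
    turn = set()
    for face in visible:
        if face.isalpha() and face.lower() in "aeiou":
            turn.add(face)    # a vowel's hidden side must be even
        elif face.isdigit() and int(face) % 2 == 1:
            turn.add(face)    # an odd number's hidden side must not be a vowel
    return turn
```

For the cards E, K, 4 and 7 this picks out exactly E and 7; the machine applies material implication mechanically, whereas humans succeed mainly when the symbols carry familiar meaning.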

Another of the great problems in robotics is getting robots to ‘see’. Although it is easy to put a camera on a robot, it is much more difficult to get the robot to understand what is in an image. Most humans have miraculously good vision, able to resolve great ambiguity in scenes. It has proved much more difficult to get robots to understand what is in their universe, and machine vision remains one of the big unsolved problems in robotics research.

There are other problems in robotics that make progress slow. For example, your body is covered with skin, which contains millions of sensors that allow you to do many fantastically precise things. Try typing at a computer with gloves on: the lack of touch feedback makes it very difficult. Your muscles also give you very fine control. Even if you are rather clumsy, you are probably much better at manipulating objects than the average robot. Most people would not let a robot dust their favourite china.

Will robots ever be as good as humans?

Many futurists believe that robots will eventually and inevitably become more capable than humans, but some experts in artificial intelligence assert that machines will never be able to develop the consciousness and emotions needed for reasoning and creativity.

Nonetheless, there are already commercially available robots that can live in our houses and do basic chores for us. Robots are very good at processing certain kinds of information, and they are ideally suited to answering the telephone and being controlled over the Internet.

The International RoboCup Federation has set itself the challenge of having a team of humanoid robot football players beat the human world champions by 2050. Can you imagine that? It means that robots will have to become as nimble and skilful as Beckham. It will require the invention of many new materials – for example, a human soccer player could be badly hurt by colliding with a robot made of metal. It will also require an enormous improvement in machine vision. If you play sports such as football, tennis, or even snooker, next time you play think about the huge amount of information that comes through your eyes.

What is the future for robots?

Robotic pets, lawn mowers and vacuum cleaners are already on the market. Following the success of their Aibo robot dog, Sony have developed a humanoid entertainment robot named QRIO. Honda's Asimo welcomes customers to their showrooms in Japan. Toshiba have built a robot that can play volleyball. Fujitsu's HOAP-2 can perform Japanese Sumo wrestling stances, as well as moves from the Chinese martial art taijiquan.

Rapid advances are being made in robotic control systems, artificial intelligence, neural networks, and in the miniaturisation, sophistication and reliability of electronic circuitry, sensors and actuators. These are all contributing to a steady increase in the capabilities of robots. Robots currently under development may become widely used in the food, clothing, nuclear and offshore industries, healthcare, farming, transportation, mining and defence.

Will robots take over from humans?

This is a popular science fiction theme, and the answer depends on whether robots will ever attain consciousness and emotions. In stories like 2001: A Space Odyssey and Terminator, humans always find a way to outwit intelligent machines that try to take over control. That's fiction, however, and fact is often stranger than fiction!

The suggestion that robots will take over because they might become more intelligent than humans overlooks one critical fact: the people who have power in human societies are usually not the most intelligent in the obvious, intellectual way. They have different kinds of ‘human intelligence’, including the ability to understand other people, and to influence their behaviour.

The sensible answer to the question as to whether robots will take over is that they probably won’t in the near future. There are many reasons for this. The first is that the robots of today have puny brains compared to humans, and they do not have the ability to organise in the same way as humans. Our societies are very complex and allow us to achieve many very advanced things. It is unlikely that robots could overtake us in the near future. Even so, it is something that we should keep an eye on, since all scientists have a responsibility not to do things that damage society.

However, for the most part, robots play a very positive role in our societies, and we can expect them to be used in many ways that make life better for us all.

If you would like to, why not try out the quiz associated with this minicourse?

Alternatively, if this minicourse has taken your interest, why not look at the introductory 10 point credit course, T184 - Robotics and the Meaning of Life