Let us briefly give a clearer sketch of what a human expects when purchasing a home robot
(inspired by project CoSy, Deliverable 10.1 on simplest scenarios and their requirements, University of Birmingham).
The robot is delivered, unpacked, and connected to a power source; its processor(s) start and need not ever be switched off again.
Once connected to its charging station, the robot will always re-find it or use other power outlets.
The user will initiate contact with the robot, for example by pressing the touch screen.
The robot starts sensing and should detect the human and the first features of the room.
At this stage it is expected that the robot has some knowledge about the typical
layout of an apartment or house, the typical types of rooms, and the typical items
of furniture in these rooms. Basic properties and functional constraints of these
items are also expected, e.g., that a door turns or maybe slides, chairs move
relatively easily, but refrigerators do not.
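The prior knowledge assumed above can be sketched as a small knowledge base. The following Python fragment is purely illustrative; all names, room types, and movability values are our assumptions, not part of the scenario.

```python
# Illustrative prior knowledge base (all entries are assumptions).
TYPICAL_ROOMS = {
    "kitchen": ["refrigerator", "dining table", "chair"],
    "living room": ["sofa", "table", "chair"],
    "bedroom": ["bed", "wardrobe"],
}

# Basic functional constraints: can the item normally be relocated?
MOVABLE = {
    "chair": True,          # chairs move relatively easily
    "sofa": True,           # heavy, but it can be pushed around
    "refrigerator": False,  # refrigerators do not
    "bed": False,
}

def expected_items(room_type):
    """Furniture typically expected in a room of the given type."""
    return TYPICAL_ROOMS.get(room_type, [])
```

Such defaults would only seed the robot's expectations; everything is later refined by what it actually observes in the apartment.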
The robot then asks whether s/he (there could be a personalisation step first) may be shown the apartment.
The user annotates the first room, e.g., by pressing a room label on the touch screen,
and then moves on to the second room, e.g., the living room.
Unnoticed by the user, the robot extracts the main room dimensions,
basic metric information such as room size, and specific features or landmarks,
and annotates the room with the given name. S/he then follows the human while mapping
the way into the next room. A narrow corridor or door is detected, but not yet annotated.
In the second room the robot kindly asks for the room name.
The user might also point out some main items, such as the dining
table or the big seating group and sofa, e.g., using a laser pointer.
Unnoticed by the user, the robot uses the second room's information to
annotate the door or narrowest passage between them as separating the two rooms.
Even if the robot is not completely able to classify the sofa, its main characteristics, such
as seat height, main dimensions, and location in the room, are stored.
If uncertain, s/he might ask the user to point either at the door
or at the sofa again for clarification.
This procedure is iterated for all rooms. Sooner or
later the robot will enter a room it has already visited. S/he might ask the user to verify this.
Learning the room layout and room details has filled the
hierarchical cognitive map with topological, metric, and annotation information.
Loops have been closed, perhaps even with verification. If a second entrance to a room
is detected, the robot could again ask for clarification if uncertain.
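Such a hierarchical map, combining topological connectivity with metric and annotation information, could be sketched as follows. The structure and names below are a minimal illustration of the idea, not the project's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """One node of the map: user annotation plus rough metric data."""
    label: str                                  # name given by the user
    size_m: tuple                               # approximate (width, depth) in metres
    landmarks: list = field(default_factory=list)

class CognitiveMap:
    """Topological layer: rooms linked by doors or narrow passages."""
    def __init__(self):
        self.rooms = {}
        self.passages = set()                   # unordered pairs of room labels

    def add_room(self, room):
        self.rooms[room.label] = room

    def connect(self, a, b):
        # Annotate the door/narrowest passage separating two rooms.
        self.passages.add(frozenset((a, b)))

    def connected(self, a, b):
        return frozenset((a, b)) in self.passages
```

In this picture, loop closing amounts to recognising that a newly entered room is an existing node rather than creating a duplicate one.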
It is now time to execute a task; e.g., the user tests whether the robot can go to the sofa.
The task might come from one of the four application domains, e.g., fetching an object as a commodity service,
a security task checking all rooms, delivering food, or driving an impaired person's
wheelchair to the tea table or back.
If only one sofa has been pointed out, the robot will most likely go to the one in
the living room. If the robot has also detected and classified a sofa in the bedroom,
s/he might ask "the one in the living room?".
(Since the vocabulary is limited, the use of a user-independent speech interface might be feasible.)
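This disambiguation step can be illustrated in a few lines. The function below is a hypothetical sketch: it returns the target room when the reference is unique, and a clarification question when several candidates exist.

```python
def resolve_reference(label, known_objects):
    """Resolve a spoken reference such as 'the sofa' (illustrative sketch).

    known_objects: (object_label, room_label) pairs already stored in the map.
    Returns the room to drive to, a clarification question, or None if unknown.
    """
    rooms = [room for obj, room in known_objects if obj == label]
    if len(rooms) == 1:
        return rooms[0]                          # unambiguous: go there
    if rooms:
        return "the one in the %s?" % rooms[0]   # ambiguous: ask back
    return None                                  # unknown object: ask the user
```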
Several days later, the children have played extensively; even the sofa has been moved.
The robot is again asked to go to the sofa. S/he safely avoids all clutter
and beloved toys, goes to the room, and uses its expectation of where to look first
for the sofa. If the sofa is not confirmed there, s/he looks around. If it is not found at all, the robot
asks the user where it is; if it is still in the room, the robot shall locate it and update its representation.
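The search strategy described here, look first where the object was last seen, then scan the room, finally ask the user, can be sketched as a short routine. The `observe` and `ask_user` callables are assumed interfaces to perception and dialogue, not real APIs.

```python
def locate_object(expected_pose, candidate_poses, observe, ask_user):
    """Expectation-driven object search (illustrative sketch).

    observe(pose) -> True if the object is perceived at that pose (assumed).
    ask_user()    -> a pose supplied by the user, or None (assumed).
    Returns the pose where the object was found, so the map can be updated.
    """
    # 1. Use the expectation: look where the object was represented last.
    if observe(expected_pose):
        return expected_pose
    # 2. Otherwise look around the room at other plausible spots.
    for pose in candidate_poses:
        if observe(pose):
            return pose
    # 3. Still not found: ask the user and update the representation.
    return ask_user()
```

Whatever pose is returned would then overwrite the stored sofa location in the cognitive map.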
Again, some days later, a new sofa has been added in the children's room.
During a task the robot accidentally detects a new large structure, or possibly classifies
the item as a sofa. In both cases it confirms its assumptions by asking the user at an appropriate time.
These scenario details shall outline how showing the robot around is envisioned.
They shall also link the practical aspects with the required S&T developments.