WO2024090078A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2024090078A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
action
robot
information processing
information
Prior art date
Application number
PCT/JP2023/034036
Other languages
English (en)
Japanese (ja)
Inventor
イシュワルプラカシュ ヘッバル
Original Assignee
Sony Group Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2024090078A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/40Control within particular dimensions
    • G05D1/43Control of position or course in two dimensions

Definitions

  • the present invention relates to an information processing device, an information processing method, and a program.
  • this disclosure proposes an information processing device, information processing method, and program that enable good interaction with people.
  • an information processing device having an operation control unit that moves a robot in accordance with a user's action, and an information presentation unit that presents, from the moving robot, a guide video showing a recommended action procedure for the user within the visual field of the user performing the action.
  • an information processing method in which the information processing of the information processing device is executed by a computer, and a program that causes a computer to realize the information processing of the information processing device.
  • FIG. 1 is a diagram for explaining an outline of behavior support by a robot.
  • FIG. 2 is a block diagram showing an example of a system configuration of a robot.
  • FIG. 3 is a diagram showing an example in which a robot is applied to action support in a dance game.
  • FIG. 4 is a diagram illustrating an example of a flow of information processing.
  • FIG. 5 is a diagram illustrating an example of a flow of information processing.
  • FIG. 6 is a diagram illustrating an example of a flow of information processing.
  • FIG. 7 is a diagram showing a modified example of step SA13.
  • FIG. 8 is a diagram showing an example in which a robot is applied to behavior support during emergency evacuation.
  • FIG. 9 is a diagram illustrating an example of a flow of information processing.
  • FIG. 10 is a diagram illustrating an example of a flow of information processing.
  • FIG. 11 is a diagram illustrating an example of a flow of information processing.
  • FIG. 12 is a diagram illustrating an example in which a robot is applied to behavior support in factory work.
  • FIG. 13 is a diagram illustrating an example of a flow of information processing.
  • FIG. 14 is a diagram illustrating an example of a flow of information processing.
  • FIG. 15 is a diagram illustrating an example of a flow of information processing.
  • FIG. 1 is a diagram for explaining an outline of behavior support by a robot MB.
  • the robot MB is an autonomous mobile robot with the ability to follow the user US.
  • the robot MB follows the user US to support the user US's actions.
  • Action support is performed using a guide video GV.
  • the guide video GV is a video showing the recommended action steps for the user US.
  • Action steps are information that show what action should be taken, where, and at what timing.
  • the action steps include routine information regarding the content, method, location, and timing of the action to be taken.
  • action procedures include operation procedures for an immersive game, work procedures at a work site, evacuation procedures in an emergency, or first aid procedures for a person who suddenly falls ill.
  • the robot MB determines the movement route RTU of the user US and the movement route RTM of the robot MB based on a preset action procedure.
  • the robot MB estimates the visual field of the user US during the action.
  • the robot MB presents a guide image GV in the visual field of the user US while moving along the movement route RTM.
  • an image showing the dance steps of a dance game is shown as the guide image GV.
  • the guide video GV includes an explanatory video EP (see Figure 3) and a navigation video NV.
  • the explanatory video EP is a video that shows the content of the action that the user US should take or the method of action.
  • the navigation video NV is a video that indicates the location where the user US should take the action.
  • the explanatory video EP is displayed on a direct-view display 50 (see Figure 2) mounted on the robot MB.
  • the navigation video NV is projected from a projector 40 mounted on the robot MB around the movement route RTU of the user US.
  • the robot MB estimates the field of view of the user US during the action, and presents the guide video GV in the estimated field of view.
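  • As a rough illustration of such a visual-field check, the following Python sketch tests whether a candidate presentation position lies inside a simple cone-shaped field of view estimated from the user's position and heading; the function names, the 90-degree opening angle, and the 6 m range are assumptions made for the example, not details taken from the disclosure.

      import numpy as np

      def in_visual_field(user_pos, user_heading_rad, point, fov_deg=90.0, max_range=6.0):
          """True if `point` lies inside an assumed cone-shaped visual field."""
          offset = np.asarray(point, dtype=float) - np.asarray(user_pos, dtype=float)
          dist = np.linalg.norm(offset)
          if dist == 0.0 or dist > max_range:
              return False
          bearing = np.arctan2(offset[1], offset[0])
          # Smallest signed angle between the user's heading and the candidate point.
          diff = np.arctan2(np.sin(bearing - user_heading_rad),
                            np.cos(bearing - user_heading_rad))
          return abs(diff) <= np.radians(fov_deg) / 2.0

      # Example: a projection spot 2 m ahead and slightly to the left of the user.
      print(in_visual_field(user_pos=(0.0, 0.0), user_heading_rad=0.0, point=(2.0, 0.5)))
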
  • FIG. 2 is a block diagram showing an example of the system configuration of the robot MB.
  • the robot MB has an ECU (Electronic Control Unit) 11, a communication unit 12, a map information accumulation unit 13, a memory unit 14, an autonomous driving support unit 15, an HMI (Human Machine Interface) 16, a vehicle control unit 17, an external recognition sensor 30, a projector 40, and a display 50.
  • the ECU 11, the communication unit 12, the map information accumulation unit 13, the memory unit 14, the autonomous driving support unit 15, the HMI 16, and the vehicle control unit 17 function as an information processing device 10 that processes various information such as sensor information and map information.
  • the individual functional parts that make up the robot MB are connected to each other so that they can communicate with each other via a communication network 19.
  • the communication network 19 is composed of an in-vehicle communication network or bus that conforms to digital two-way communication standards such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), FlexRay (registered trademark), and Ethernet (registered trademark).
  • different networks within the communication network 19 may be used depending on the type of data being transmitted.
  • CAN may be used for data related to vehicle control
  • Ethernet may be used for large-volume data.
  • each functional unit may be directly connected using wireless communication intended for relatively short-range communication, such as NFC (Near Field Communication) or Bluetooth (registered trademark), without going through the communication network 19.
  • the ECU 11 is composed of various processors, such as a CPU (Central Processing Unit) and an MPU (Micro Processing Unit).
  • the ECU 11 controls all or part of the functions of the system.
  • the communication unit 12 communicates with various devices (other robots, servers, base stations, etc.) outside the robot MB, and transmits and receives various data.
  • the communication unit 12 communicates with servers and the like on an external network via a base station or access point using wireless communication methods such as 5G (5th generation mobile communication system), LTE (Long Term Evolution), and DSRC (Dedicated Short Range Communications).
  • the communication unit 12 can communicate with terminals in the vicinity of the robot MB using P2P (Peer To Peer) technology.
  • Terminals in the vicinity of the robot MB can be, for example, terminals attached to a moving object that moves at a relatively slow speed, such as a pedestrian, terminals installed at a fixed position in a store, or MTC (Machine Type Communication) terminals.
  • the communication unit 12 can, for example, receive from the outside a program for updating the software that controls the operation of the robot MB (over the air).
  • the communication unit 12 can further receive map information and information about the surroundings of the robot MB from the outside.
  • the communication unit 12 can transmit information about the robot MB and information about the surroundings of the robot MB to the outside.
  • Information about the robot MB that the communication unit 12 transmits to the outside includes data indicating the state of the robot MB, the recognition results by the recognition unit 23, etc.
  • the map information storage unit 13 stores one or both of a map acquired from outside and a map created by the robot MB.
  • the map information storage unit 13 stores a three-dimensional high-precision map, a global map that is less accurate than a high-precision map and covers a wide area, etc.
  • a high-precision map is, for example, a point cloud map.
  • a point cloud map is a map made up of a point cloud (point cloud data).
  • the point cloud map may be provided from an external server, etc., or may be created by the robot MB as a map for matching with a local map based on the sensing results of the camera 31, radar 32, LiDAR 33, etc., and stored in the map information storage unit 13.
  • the external recognition sensor 30 includes various sensors used to recognize the external situation of the robot MB, and supplies sensor data from each sensor to each part of the information processing device 10.
  • the type and number of sensors included in the external recognition sensor 30 are arbitrary.
  • the external recognition sensor 30 includes a camera 31, a radar 32, a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) 33, and an ultrasonic sensor 34.
  • the imaging method of the camera 31 is not particularly limited.
  • cameras of various imaging methods such as a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, and an infrared camera, which are imaging methods capable of distance measurement, can be applied to the camera 31 as necessary.
  • the camera 31 may simply acquire a captured image without distance measurement.
  • the external recognition sensor 30 may include other types of sensors.
  • the external recognition sensor 30 may include a microphone used to detect sounds around the robot MB and the position of the sound source.
  • the storage unit 14 includes at least one of a non-volatile storage medium and a volatile storage medium, and stores data and programs.
  • the storage unit 14 includes, for example, an EEPROM (Electrically Erasable Programmable Read Only Memory) and a RAM (Random Access Memory), and the storage medium may be a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
  • the autonomous driving support unit 15 supports the autonomous driving of the robot MB.
  • the autonomous driving support unit 15 has an analysis unit 20, a behavior planning unit 24, and a motion control unit 25.
  • the analysis unit 20 performs analysis processing of the robot MB and the situation around the robot MB.
  • the analysis unit 20 has a self-position estimation unit 21, a sensor fusion unit 22, and a recognition unit 23.
  • the self-position estimation unit 21 estimates the self-position of the robot MB based on the sensor data from the external recognition sensor 30 and the high-precision map stored in the map information storage unit 13. For example, the self-position estimation unit 21 generates a local map based on the sensor data from the external recognition sensor 30, and estimates the self-position of the robot MB by matching the local map with the high-precision map.
  • the local map is created using a technology such as SLAM (Simultaneous Localization and Mapping).
  • the local map is also used, for example, in the detection and recognition process of the external situation of the robot MB by the recognition unit 23.
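  • The matching step itself is not specified above; as a minimal sketch under the assumption of 2D point clouds, a few iterations of ICP (iterative closest point, here with NumPy and SciPy) can align the local map to the prior map and yield the robot pose as a rigid transform. All names and parameters are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_2d(local_pts, map_pts, iterations=20):
          """Align local_pts (N,2) onto map_pts (M,2); returns rotation R and translation t."""
          map_pts = np.asarray(map_pts, dtype=float)
          src = np.asarray(local_pts, dtype=float).copy()
          R, t = np.eye(2), np.zeros(2)
          tree = cKDTree(map_pts)
          for _ in range(iterations):
              _, idx = tree.query(src)              # nearest map point for each local point
              tgt = map_pts[idx]
              src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
              H = (src - src_c).T @ (tgt - tgt_c)   # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              R_step = Vt.T @ U.T
              if np.linalg.det(R_step) < 0:         # keep a proper rotation (no reflection)
                  Vt[-1] *= -1
                  R_step = Vt.T @ U.T
              t_step = tgt_c - R_step @ src_c
              src = src @ R_step.T + t_step
              R, t = R_step @ R, R_step @ t + t_step
          return R, t                               # pose estimate of the sensor origin
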
  • the sensor fusion unit 22 performs sensor fusion processing to obtain new information by combining multiple different types of sensor data (e.g., image data supplied from the camera 31 and sensor data supplied from the radar 32). Methods for combining different types of sensor data include integration, fusion, and association.
  • the recognition unit 23 executes a detection process to detect the external situation of the robot MB, and a recognition process to recognize the external situation of the robot MB. For example, the recognition unit 23 performs the detection process and the recognition process of the external situation of the robot MB based on information from the external recognition sensor 30, information from the self-position estimation unit 21, information from the sensor fusion unit 22, etc.
  • the recognition unit 23 performs detection processing and recognition processing of objects around the robot MB.
  • Object detection processing is, for example, processing to detect the presence or absence, size, shape, position, movement, etc. of an object.
  • Object recognition processing is, for example, processing to recognize attributes such as the type of object, and to identify a specific object.
  • detection processing and recognition processing are not necessarily clearly separated, and there may be overlap.
  • the recognition unit 23 detects objects around the robot MB by performing clustering to classify a point cloud based on sensor data from the radar 32, LiDAR 33, etc. into point cloud clusters. This allows the presence or absence, size, shape, and position of objects around the robot MB to be detected.
  • the recognition unit 23 detects the movement of objects around the robot MB by performing tracking to follow the movement of the point cloud clusters classified by clustering. This allows the speed and direction of travel (movement vector) of objects around the robot MB to be detected.
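  • As a compact illustration of the clustering and tracking just described, assuming 2D point clouds, DBSCAN can group points into obstacle candidates, and a greedy centroid match between two consecutive scans gives each candidate a movement vector; the eps, min_samples, and dt values are assumptions.

      import numpy as np
      from sklearn.cluster import DBSCAN

      def cluster_centroids(points, eps=0.3, min_samples=5):
          """Cluster a (N,2) point cloud and return one centroid per cluster (noise ignored)."""
          points = np.asarray(points, dtype=float)
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
          return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]

      def movement_vectors(prev_centroids, curr_centroids, dt=0.1):
          """Greedy nearest-centroid association; returns one velocity per current cluster."""
          vectors = []
          for c in curr_centroids:
              if not prev_centroids:
                  vectors.append(np.zeros(2))
                  continue
              p = min(prev_centroids, key=lambda q: np.linalg.norm(c - q))
              vectors.append((c - p) / dt)
          return vectors
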
  • the recognition unit 23 detects or recognizes people, structures, signs, etc. based on image data supplied from the camera 31.
  • the recognition unit 23 may recognize the types of objects around the robot MB by performing recognition processing such as semantic segmentation.
  • the behavior planning unit 24 creates a behavior plan for the robot MB.
  • the behavior planning unit 24 creates a behavior plan by performing path planning and path following processing.
  • Path planning is a process of planning a rough path from the start to the goal.
  • Path planning also includes trajectory planning: a process of generating a trajectory along which the robot MB can proceed safely and smoothly in the vicinity of the robot MB on the planned path, taking into account the motion characteristics of the robot MB.
  • Path following is a process of planning operations for traveling safely and accurately along the path planned by the path planning within a planned time.
  • the behavior planning unit 24 can calculate a target speed and a target angular velocity of the robot MB, for example, based on the results of the path following processing.
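  • The path-following computation is not spelled out above; as one hedged example, a pure-pursuit follower can turn the planned path into the target speed and target angular velocity mentioned here. The look-ahead distance and cruise speed are assumed values.

      import numpy as np

      def pure_pursuit(pose_xy, heading, path, lookahead=0.8, cruise_speed=0.6):
          """pose_xy: robot position, heading: yaw in rad, path: (N,2) waypoints."""
          pose_xy = np.asarray(pose_xy, dtype=float)
          path = np.asarray(path, dtype=float)
          dists = np.linalg.norm(path - pose_xy, axis=1)
          ahead = np.where(dists >= lookahead)[0]
          target = path[ahead[0]] if ahead.size else path[-1]
          dx, dy = target - pose_xy
          # Express the look-ahead point in the robot frame.
          x_r = np.cos(heading) * dx + np.sin(heading) * dy
          y_r = -np.sin(heading) * dx + np.cos(heading) * dy
          L = np.hypot(x_r, y_r)
          curvature = 2.0 * y_r / (L ** 2) if L > 1e-6 else 0.0
          v = cruise_speed
          omega = v * curvature                     # target angular velocity
          return v, omega
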
  • the motion control unit 25 controls the motion of the robot MB to realize the action plan created by the action planning unit 24.
  • the motion control unit 25 moves the robot MB in accordance with the action of the user US.
  • the motion control unit 25 performs acceleration/deceleration control and directional control so that the robot MB proceeds along a trajectory calculated by the trajectory plan.
  • the motion control unit 25 performs cooperative control aimed at realizing functions such as collision avoidance or impact mitigation, and following driving.
  • the HMI 16 inputs various data and instructions, and presents various data to the user US.
  • the HMI 16 is equipped with an input device through which a person can input data.
  • the HMI 16 generates input signals based on the data and instructions input through the input device, and supplies them to each part of the robot MB.
  • the HMI 16 includes input devices such as a touch panel, buttons, and switches.
  • the HMI 16 may further include an input device that allows information to be input by a method other than manual operation, such as voice or gestures.
  • the HMI 16 may use an externally connected device such as an infrared or radio remote control device, or a mobile device or wearable device as an input device.
  • the HMI 16 generates visual and auditory information for the user US.
  • the HMI 16 performs output control to control the output, output content, output timing, and output method of each piece of generated information.
  • the HMI 16 generates and outputs, as visual information, for example, an operation screen, a guide video GV, and warning displays.
  • the HMI 16 generates and outputs, as auditory information, for example, voice guidance, warning sounds, warning messages, etc.
  • As an output device for outputting visual information, for example, a projector 40 or a direct-view display 50 can be applied.
  • As an output device for outputting auditory information, for example, an audio speaker, headphones, or earphones can be applied.
  • the HMI 16 functions as an information presentation unit that presents a guide video GV to the user US.
  • the HMI 16 presents a guide video GV from the moving robot MB in accordance with the actions of the user US within the visual field of the user US performing the action.
  • the HMI 16 projects, as the guide video GV, a navigation video NV indicating the location where the user US should perform an action from a projector 40 mounted on the robot MB in accordance with the timing at which the user US should perform the action.
  • the HMI 16 displays, as the guide video GV, an explanatory video EP indicating the content of the action the user US should perform or the method of the action on a direct-view display 50 mounted on the robot MB in accordance with the timing at which the user US should perform the action.
  • The timing at which the action should be performed may be the time when the action is actually performed, or a time slightly before it (earlier by roughly the time it takes the user US to understand the guide video GV), so that the user US can perform the action at leisure while watching the guide video GV.
  • the HMI 16 determines the presentation position of the guide video GV based on the visual field range of the user US inferred from the behavioral procedure.
  • the behavior planning unit 24 plans the movement route RTM of the robot MB so that the presentation position of the guide video GV falls within the visual field range of the user US inferred from the behavioral procedure.
  • The presentation position of the guide video GV means, for the navigation video NV, the position of the object surface (road surface, wall, object on the road surface, etc.) onto which the navigation video NV is projected, and, for the explanatory video EP, the position of the display 50 on which the explanatory video EP is displayed.
  • the HMI 16 monitors the three-dimensional shape of the object surface onto which the guide image GV is projected.
  • the HMI 16 performs correction processing on the guide image GV to correct distortions that occur on the object surface based on the three-dimensional shape of the object surface onto which the guide image GV is to be projected.
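  • One possible form of this correction, under the simplifying assumption that the surface patch is planar: measure where the four corners of the projected rectangle actually land, estimate the homography they imply, and pre-warp the guide image GV with its inverse so the projection appears undistorted (OpenCV is used here purely for illustration).

      import cv2
      import numpy as np

      def prewarp(image, observed_corners):
          """observed_corners: where the image's 4 corners land on the surface, in image pixels."""
          h, w = image.shape[:2]
          intended = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
          observed = np.float32(observed_corners)
          H = cv2.getPerspectiveTransform(intended, observed)   # distortion added by the surface
          return cv2.warpPerspective(image, np.linalg.inv(H), (w, h))  # cancel it in advance
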
  • the presentation position of the guide video GV moves in accordance with the movement of the user US.
  • the action planning unit 24 plans the movement route RTU of the user US based on the action procedure.
  • the HMI 16 presents the guide video GV along the movement route RTU of the user US.
  • the HMI 16 can adjust the presentation timing of the guide video GV based on the situation of the action delay of the user US relative to the schedule shown in the action procedure.
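  • A minimal sketch of this timing adjustment, assuming the delay is measured as the gap between the scheduled and the observed completion time of the previous step; the 1.5-second comprehension margin is an assumed value, not one given in the text.

      def presentation_time(scheduled_step_time, observed_delay, comprehension_margin=1.5):
          """When the guide video for a step should be presented, in seconds on the routine clock."""
          return scheduled_step_time + max(observed_delay, 0.0) - comprehension_margin

      # Example: a step scheduled at t = 30 s with the user running 2 s behind -> present at 30.5 s.
      print(presentation_time(30.0, 2.0))
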
  • the behavior planning unit 24 modifies the route calculated from the behavior procedure based on information about obstacles OB (see FIG. 3) around the user US.
  • the behavior planning unit 24 plans the modified route as the movement route RTU of the user US.
  • the obstacles OB include dynamic obstacles OBD and static obstacles OBS.
  • Dynamic obstacles OBD are obstacles OB whose positions change over time.
  • Static obstacles OBS are obstacles OB whose positions do not change over time and are fixed at a specific location.
  • the behavior planning unit 24 plans a route that can present the guide image GV in the field of view of the user US without being obstructed by obstacles OB around the user US as the movement route RTM of the robot MB.
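  • An illustrative visibility test such a planner might apply to candidate robot positions, assuming obstacles OB are approximated by circles of a fixed radius: a candidate is acceptable only if the line of sight from the robot to the presentation position, and from the user to the presentation position, clears every obstacle. All names and values are assumptions.

      import numpy as np

      def segment_clear(a, b, obstacle_centers, radius=0.3):
          """True if the segment a-b passes no closer than `radius` to any obstacle center."""
          a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
          ab = b - a
          for c in obstacle_centers:
              c = np.asarray(c, dtype=float)
              t = np.clip(np.dot(c - a, ab) / max(np.dot(ab, ab), 1e-9), 0.0, 1.0)
              if np.linalg.norm(a + t * ab - c) < radius:
                  return False
          return True

      def candidate_ok(robot_pos, user_pos, present_pos, obstacle_centers):
          return (segment_clear(robot_pos, present_pos, obstacle_centers) and
                  segment_clear(user_pos, present_pos, obstacle_centers))
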
  • the vehicle control unit 17 controls the mechanical systems of the robot MB.
  • This control includes brake control, motor control, etc.
  • FIG. 3 is a diagram showing an example of application of the robot MB to action support in a dance game.
  • Fig. 4 to Fig. 6 are diagrams showing an example of an information processing flow. The flows in Fig. 4 to Fig. 6 will be described below with reference to Fig. 3.
  • the robot MB loads the dance routine data from the memory unit 14 (step SA1).
  • the dance routine includes information on the action procedures related to the dance movements, timing, and dance path (including the position of the stepping feet).
  • the dance routine further includes information on the song used in the dance, the movement path of the robot MB, and the projection position of the guide image GV.
  • the recognition unit 23 scans the surrounding environment using the external recognition sensor 30, and confirms static obstacles OBS around the robot MB and the user US (step SA2).
  • the recognition unit 23 determines whether there is enough space around the user US to perform the entire dance routine (step SA3).
  • the space is determined using known spatial recognition technology. If there is not enough space (step SA3: No), the HMI 16 displays an error message on the display 50 and stops the execution of the dance routine (step SA13).
  • the recognition unit 23 uses the external recognition sensor 30 to scan the surrounding environment and check for dynamic obstacles OBD and occlusions around the robot MB and the user US (step SA4).
  • Occlusion means a state in which an obstacle OB gets between the user US and the robot MB, making the guide image GV invisible. If possible, the behavior planning unit 24 changes the movement path RTM of the robot MB to avoid the dynamic obstacles OBD and occlusions (step SA5).
  • the recognition unit 23 uses the external recognition sensor 30 to scan the surrounding environment and check whether there is enough space around the robot MB and the user US to perform the dance routine for the next n seconds (n is any positive number) (step SA6). "n seconds" is calculated from the movement speed of the user US and is the time until the user US can safely stop the action.
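  • A rough sketch of this look-ahead check, assuming an obstacle point cloud and a simple corridor test around the upcoming part of the dance path; the formula used to derive "n seconds" from the user's speed is a placeholder assumption, not the patent's definition.

      import numpy as np

      def lookahead_seconds(user_speed, stop_margin=1.0):
          """Assumed stand-in for 'n seconds': time for the user to come safely to a stop."""
          return stop_margin + 0.5 * max(user_speed, 0.0)

      def corridor_is_free(upcoming_path, obstacle_points, margin=0.5):
          """upcoming_path, obstacle_points: (N,2) and (M,2) arrays of positions."""
          upcoming_path = np.asarray(upcoming_path, dtype=float)
          obstacle_points = np.asarray(obstacle_points, dtype=float)
          if obstacle_points.size == 0:
              return True
          d = np.linalg.norm(upcoming_path[:, None, :] - obstacle_points[None, :, :], axis=2)
          return bool(d.min() > margin)
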
  • If there is not enough space (step SA6: No), the HMI 16 displays an error on the display 50 and stops the execution of the dance routine (step SA13). If there is enough space (step SA6: Yes), the recognition unit 23 automatically tracks the movements of the user US using the external recognition sensor 30 and starts the dance routine (step SA7).
  • the HMI 16 projects and displays the guide video GV using the projector 40 and display 50.
  • the motion control unit 25 moves the robot MB along the movement path RTM planned by the action planning unit 24 (step SA8).
  • the recognition unit 23 uses the external recognition sensor 30 to check whether the user US is dancing within the allowable error (step SA9).
  • the allowable error refers to the error permitted for the motion method and motion timing (target motion) specified in the action procedure.
  • the allowable error can be set as desired by the system designer.
  • If the user US is dancing within the allowable error (step SA9: Yes), the recognition unit 23 assigns a score to the dancing skill of the user US to increase the motivation of the user US (step SA12). Then, the process returns to step SA4, and the environmental scan and execution of the dance routine are repeated.
  • If the user US is not dancing within the allowable error (step SA9: No), the recognition unit 23 checks whether the playback speed of the song played as background music is already at the minimum speed (step SA10). If the song playback speed is not the minimum speed (step SA10: No), the motion control unit 25 slows down the song playback speed to slow down the dancing speed (step SA11). Then, the processing from step SA9 onwards is repeated until the user US can dance within the allowable error.
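  • A hedged sketch of the tolerance check in step SA9 and the playback slow-down in steps SA10 and SA11; the position and timing tolerances and the speed ladder are assumed values of the kind a system designer might choose.

      import numpy as np

      def within_allowable_error(observed_pos, target_pos, observed_t, target_t,
                                 pos_tol=0.4, time_tol=0.5):
          """True if the observed motion stays within the assumed position/timing tolerances."""
          pos_err = np.linalg.norm(np.asarray(observed_pos, dtype=float) -
                                   np.asarray(target_pos, dtype=float))
          return pos_err <= pos_tol and abs(observed_t - target_t) <= time_tol

      def slower_playback(current_rate, min_rate=0.5, step=0.1):
          """Step the song playback rate down, never going below the minimum speed."""
          return max(round(current_rate - step, 2), min_rate)
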
  • FIG. 7 shows a modified example of step SA13.
  • In step SA13, when there is insufficient space, an error message is displayed and the dance routine is stopped.
  • However, if the problem can be dealt with by modifying the dance routine, it is not necessary to stop the entire dance. For example, by changing the dance route or slightly modifying the dance pattern, it may be possible to perform the dance even in a small space.
  • This type of response can be applied to various routines other than dance routines (including tasks and actions other than games).
  • Figure 7 shows an example of dealing with this by changing the routine. If there is not enough space around the user US to perform the routine indicated in the action procedure, the HMI 16 changes the routine to match the amount of space around the user US. The HMI 16 then asks the user US whether or not to execute processing using the changed routine, and if the user US requests the changed routine, the HMI 16 implements it.
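  • One way such a routine change could be realized (an assumption made for illustration, not a method fixed by the text): scale the routine path about its centroid until its footprint fits the free space, then ask the user US for confirmation before running the modified routine.

      import numpy as np

      def fit_routine_to_space(path, free_width, free_depth, margin=0.2):
          """Uniformly shrink a (N,2) routine path so it fits a free_width x free_depth area."""
          path = np.asarray(path, dtype=float)
          center = path.mean(axis=0)
          extent = path.max(axis=0) - path.min(axis=0)
          scale = min(1.0,
                      (free_width - margin) / max(extent[0], 1e-6),
                      (free_depth - margin) / max(extent[1], 1e-6))
          return center + (path - center) * scale, scale

      def confirm_with_user(scale):
          answer = input(f"Run the routine at {scale:.0%} of its original size? [y/N] ")
          return answer.strip().lower() == "y"
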
  • FIG. 8 is a diagram showing an example of application of the robot MB to behavior support during emergency evacuation.
  • FIG. 9 to FIG. 11 are diagrams showing an example of an information processing flow. Next, the flow of FIG. 9 to FIG. 11 will be described.
  • the robot MB loads emergency evacuation routine data from the memory unit 14 (step SB1).
  • the emergency evacuation routine includes an emergency scenario, an appropriate evacuation route in accordance with the scenario (including marking of dangerous floors and walls), and information on action procedures for evacuation.
  • the emergency evacuation routine further includes information on the movement route of the robot MB and the projection position of the guide image GV.
  • the recognition unit 23 scans the surrounding environment using the external recognition sensor 30 and confirms static obstacles OBS around the robot MB and the user US (step SB2).
  • the recognition unit 23 determines, based on the initial scan result, whether there is enough space around the user US to perform all of the emergency evacuation routines (step SB3). If there is not enough space (step SB3: No), the HMI 16 displays an error on the display 50 and stops the execution of the emergency evacuation routine (step SB11).
  • If there is enough space (step SB3: Yes), the recognition unit 23 periodically scans the surrounding environment using the external recognition sensor 30 to check for dynamic obstacles OBD and occlusions around the robot MB and the user US (step SB4).
  • the behavior planning unit 24 changes the movement path RTM of the robot MB and the movement path RTU of the user US, if possible, to avoid the dynamic obstacles OBD and occlusions (step SB5).
  • the recognition unit 23 uses the external recognition sensor 30 to scan the surrounding environment and check whether there is enough space around the robot MB and the user US to perform the emergency evacuation routine for the next n seconds (n is any positive number) (step SB6). "n seconds" is the time until the user US can safely stop his/her actions, and is calculated from the movement speed of the user US, etc.
  • If there is not enough space (step SB6: No), the HMI 16 displays an error on the display 50 and stops the execution of the emergency evacuation routine (step SB11). If there is enough space (step SB6: Yes), the recognition unit 23 automatically tracks the movements of the user US using the external recognition sensor 30 and starts the emergency evacuation routine (step SB7).
  • the HMI 16 projects and displays the guide video GV using the projector 40 and display 50.
  • the operation control unit 25 moves the robot MB along the movement route RTM planned by the action planning unit 24 (step SB8).
  • the recognition unit 23 uses the external recognition sensor 30 to check whether the user US is performing evacuation actions within the allowable error (step SB9).
  • If the user US is performing evacuation actions within the allowable error (step SB9: Yes), the process returns to step SB4, and the environmental scan and emergency evacuation routine are repeated. If the user US's actions are slower than the target actions beyond the allowable error (step SB9: No), the HMI 16 delays the timing of presenting the guide image GV and slows down the speed of the emergency evacuation actions (step SB10). Then, the process from step SB9 onwards is repeated until the user US can perform evacuation actions within the allowable error.
  • FIG. 12 is a diagram showing an example of application of a robot MB to behavior support for factory work.
  • Fig. 13 to Fig. 15 are diagrams showing an example of an information processing flow. The flows of Fig. 13 to Fig. 15 will be described below with reference to Fig. 12.
  • the robot MB loads work routine data from the memory unit 14 (step SC1).
  • the work routine includes information on the work flow, and the appropriate movement path and behavioral procedures related to the work operations in accordance with the work flow.
  • the work routine further includes information on the movement path of the robot MB and the projection position of the guide image GV.
  • the recognition unit 23 scans the surrounding environment using the external recognition sensor 30 and confirms static obstacles OBS around the robot MB and the user US (step SC2).
  • the recognition unit 23 determines whether there is enough space around the user US to perform all of the work routines (step SC3). If there is not enough space (step SC3: No), the HMI 16 displays an error on the display 50 and stops the execution of the work routine (step SC11).
  • If there is enough space (step SC3: Yes), the recognition unit 23 periodically scans the surrounding environment using the external recognition sensor 30 to check for dynamic obstacles OBD and occlusions around the robot MB and the user US (step SC4).
  • the behavior planning unit 24 changes the movement path RTM of the robot MB and the movement path RTU of the user US, if possible, to avoid the dynamic obstacles OBD and occlusions (step SC5).
  • the recognition unit 23 uses the external recognition sensor 30 to scan the surrounding environment and check whether there is enough space around the robot MB and the user US to perform the work routine for the next n seconds (n is any positive number) (step SC6). "n seconds" is calculated from the movement speed of the user US as the time until the user US can safely stop the action.
  • If there is not enough space (step SC6: No), the HMI 16 displays an error on the display 50 and stops execution of the work routine (step SC11). If there is enough space (step SC6: Yes), the recognition unit 23 automatically tracks the movements of the user US using the external recognition sensor 30 and starts the work routine (step SC7).
  • the HMI 16 projects and displays the guide image GV using the projector 40 and display 50.
  • the operation control unit 25 moves the robot MB along the movement route RTM planned by the action planning unit 24, and guides the user US to the next work location (step SC8).
  • the recognition unit 23 uses the external recognition sensor 30 to check whether the user US is performing the work operation within the allowable error (step SC9).
  • If the user US is performing the work movement within the allowable error (step SC9: Yes), the process returns to step SC4, and the environmental scan and execution of the work routine are repeated. If the user US's movement exceeds the allowable error and is slower than the target movement (step SC9: No), the HMI 16 delays the timing of presenting the guide image GV and slows down the work speed (step SC10). Then, the process from step SC9 onwards is repeated until the user US can perform the work movement within the allowable error.
  • FIG. 3 shows an example in which the robot MB is applied to behavioral support in a dance game.
  • the robot MB can also be applied to experiential games other than dance games, or to guiding visitors along tour routes in museums, open houses, etc.
  • the robot MB is applied to behavioral support during emergency evacuation.
  • the robot MB can also be applied to general behavioral support during emergencies such as earthquakes, fires, and wars.
  • the robot MB can show procedures such as how to protect the head and how to open a stuck window.
  • the robot MB can show procedures such as how to put out the fire and how to evacuate a building filled with smoke.
  • the robot MB can show procedures such as how to rescue someone who has been injured, is unconscious, or has stopped breathing.
  • In FIG. 12, the robot MB is applied to behavior support for factory work.
  • the robot MB can be applied to behavior support in various work sites other than factories. Examples of work sites include factories, server farms, and drainage pipe construction, but it is of course possible to apply the robot MB to other work sites as well.
  • the information processing device 10 has an operation control unit 25 and an HMI 16.
  • the operation control unit 25 moves the robot MB in accordance with the action of the user US.
  • the HMI 16 presents a guide image GV showing a recommended action procedure for the user US from the moving robot MB to the visual field of the user US performing the action.
  • the processing of the information processing device 10 is executed by a computer.
  • the program disclosed herein causes the processing of the information processing device 10 to be realized by a computer.
  • the user US can always see the guide video GV, even in a dynamic environment in which the user US is performing an action. Therefore, the robot MB and the user US can have good interactions in a variety of situations.
  • the HMI 16 projects a navigation image NV, which indicates the position where the user US should perform an action, as a guide image GV from a projector 40 mounted on the robot MB in accordance with the timing at which the user US should perform the action.
  • This configuration allows the user US to be guided to an appropriate position and act accordingly.
  • the HMI 16 displays an explanatory video EP, which shows the content of the action that the user US should take or the method of that action, as a guide video GV on a direct-view display 50 mounted on the robot MB in accordance with the timing at which the action of the user US should be performed.
  • the HMI 16 determines the presentation position of the guide image GV based on the visual field range of the user US inferred from the action sequence.
  • the projection position that is easily visible to the user US is appropriately set based on the action procedure.
  • the information processing device 10 has a behavior planning unit 24.
  • the behavior planning unit 24 plans the movement route RTM of the robot MB so that the presentation position of the guide video GV is within the visual field of the user US estimated from the behavior procedure.
  • This configuration ensures that the guide video GV can be viewed by the user US.
  • the behavior planning unit 24 plans the movement route RTU of the user US based on the behavior procedure.
  • the HMI 16 presents the guide image GV along the movement route RTU of the user US.
  • This configuration makes it easier for the user US to view the guide video GV.
  • the behavior planning unit 24 corrects the route calculated from the behavior procedure based on information about obstacles OB around the user US.
  • the behavior planning unit 24 plans the corrected route as the movement route RTU of the user US.
  • This configuration makes it less likely that occlusion will occur due to obstacles OB.
  • the behavior planning unit 24 plans a path that can present the guide image GV in the field of view of the user US without being obstructed by obstacles OB around the user US as the movement path RTM of the robot MB.
  • This configuration makes it less likely that occlusion will occur due to obstacles OB.
  • the HMI 16 performs correction processing on the guide image GV to correct distortion that occurs on the object surface based on the three-dimensional shape of the object surface onto which the guide image GV is projected.
  • This configuration allows the guide image GV to be presented with minimal distortion.
  • the HMI 16 adjusts the timing of presenting the guide image GV based on the action delay status of the user US relative to the schedule shown in the action procedure.
  • the guide video GV is presented at an appropriate time according to the delay situation.
  • the HMI 16 changes the routine to fit the amount of space around the user US.
  • This configuration presents appropriate action steps that are tailored to the surrounding space.
  • Action procedures include motion procedures for a sensory game, work procedures at a workplace, evacuation procedures in the event of an emergency, or first aid procedures for someone who suddenly falls ill.
  • This configuration provides accurate action support in situations where precise action according to procedures is required.
  • the present technology can also be configured as follows.
  • (1) An information processing device comprising: a motion control unit that moves a robot in accordance with a user's action; and an information presentation unit that presents, from the moving robot, within a visual field of the user performing the action, a guide video showing a recommended action procedure for the user.
  • (2) The information processing device according to (1) above, in which the information presentation unit projects, as the guide video, a navigation video indicating a position where the user's action is to be performed, from a projector mounted on the robot in accordance with a timing at which the user's action is to be performed.
  • (3) The information processing device according to (2) above, in which the information presentation unit displays, as the guide video, an explanatory video showing the content or method of the action to be taken by the user on a direct-view display mounted on the robot in accordance with the timing at which the user's action is to be performed.
  • (4) The information processing device according to any one of (1) to (3) above, in which the information presentation unit determines a presentation position of the guide video based on a visual field range of the user estimated from the action procedure.
  • (5) The information processing device according to any one of (1) to (4) above, further comprising an action planning unit that plans a movement route of the robot so that the presentation position of the guide video falls within the visual field range of the user estimated from the action procedure.
  • (6) The information processing device according to (5) above, in which the action planning unit plans a movement route of the user based on the action procedure, and the information presentation unit presents the guide video along the movement route of the user.
  • (7) The information processing device according to (6) above, in which the action planning unit corrects a route calculated from the action procedure based on information about obstacles around the user, and plans the corrected route as the movement route of the user.
  • (8) The information processing device according to any one of (5) to (7) above, in which the action planning unit plans, as the movement route of the robot, a route along which the guide video can be presented in the visual field of the user without being obstructed by obstacles around the user.
  • (9) The information processing device according to any one of (1) to (8) above, in which the information presentation unit performs a correction process for correcting distortion occurring on an object surface onto which the guide video is projected, based on a three-dimensional shape of the object surface.
  • (10) The information processing device according to any one of (1) to (9) above, in which the information presentation unit adjusts a presentation timing of the guide video based on a situation of an action delay of the user with respect to a schedule indicated in the action procedure.
  • (11) The information processing device according to any one of (1) to (10) above, in which, when there is not enough space around the user to perform a routine indicated in the action procedure, the information presentation unit changes the routine in accordance with the size of the space around the user.
  • (12) The information processing device according to any one of (1) to (11) above, in which the action procedure includes an action procedure for a virtual game, a work procedure at a work site, an evacuation procedure in an emergency, or a first aid procedure for a person who suddenly falls ill.
  • (13) An information processing method in which a computer executes processing including: moving a robot in accordance with a user's action; and presenting, from the moving robot, within a visual field of the user performing the action, a guide video showing a recommended action procedure for the user.
  • (14) A program that causes a computer to realize processing including: moving a robot in accordance with a user's action; and presenting, from the moving robot, within a visual field of the user performing the action, a guide video showing a recommended action procedure for the user.
  • 10 Information processing device, 16 HMI (information presentation unit), 24 Action planning unit, 25 Operation control unit, 40 Projector, 50 Display, EP Explanation image, GV Guide image, MB Robot, NV Navigation image, OB Obstacle, RTM Robot movement route, RTU User movement route, US User

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

This information processing device comprises an operation control unit and an information presentation unit. The operation control unit moves a robot in accordance with a user's actions. The information presentation unit presents a guide video showing a recommended user action procedure, the video being presented from the moving robot within the visual field of the user performing an action.
PCT/JP2023/034036 2022-10-28 2023-09-20 Information processing device, information processing method, and program WO2024090078A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-173500 2022-10-28
JP2022173500 2022-10-28

Publications (1)

Publication Number Publication Date
WO2024090078A1 true WO2024090078A1 (fr) 2024-05-02

Family

ID=90830540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/034036 WO2024090078A1 (fr) 2022-10-28 2023-09-20 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2024090078A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007102488A (ja) * 2005-10-04 2007-04-19 Toyota Motor Corp 自律移動装置
US20190135450A1 (en) * 2016-07-04 2019-05-09 SZ DJI Technology Co., Ltd. System and method for automated tracking and navigation
WO2022138476A1 (fr) * 2020-12-23 2022-06-30 パナソニックIpマネジメント株式会社 Procédé de commande de robot, robot et programme

