WO2020203341A1 - Control device, control method, and program - Google Patents

Control device, control method, and program Download PDF

Info

Publication number
WO2020203341A1
WO2020203341A1 (PCT/JP2020/012239)
Authority
WO
WIPO (PCT)
Prior art keywords
information
moving body
input
trajectory
customer service
Prior art date
Application number
PCT/JP2020/012239
Other languages
French (fr)
Japanese (ja)
Inventor
啓輔 前田
隆盛 山口
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2020203341A1 publication Critical patent/WO2020203341A1/en

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47B: TABLES; DESKS; OFFICE FURNITURE; CABINETS; DRAWERS; GENERAL DETAILS OF FURNITURE
    • A47B9/00: Tables with tops of variable height
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions

Definitions

  • The present technology relates to a control device, a control method, and a program, and in particular to a control device, a control method, and a program that, when the movement of a moving body is realized using AI, allow the movement of the moving body to be performed within a predetermined specification.
  • Conventionally, there are moving bodies that move autonomously by creating an environment map through sensing of the surrounding environment.
  • Such moving bodies include automobiles, robots, and airplanes.
  • Among autonomously moving bodies, there are those equipped with so-called AI (Artificial Intelligence), which realizes movement using a neural network generated by machine learning such as deep learning.
  • The present technology was devised in view of such a situation, and makes it possible, when the movement of a moving body is realized using AI, for the movement of the moving body to be performed within a predetermined specification.
  • The control device of one aspect of the present technology includes: an input adjustment unit that adjusts, according to the situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model that outputs output information about movement following the trajectory; a generation unit that generates the output information using the inference model; and a movement control unit that controls the movement of the moving body based on the output information generated using the inference model.
  • In one aspect of the present technology, input information including at least information about a trajectory and information about an obstacle, which is input to an inference model that outputs output information about movement following the trajectory, is adjusted according to the situation of the moving body; the output information is generated using the inference model; and the movement of the moving body is controlled based on the generated output information.
  • FIG. 1 is a diagram showing a usage state of a customer service system according to an embodiment of the present technology.
  • the customer service system of FIG. 1 is used, for example, indoors. There are people (users) in the space where the customer service system is installed.
  • a plurality of cubic customer service robots are prepared in the room.
  • three customer service robots 1-1 to 1-3 are shown.
  • When it is not necessary to distinguish between the customer service robots 1-1 to 1-3, they are collectively referred to as the customer service robot 1.
  • The customer service robot 1 is a moving body that moves on the floor surface. On the bottom surface of the customer service robot 1, components such as tires used for moving the customer service robot 1 are provided.
  • The customer service robot 1 has a function of searching for users in the room based on images taken by a camera or the like, and of approaching a user detected by the search to serve that user. For example, the customer service robot 1 performs customer service to ask the user to answer a questionnaire.
  • the customer service system using the customer service robot 1 is used, for example, in an exhibition venue, a concert venue, a movie theater, an amusement facility, or the like.
  • FIG. 2 is a diagram showing an example of the posture of the customer service robot 1.
  • The state of the customer service robot 1 shown in A of FIG. 2 is the state at the time of movement. While moving to a destination, the customer service robot 1 moves in a substantially cubic state. Even when the customer service robot 1 is waiting at a predetermined place without serving customers, it is in the substantially cubic state shown in A of FIG. 2.
  • The state of the customer service robot 1 shown in B of FIG. 2 is the state at the time of interaction, that is, when customer service is being provided to the target user.
  • the customer service robot 1 controls its own posture with the top plate raised in order to facilitate work on the top plate.
  • the customer service robot 1 is provided with an arm portion for raising and lowering the top plate.
  • FIG. 3 is an enlarged view of the customer service robot 1 at the time of interaction.
  • As shown by the broken line, the top plate 12 has a built-in data processing terminal 13, such as a tablet terminal having a display equipped with a touch panel.
  • At the time of interaction, characters and images forming the questionnaire are displayed on the display provided in the range indicated by the broken line.
  • the user inputs data such as an answer to a questionnaire by operating a button displayed on the display of the data processing terminal 13 with a finger.
  • the top plate 12 is used as a desk when the user performs work such as answering a questionnaire.
  • When the interaction ends, the customer service robot 1 lowers the top plate 12 to close the upper surface of the housing 11, and returns to its home position in the simple box-like state shown in A of FIG. 2.
  • The customer service system of FIG. 1 is thus a system in which the customer service robot 1, which blends into the space like a mere box, approaches a user and changes its posture as if asking the user to fill in a questionnaire.
  • A user who sees the top plate 12 of the customer service robot 1 that has moved near him or her can intuitively understand that he or she is being asked to answer the questionnaire.
  • the user can answer the questionnaire in a form of communicating with the customer service robot 1.
  • FIG. 4 is an exploded view of the housing 11.
  • panels 22-1 to 22-4 are attached to the side surfaces of the box-shaped main body 21.
  • Panels 22-1 to 22-4 are resin panels that serve as, for example, half mirrors.
  • a depth camera 23 is provided above the front of the main body 21. The shooting by the depth camera 23 is performed through the panel 22-1 attached to the front. LiDAR 24 is provided below the front surface of the main body 21.
  • a columnar support arm 25 is provided on the upper surface of the main body 21. By expanding and contracting the support arm 25 or moving it in the vertical direction, the raising and lowering of the top plate 12 fixed to the upper end of the support arm 25 is controlled. Inside the main body 21, a drive unit such as a motor or a gear for expanding / contracting or moving the support arm 25 in the vertical direction is provided.
  • Inside the main body 21, a computer for performing various processes, a moving mechanism for the tires, a power supply, and the like are also provided.
  • Each customer service robot 1 shown in FIG. 1 has the above configuration.
  • FIG. 5 is a diagram showing an example of a track plan.
  • the movement of the customer service robot 1 is performed according to the trajectory (route) planned by the customer service robot 1 itself.
  • The trajectory plan by the customer service robot 1 is made, as shown by the broken-line arrow in FIG. 5, by connecting the position P1, which is the self-position of the customer service robot 1, and the position P2, which is the position of the target user U1, with a straight line, and setting the coordinates and velocity at each position on the trajectory between positions P1 and P2.
  • the coordinates are set based on, for example, a predetermined position in the space where the customer service robot 1 is prepared.
  • The speed is the moving speed of the customer service robot 1, and is composed of a translational speed and an angular velocity.
  • In the example of FIG. 5, a trajectory composed of straight lines is planned, but a trajectory consisting of a combination of straight lines and curves, or a trajectory composed only of curves, may also be planned. A minimal sketch of such a straight-line plan follows.
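  • As a concrete illustration of such a plan, the following is a minimal Python sketch under assumed names (`Waypoint` and `plan_straight_trajectory` are hypothetical, not from the patent): it connects the self-position and the goal position with a straight line and assigns coordinates and a velocity to every position on the trajectory.

```python
from dataclasses import dataclass
import math

@dataclass
class Waypoint:
    x: float   # coordinates relative to a fixed origin in the room
    y: float
    v: float   # translational speed [m/s]
    w: float   # angular velocity [rad/s]

def plan_straight_trajectory(p1, p2, n_points=20, cruise_speed=0.5):
    """Connect self-position p1 and goal position p2 with a straight line,
    setting coordinates and velocity at each position on the trajectory."""
    (x1, y1), (x2, y2) = p1, p2
    heading = math.atan2(y2 - y1, x2 - x1)
    waypoints = []
    for i in range(n_points + 1):
        t = i / n_points
        waypoints.append(Waypoint(
            x=x1 + t * (x2 - x1),
            y=y1 + t * (y2 - y1),
            v=cruise_speed,   # constant speed on a straight segment
            w=0.0,            # no rotation once aligned with the heading
        ))
    return heading, waypoints
```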
  • The trajectory plan is repeated, without changing the target, after the target user U1 is determined based on the image taken by the camera. That is, the customer service robot 1 continues to recognize the user U1 based on images taken by the camera and the like, and repeatedly plans the trajectory so as to follow the user U1. In this way, the once-planned trajectory is appropriately modified according to the situation at each moment.
  • the space in which the customer service robot 1 is prepared is a space in which there are a plurality of users other than the target user U1.
  • the trajectory correction is also performed when a user other than the target user U1 is detected as an obstacle.
  • FIG. 6 is a diagram showing an example of trajectory correction.
  • The situation shown in FIG. 6 is one in which the customer service robot 1 has started moving according to the trajectory shown in FIG. 5 and a user serving as an obstacle has been detected at the position P11 on the trajectory.
  • the position P11 is a position closer to the position P2, which is the position of the target user, than the position P1.
  • In this case, the trajectory is modified so as to avoid the obstacle.
  • The customer service robot 1 advances to the position P12 on the trajectory indicated by the broken line, and obstacle avoidance is started at the position P12. After avoiding the obstacle and moving to the position P13 on the original trajectory, the movement continues according to the original trajectory indicated by the broken line.
  • That is, the movement of the customer service robot 1 once deviates from the trajectory in order to avoid the obstacle, and then follows the original trajectory after the avoidance.
  • the movement of the customer service robot 1 that follows the trajectory is performed using AI (Artificial Intelligence).
  • the customer service robot 1 is a device equipped with AI.
  • FIG. 7 is a diagram showing an example of a trajectory tracking model of the customer service robot 1.
  • the AI mounted on the customer service robot 1 is realized by the trajectory tracking model shown in FIG.
  • the trajectory tracking model is an inference model composed of a neural network generated by performing machine learning such as deep learning.
  • the trajectory tracking model is a model that inputs self-position information, obstacle map information, and trajectory information, and outputs velocity information.
  • the self-position information that is input to the trajectory tracking model is information that represents the self-position of the customer service robot 1 in the space where the customer service robot 1 is prepared.
  • the obstacle map information is information representing a map of the space where the customer service robot 1 is prepared, including information such as the position, size, and shape of the obstacle.
  • The obstacle map information is generated based on, for example, a distance image generated by the depth camera 23 and distance information to objects measured by LiDAR 24.
  • the trajectory information is information representing the trajectory planned by the customer service robot 1 itself.
  • the speed information that is the output of the trajectory tracking model is information that represents the speed of the customer service robot 1.
  • the speed information represents the translational speed and the angular velocity at each position of the customer service robot 1.
  • The learning of the trajectory tracking model is performed by, for example, taking self-position information, obstacle map information, and trajectory information as inputs, and giving the velocity information for following the trajectory and for avoiding obstacles on the trajectory. That is, the learning of the trajectory tracking model is performed with trajectory tracking and obstacle avoidance as evaluation items. A hypothetical sketch of this model's interface follows.
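  • The patent does not disclose the network itself, so the class below is only a hypothetical stand-in sketching the model's interface: it maps (self-position information, obstacle map information, trajectory information) to velocity information (translational speed and angular velocity), with a tiny randomly initialized network standing in for the trained trajectory tracking model.

```python
import numpy as np

class TrajectoryTrackingModel:
    """Stand-in for the learned trajectory tracking model: maps
    (self-position, obstacle map, trajectory) to (translational speed,
    angular velocity). A real model would be a trained neural network."""

    def __init__(self, rng=None):
        rng = rng or np.random.default_rng(0)
        # Tiny randomly initialised MLP, purely to make the sketch runnable.
        self.w1 = rng.normal(size=(16, 8)) * 0.1
        self.w2 = rng.normal(size=(8, 2)) * 0.1

    def infer(self, self_position, obstacle_map, trajectory):
        # Flatten the inputs into one feature vector; np.resize pads or
        # truncates to the fixed input size expected by the toy network.
        feats = np.concatenate([
            np.ravel(self_position), np.ravel(obstacle_map), np.ravel(trajectory)
        ]).astype(float)
        feats = np.resize(feats, 16)
        hidden = np.tanh(feats @ self.w1)
        v, w = hidden @ self.w2
        return float(v), float(w)  # translational speed, angular velocity
```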
  • By controlling the speed of the customer service robot 1 based on the velocity information output from the trajectory tracking model, movement in which the customer service robot 1 follows the trajectory while avoiding obstacles, as described with reference to FIG. 6, is realized.
  • the movement of the customer service robot 1 is controlled according to the velocity information obtained by planning the trajectory to the position of the target user and inputting the information of the planned trajectory into the trajectory tracking model.
  • Although the trajectory planning itself is not a black box, the way the actual movement, including obstacle avoidance, is determined can be said to be a black box because it uses the trajectory tracking model.
  • Therefore, since it is unknown what kind of output will be obtained from the trajectory tracking model, there is a possibility that the customer service robot 1 cannot be moved as the designer expects.
  • FIG. 8 is a diagram showing an example of movement of the customer service robot 1.
  • the colored user U1 is a target user.
  • the users U11 and U12 are users who are not targets. Since the users U11 and U12 are treated as obstacles, the customer service robot 1 moves so as to avoid the users U11 and U12.
  • For example, as shown in A of FIG. 8, the customer service robot 1 may treat not only the non-target users but also the target user U1 as an obstacle, and move so as to avoid the target user U1.
  • This is because the trajectory tracking model uses trajectory tracking and obstacle avoidance as evaluation items.
  • To prevent such movement, when the customer service robot 1 approaches the target user U1, the angular velocity is decelerated. The range indicated by the broken-line circle is preset as the range in which the deceleration of the angular velocity is started.
  • When the customer service robot 1 stays in the area near the target user U1 for a certain period of time, it is determined that the robot has arrived at the goal. If it is determined that the robot has arrived at the goal, the movement of the customer service robot 1 is terminated, and an operation for interaction is performed.
  • The range indicated by the broken-line circle, centered on the goal position, is set in advance as the range for determining that the goal has been reached.
  • Such movement of the customer service robot 1 is realized not by using the velocity information output from the trajectory tracking model as it is, but by appropriately adjusting that velocity information and controlling the movement of the customer service robot 1 using the adjusted velocity information.
  • the input to the trajectory tracking model is adjusted, and the inference by the trajectory tracking model is performed using the adjusted input.
  • FIG. 10 is a block diagram showing a configuration example of a customer service system.
  • the customer service system includes a customer service robot 1 and a control device 71.
  • the customer service robot 1 and the control device 71 are connected via wireless communication.
  • the customer service robot 1 is composed of a control unit 51, a moving unit 52, an elevating control unit 53, a camera 54, a sensor 55, a communication unit 56, and a power supply unit 57.
  • the data processing terminal 13 is built in the top plate 12 of the customer service robot 1.
  • the control unit 51 is composed of a computer.
  • In the control unit 51, the CPU executes a predetermined program and controls the overall operation of the customer service robot 1.
  • the control unit 51 has a function as a control device that controls various operations of the customer service robot 1, including the movement of the customer service robot 1 as a moving body.
  • the moving unit 52 rotates the tires by driving the motor and gears, and realizes the movement of the customer service robot 1.
  • the moving unit 52 functions as a moving unit that realizes the movement of the customer service robot 1 while controlling the moving speed and the moving direction (translation speed and angular velocity) according to the control by the control unit 51.
  • the elevating control unit 53 controls the expansion and contraction of the support arm 25 by driving a motor and gears.
  • the camera 54 is composed of the depth camera 23 of FIG. 4 that captures a distance image, an RGB camera that captures an RGB image, an IR camera that captures an IR image, and the like.
  • the image taken by the camera 54 is output to the control unit 51.
  • The sensor 55 is composed of various sensors such as an acceleration sensor, a gyro sensor, a motion sensor, an encoder provided in the moving unit 52 for detecting the amount of rotation of the tires, and LiDAR 24. Information representing the sensing result by the sensor 55 is output to the control unit 51.
  • At least one of the camera 54 and the sensor 55 may be provided outside the customer service robot 1.
  • the image taken by the camera 54 provided outside the customer service robot 1 and the information representing the sensing result by the sensor 55 are transmitted to the customer service robot 1 via wireless communication.
  • the communication unit 56 performs wireless communication with the control device 71.
  • the communication unit 56 receives, for example, the information transmitted from the control device 71 and outputs the information to the control unit 51.
  • the power supply unit 57 has a battery.
  • the power supply unit 57 supplies power to each unit of the customer service robot 1.
  • the control device 71 is composed of a data processing device such as a PC.
  • the control device 71 functions as a higher-level system that controls the behavior of each customer service robot 1. For example, the control device 71 instructs each customer service robot 1 to start searching for a user.
  • FIG. 11 is a block diagram showing a functional configuration example of the control unit 51.
  • At least a part of the functional units shown in FIG. 11 is realized by executing a predetermined program by the CPU of the computer constituting the control unit 51.
  • The trajectory tracking unit 107 is composed of an AI trajectory tracking unit 111 and a monitoring unit 112.
  • the environment sensing unit 101 senses the environment around the customer service robot 1 by driving the camera 54 and the sensor 55.
  • Environmental information representing the environmental sensing result is supplied to the self-position identification unit 102 and the obstacle map generation unit 103.
  • The environmental information includes, for example, a distance image taken by the depth camera 23 and distance information to objects measured by LiDAR 24.
  • the environmental information may include information representing the measurement results of each sensor constituting the sensor 55, such as information representing the amount of rotation of the tire detected by the encoder.
  • The self-position identification unit 102 identifies the self-position, which is the current position of the customer service robot 1, based on the environmental information supplied from the environment sensing unit 101. Identification of the self-position is performed by dead reckoning or star reckoning.
  • Dead reckoning is a method of estimating the self-position using the output of a sensor inside the robot, such as an axle encoder or IMU (Inertial Measurement Unit).
  • Star reckoning is a method of estimating the self-position based on the situation of the outside world, such as marker recognition using the camera 54 or recognition using LiDAR SLAM. Dead reckoning and star reckoning may be switched according to the position of the customer service robot 1.
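  • As a minimal illustration of the dead reckoning side (the function name and the differential-drive assumption are ours, not the patent's), the pose can be propagated from the travel distances reported by the axle encoders:

```python
import math

def dead_reckoning_step(pose, d_left, d_right, wheel_base):
    """Update pose (x, y, theta) from the left/right wheel travel distances
    measured by the axle encoders (a minimal differential-drive model)."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # distance moved by the body
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Midpoint approximation: integrate along the average heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```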
  • The own machine information, including the self-position information representing the self-position identification result, is supplied to the arrival determination unit 106 and the trajectory tracking unit 107.
  • the own machine information also includes information indicating the speed of the customer service robot 1 as appropriate.
  • the obstacle map generation unit 103 generates an obstacle map, which is a map showing the arrangement status of obstacles existing in the surroundings, based on the environmental information supplied from the environment sensing unit 101. Obstacles in the surroundings include users.
  • the obstacle map information representing the obstacle map generated by the obstacle map generation unit 103 is supplied to the trajectory tracking unit 107.
  • the application unit 104 searches for users and determines a target user.
  • the application unit 104 outputs the information of the target user to the trajectory planning unit 105 and instructs the start of the movement.
  • the information output to the trajectory planning unit 105 includes, for example, information on the distance and direction to the target user.
  • the trajectory planning unit 105 sets the position of the target user as the goal position and plans the trajectory to the goal position. As described above, the trajectory plan is performed by connecting the position of the customer service robot 1 and the position of the target user with a straight line and setting the coordinates and the speed at each position on the straight line.
  • the track planning unit 105 outputs track information representing the planned track to the arrival determination unit 106.
  • the arrival determination unit 106 determines whether or not the goal has been reached based on the self-position represented by the self-position information supplied from the self-position identification unit 102.
  • FIG. 12 is a diagram showing an example of a region set in the vicinity of the goal position.
  • a goal area A1 and a relaxation area A2 are set around the position P2 which is a position in front of the target user.
  • The goal area A1 is an area such that, when the customer service robot 1 enters it, the robot is determined to have arrived at the goal.
  • That is, when the customer service robot 1 enters the goal area A1, the arrival determination unit 106 determines that the robot has arrived at the goal.
  • the relaxation area A2 is an area determined as having reached the goal when the customer service robot 1 stays in the area for a certain period of time or longer.
  • the relaxation region A2 is set as a region wider than the goal region A1 so as to include the goal region A1.
  • When the customer service robot 1 enters the relaxation area A2, the arrival determination unit 106 starts counting the staying time.
  • When the staying time reaches a certain period, the arrival determination unit 106 determines that the robot has arrived at the goal.
  • the relaxation area A2 is an area in which the condition for determining that the goal has been reached is relaxed.
  • the relaxation area A2 is an area where the translational speed is decelerated (decreased) when the customer service robot 1 enters the area.
  • The deceleration of the translational speed is started, for example, after a certain period of time has elapsed from when the customer service robot 1 entered the relaxation area A2.
  • In this case, the arrival determination unit 106 decelerates the translational speed represented by the trajectory information supplied from the trajectory planning unit 105, and outputs trajectory information including the adjusted translational speed to the trajectory tracking unit 107.
  • In this way, the arrival determination unit 106 determines whether or not the goal has been reached, and also functions as an input adjustment unit that adjusts the trajectory information supplied from the trajectory planning unit 105, which is an input to the trajectory tracking model, according to the situation of the customer service robot 1. A sketch of this role is given below.
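  • A sketch of this input adjustment role, under assumed area radii, dwell time, and deceleration coefficient (none of these values appear in the patent), might look as follows; it reuses the hypothetical `Waypoint` objects sketched earlier.

```python
import math

GOAL_RADIUS = 0.3         # goal area A1 radius [m] (assumed value)
RELAX_RADIUS = 1.0        # relaxation area A2 radius [m] (assumed value)
DWELL_BEFORE_RELAX = 2.0  # staying time in A2 before relaxing the goal condition [s]
DECEL_COEFF = 0.8         # deceleration coefficient applied to translational speed (< 1)

class ArrivalDeterminer:
    """Sketch of the input adjustment unit: relax the goal condition after a
    dwell time in the relaxation area, and decelerate the translational speed
    carried by the trajectory information before it reaches the model."""

    def __init__(self):
        self.entered_relax_at = None

    def adjust(self, self_position, goal_position, waypoints, now):
        dist = math.dist(self_position, goal_position)
        if dist > RELAX_RADIUS:            # outside A2: pass trajectory through unchanged
            self.entered_relax_at = None
            return waypoints, False
        if self.entered_relax_at is None:  # entered A2: start counting the staying time
            self.entered_relax_at = now
        relaxed = (now - self.entered_relax_at) >= DWELL_BEFORE_RELAX
        if relaxed:
            for wp in waypoints:           # same coefficient at every position (cf. FIG. 17)
                wp.v *= DECEL_COEFF
        slow = all(wp.v <= 0.05 for wp in waypoints)
        arrived = slow and (dist <= GOAL_RADIUS or relaxed)
        return waypoints, arrived
```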
  • The AI trajectory tracking unit 111 of the trajectory tracking unit 107 has the above-mentioned trajectory tracking model.
  • The AI trajectory tracking unit 111 inputs, to the trajectory tracking model, the self-position information included in the own machine information supplied from the self-position identification unit 102, the obstacle map information supplied from the obstacle map generation unit 103, and the trajectory information supplied from the arrival determination unit 106.
  • In response to the input of these pieces of information, the AI trajectory tracking unit 111 outputs the velocity information output from the trajectory tracking model to the monitoring unit 112.
  • In this way, the AI trajectory tracking unit 111 functions as a generation unit that inputs self-position information, obstacle map information, and trajectory information to the trajectory tracking model as input information and generates velocity information as output information using the trajectory tracking model. Instead of using all of the self-position information, the obstacle map information, and the trajectory information as input information, only the obstacle map information and the trajectory information may be used, or information different from these may be used as the input information.
  • the monitoring unit 112 adjusts the speed information supplied from the AI trajectory tracking unit 111 as the output of the trajectory tracking model.
  • The velocity information is adjusted by the monitoring unit 112 according to the distance from the self-position, represented by the self-position information included in the own machine information supplied from the self-position identification unit 102, to the goal position.
  • The adjustment is performed, for example, by decelerating (decreasing) the angular velocity, of the translational speed and the angular velocity represented by the velocity information supplied from the AI trajectory tracking unit 111.
  • FIG. 13 is a diagram showing an example of a region set in the vicinity of the goal position.
  • a deceleration end area A11 and a deceleration start area A12 are set around the position P2 which is a position in front of the target user.
  • the deceleration end area A11 is an area where the deceleration of the angular velocity ends when the customer service robot 1 enters the area.
  • When the customer service robot 1 enters the deceleration end area A11, the monitoring unit 112 ends the deceleration of the angular velocity that was started when the customer service robot 1 entered the deceleration start area A12.
  • That is, the deceleration of the angular velocity is performed while the customer service robot 1 is outside the deceleration end area A11 and inside the deceleration start area A12.
  • In the deceleration end area A11, the angular velocity of the customer service robot 1 becomes 0, and the customer service robot 1 goes straight while facing the target user.
  • the deceleration start area A12 is an area where the deceleration of the angular velocity is started when the customer service robot 1 enters the area.
  • the deceleration start region A12 is set as a region wider than the deceleration end region A11 so as to include the deceleration end region A11.
  • The deceleration start area A12 may be set as the same area as the relaxation area A2 of FIG. 12, or as a different area.
  • When the customer service robot 1 enters the deceleration start area A12, the monitoring unit 112 starts decelerating the angular velocity.
  • The deceleration of the angular velocity is performed so as to cancel the angular velocity represented by the velocity information output by the trajectory tracking model, so that the customer service robot 1 approaches the target user while facing the target user.
  • the monitoring unit 112 outputs speed information including the adjusted angular velocity information to the movement control unit 108.
  • the speed information output by the monitoring unit 112 includes information representing the adjusted angular velocity adjusted by the monitoring unit 112 and information representing the translational speed output by the trajectory tracking model of the AI trajectory tracking unit 111.
  • the monitoring unit 112 has a function as an output adjusting unit that monitors the output of the trajectory tracking model and adjusts the speed information output from the trajectory tracking model according to the situation of the customer service robot 1.
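  • A sketch of this output adjustment, again with assumed radii and coefficient, could look as follows: the angular velocity is damped between the deceleration start area A12 and the deceleration end area A11, and canceled inside A11, while the translational speed from the model is passed through.

```python
import math

DECEL_END_RADIUS = 0.5     # deceleration end area A11 radius [m] (assumed value)
DECEL_START_RADIUS = 1.5   # deceleration start area A12 radius [m] (assumed value)
ANGULAR_DECEL_COEFF = 0.5  # coefficient multiplied onto the angular velocity (< 1)

def monitor_output(self_position, goal_position, v, w):
    """Adjust the (v, w) output of the trajectory tracking model according
    to the distance from the self-position to the goal position."""
    dist = math.dist(self_position, goal_position)
    if dist <= DECEL_END_RADIUS:
        return v, 0.0                       # A11: go straight, facing the target
    if dist <= DECEL_START_RADIUS:
        return v, w * ANGULAR_DECEL_COEFF   # A12: damp turning away from the target
    return v, w                             # outside A12: use the model output as is
```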
  • the movement control unit 108 controls the movement unit 52 according to the speed information supplied from the monitoring unit 112, and moves the customer service robot 1.
  • The movement of the customer service robot 1 is controlled by the movement control unit 108 so as to follow the adjusted angular velocity adjusted by the monitoring unit 112 and the translational speed output by the trajectory tracking model of the AI trajectory tracking unit 111.
  • In the control unit 51 configured as described above, the condition for determining that the robot has arrived at the goal is relaxed by the arrival determination unit 106 according to the situation of the customer service robot 1, such as the distance to the goal position.
  • The speed information included in the trajectory information input to the trajectory tracking model is adjusted by the arrival determination unit 106 according to the situation of the customer service robot 1.
  • The speed information output from the trajectory tracking model is adjusted by the monitoring unit 112 according to the situation of the customer service robot 1.
  • In step S1, the environment sensing unit 101 senses the environment around the customer service robot 1.
  • In step S2, the self-position identification unit 102 identifies the self-position of the customer service robot 1 based on the environmental information output from the environment sensing unit 101.
  • In step S3, the obstacle map generation unit 103 generates an obstacle map showing the arrangement status of obstacles existing in the surroundings, based on the environmental information output from the environment sensing unit 101.
  • In step S4, the application unit 104 searches for users in the vicinity.
  • In step S5, the application unit 104 determines one target user from the users found by the search.
  • In step S6, the trajectory planning unit 105 sets the goal position based on the user information output from the application unit 104, and plans the trajectory from the self-position to the goal position.
  • The trajectory information representing the planned trajectory is output to the arrival determination unit 106.
  • In step S7, the arrival determination processes #1 and #2 are performed by the arrival determination unit 106. The arrival determination processes #1 and #2 determine whether or not the goal has been reached. When it is determined that the goal has been reached, the movement control process of FIG. 14 ends. Details of the arrival determination processes #1 and #2 will be described later with reference to the flowcharts of FIGS. 15 and 18, respectively.
  • In step S8, the trajectory tracking process is performed by the trajectory tracking unit 107. In the trajectory tracking process, the velocity information output from the trajectory tracking model is adjusted. The details of the trajectory tracking process will be described later with reference to the flowchart of FIG. 19.
  • In step S9, the movement control unit 108 controls the moving unit 52 according to the adjusted speed information to move the customer service robot 1. After that, the process returns to step S1 and the above processing is repeated. How these steps chain together is sketched below.
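  • The following loop sketches how steps S1 to S9 could chain together, reusing the hypothetical helpers above; the `robot` interface (sense, localize, build_obstacle_map, move, and an application unit) is assumed for illustration and is not defined in the patent.

```python
import time

def movement_control_loop(robot, model, determiner):
    """Sketch of steps S1-S9, chaining the hypothetical helpers above."""
    while True:
        env = robot.sense()                                # S1: environment sensing
        pose = robot.localize(env)                         # S2: self-position (x, y, theta)
        obstacle_map = robot.build_obstacle_map(env)       # S3: obstacle map
        goal = robot.application.pick_target(env)          # S4-S5: search users, pick target
        _, waypoints = plan_straight_trajectory(pose[:2], goal)  # S6: plan trajectory
        waypoints, arrived = determiner.adjust(            # S7: arrival determination /
            pose[:2], goal, waypoints, time.monotonic())   #     input adjustment
        if arrived:
            break                                          # stop; start the interaction
        v, w = model.infer(pose, obstacle_map,             # S8: inference by the model
                           [(wp.x, wp.y, wp.v, wp.w) for wp in waypoints])
        v, w = monitor_output(pose[:2], goal, v, w)        # S8: output adjustment
        robot.move(v, w)                                   # S9: drive the moving unit
```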
  • the arrival determination process # 1 shown in FIG. 15 and the arrival determination process # 2, which will be described later with reference to FIG. 18, are performed in parallel by the arrival determination unit 106.
  • In step S21, the arrival determination unit 106 acquires the own machine information output from the self-position identification unit 102.
  • In step S22, the arrival determination unit 106 determines whether or not the customer service robot 1 is in the relaxation area A2 (FIG. 12) based on the self-position information included in the own machine information.
  • When it is determined in step S22 that the customer service robot 1 is not in the relaxation area A2, the arrival determination unit 106 outputs, in step S23, the trajectory information including the translational speed information before adjustment to the trajectory tracking unit 107.
  • The trajectory information including the non-decelerated translational speed information is output to the trajectory tracking unit 107 when, for example, it is determined that the customer service robot 1 is not in the relaxation area A2. After that, the process returns to step S21, and the above processing is repeated.
  • When it is determined in step S22 that the customer service robot 1 is in the relaxation area A2, the arrival determination unit 106 determines, in step S24, whether or not a certain time has elapsed since the customer service robot 1 entered the relaxation area A2.
  • When it is determined in step S24 that the certain time has not elapsed since the customer service robot 1 entered the relaxation area A2, the trajectory information including the translational speed information before adjustment is output in step S23, and the processing from step S21 onward is repeated.
  • When it is determined in step S24 that the certain time has elapsed since the customer service robot 1 entered the relaxation area A2, the arrival determination unit 106 relaxes the goal condition in step S25.
  • In step S26, the arrival determination unit 106 starts decelerating the translational speed.
  • the translational speed is decelerated by multiplying the translational speed by a predetermined coefficient (deceleration coefficient) less than 1.
  • FIG. 17 is a diagram showing an example of deceleration of translational speed.
  • The deceleration is applied, for example, to the translational speed at all positions on the trajectory. By multiplying the translational speed at every position by the same deceleration coefficient, the entire translational speed from the position in the relaxation area A2 to the position of the target user U1 is decelerated, as the small numeric sketch below illustrates.
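  • For a feel of this deceleration, a small numeric sketch (coefficient and threshold values assumed): because the same coefficient is applied once per pass through the loop, the commanded speed decays geometrically instead of dropping to zero at once.

```python
v, coeff, threshold, cycles = 0.5, 0.8, 0.05, 0  # assumed values; v in m/s
while v > threshold:        # "sufficiently decelerated" check (step S29)
    v *= coeff              # deceleration started in step S26
    cycles += 1
print(cycles, round(v, 3))  # -> 11 cycles, v = 0.043 m/s
```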
  • In step S27, the arrival determination unit 106 outputs the trajectory information including the adjusted translational speed information to the trajectory tracking unit 107.
  • In step S28, the arrival determination unit 106 detects the translational speed of the customer service robot 1 based on the information included in the own machine information.
  • In step S29, the arrival determination unit 106 determines whether or not the translational speed has been sufficiently decelerated.
  • If it is determined in step S29 that the translational speed has not been sufficiently decelerated, the process returns to step S21 and the above processing is repeated.
  • When it is determined in step S29 that the translational speed has been sufficiently decelerated, the arrival determination unit 106 determines in step S30 that the goal has been reached. After that, the customer service robot 1 stops in the vicinity of the target user and performs an operation for interaction.
  • Once the goal condition is relaxed, the velocity information obtained by decelerating the translational speed is input to the trajectory tracking model.
  • As a result, the translational speed output from the trajectory tracking model is also decelerated, and movement that approaches the target user while decelerating is realized.
  • In this way, even when the movement of the customer service robot 1 is controlled using the trajectory tracking model, the designer can prevent movement that ends in a sudden stop in the vicinity of the target user.
  • In step S41, the arrival determination unit 106 acquires the own machine information output from the self-position identification unit 102.
  • In step S42, the arrival determination unit 106 determines whether or not the customer service robot 1 is in the goal area A1 based on the self-position information included in the own machine information.
  • When it is determined that the customer service robot 1 is in the goal area A1, the arrival determination unit 106 determines, in step S43, whether or not the translational speed of the customer service robot 1 has been sufficiently decelerated.
  • If it is determined in step S43 that the translational speed has not been sufficiently decelerated, the process returns to step S41 and the above processing is repeated. Similarly, when it is determined in step S42 that the customer service robot 1 is not in the goal area A1, the process returns to step S41 and the above processing is repeated.
  • When it is determined in step S43 that the translational speed has been sufficiently decelerated, the arrival determination unit 106 determines in step S44 that the goal has been reached. After that, the customer service robot 1 stops in the vicinity of the target user and performs an operation for interaction.
  • In step S51, the trajectory tracking unit 107 acquires the own machine information output from the self-position identification unit 102.
  • In step S52, the AI trajectory tracking unit 111 inputs the self-position information included in the own machine information, the obstacle map information, and the trajectory information into the trajectory tracking model, and infers the speed.
  • The velocity information representing the inference result output from the trajectory tracking model is output to the monitoring unit 112.
  • In step S53, the monitoring unit 112 determines whether or not the customer service robot 1 is in the deceleration start area A12 based on the self-position information.
  • When it is determined in step S53 that the customer service robot 1 is not in the deceleration start area A12, the monitoring unit 112 outputs, in step S54, the velocity information output from the trajectory tracking model to the movement control unit 108 as it is.
  • When it is determined in step S53 that the customer service robot 1 is in the deceleration start area A12, the monitoring unit 112 starts, in step S55, decelerating the angular velocity output from the trajectory tracking model.
  • The angular velocity is adjusted by multiplying it by a predetermined coefficient (deceleration coefficient) less than 1.
  • In step S56, the monitoring unit 112 outputs the velocity information including the adjusted angular velocity information to the movement control unit 108.
  • The velocity information output from the monitoring unit 112 includes information on the adjusted angular velocity and information on the translational speed output from the trajectory tracking model.
  • In step S57, the monitoring unit 112 determines whether or not the customer service robot 1 is in the deceleration end area A11 based on the self-position information.
  • If it is determined in step S57 that the customer service robot 1 is not in the deceleration end area A11, the process returns to step S53 and the above processing is repeated.
  • When it is determined in step S57 that the customer service robot 1 is in the deceleration end area A11, the monitoring unit 112 outputs, in step S58, velocity information with the angular velocity set to 0 to the movement control unit 108. After that, the process returns to step S8 of FIG. 14, and the subsequent processing is performed.
  • That is, in the deceleration end area A11, velocity information that cancels the angular velocity output by the trajectory tracking model is output to the movement control unit 108.
  • By adjusting the angular velocity so as to cancel the angular velocity output by the trajectory tracking model, it is possible to prevent the customer service robot 1 from moving so as to avoid the target user, and movement that approaches the target user while facing the target user is thereby realized.
  • In the deceleration end area A11, the angular velocity is set to 0, and movement that goes straight toward the goal position is realized.
  • Since the translational speed is also adjusted by the arrival determination unit 106, both the translational speed and the angular velocity are decelerated, and movement that finally stops while facing the target user is realized.
  • In this way, even when the movement of the customer service robot 1 is controlled using the trajectory tracking model, the designer can prevent the robot from moving so as to avoid the target user, and can make it stop while facing the user.
  • Moreover, the movement of the customer service robot 1 can be kept within the range of the specifications without performing re-learning.
  • At least a part of the configuration of the control unit 51 shown in FIG. 11 may be realized in the control device 71 of FIG. In this case, the control device 71 transmits a command for controlling the movement to each customer service robot 1. Each customer service robot 1 moves according to the control by the control device 71.
  • In the above, the goal position was set targeting the user to be asked to answer the questionnaire, but other positions may be set as the goal position.
  • In the above, velocity information representing the translational speed and the angular velocity is output from the trajectory tracking model, but various information (output information) related to movement that realizes following the trajectory while avoiding obstacles may be output. For example, information indicating the acceleration and the moving direction, or information representing only the moving direction, may be output from the trajectory tracking model as the output information.
  • The technique of adjusting the input to the trajectory tracking model and the output of the trajectory tracking model and controlling the robot's behavior using the adjusted output can also be applied when controlling various behaviors of robots other than movement.
  • In the above, the AI of the robot is assumed to be realized by an inference model of a neural network generated by machine learning, but it may be realized by various inference models that take predetermined information as input and obtain a predetermined output.
  • FIG. 21 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by a program.
  • When the movement of the customer service robot 1 is controlled by the control device 71, the control device 71 is configured by a computer as shown in FIG. 21.
  • A CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to each other by a bus 1004.
  • An input / output interface 1005 is further connected to the bus 1004.
  • An input unit 1006 including a keyboard and a mouse, and an output unit 1007 including a display and a speaker are connected to the input / output interface 1005.
  • the input / output interface 1005 is connected to a storage unit 1008 including a hard disk and a non-volatile memory, a communication unit 1009 including a network interface, and a drive 1010 for driving the removable media 1011.
  • The CPU 1001 performs the above-described series of processes by, for example, loading the program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
  • The program executed by the CPU 1001 is recorded on the removable media 1011, or provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and installed in the storage unit 1008.
  • The program executed by the computer may be a program in which the processes are performed in chronological order in the order described in this specification, or a program in which the processes are performed in parallel or at necessary timing, such as when a call is made.
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • this technology can have a cloud computing configuration in which one function is shared by a plurality of devices via a network and processed jointly.
  • each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
  • Further, when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared by a plurality of devices.
  • (1) A control device including: an input adjustment unit that adjusts, according to the situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model that outputs output information about movement following the trajectory; a generation unit that generates the output information by inputting the adjusted input information to the inference model; and a movement control unit that controls the movement of the moving body based on the output information generated using the inference model.
  • (2) The control device according to (1), further including a trajectory planning unit that plans the trajectory from the position of the moving body to a goal position.
  • (3) The control device according to (2), wherein the trajectory planning unit plans the trajectory connecting the position of the moving body and the goal position with a straight line.
  • (4) The control device according to (2) or (3), wherein the goal position is the position of a target person.
  • (5) The control device according to any one of (2) to (4), wherein the input adjustment unit adjusts the input information according to the distance to the goal position.
  • (6) The control device according to (5), wherein the information about the trajectory includes the translational speed of the moving body and the angular velocity of the moving body, and the input adjustment unit adjusts the translational speed of the moving body.
  • (7) The control device according to (6), wherein the input adjustment unit decelerates the translational speed of the moving body.
  • (8) The control device according to any one of (2) to (7), wherein the input adjustment unit determines that the moving body has arrived at the goal, and inputs the input information to the inference model.
  • (9) The control device according to any one of (2) to (8), further including an output adjustment unit that adjusts the output information generated using the inference model according to the distance to the goal position.
  • (10) The control device according to (9), wherein the output information includes the translational speed of the moving body and the angular velocity of the moving body, and the output adjustment unit adjusts the angular velocity of the moving body.
  • (11) The control device according to (10), wherein the output adjustment unit starts deceleration of the angular velocity of the moving body when the moving body enters the range of a first distance, and adjusts the angular velocity of the moving body to 0 when the moving body enters the range of a second distance set inside the range of the first distance.
  • (12) The control device according to any one of (1) to (11), wherein the inference model is a model generated by machine learning using at least the information about the trajectory and the information about the obstacle, and outputs the output information for avoiding the obstacle.
  • (13) A control method in which a control device: adjusts, according to the situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model that outputs output information about movement following the trajectory; generates the output information by inputting the adjusted input information to the inference model; and controls the movement of the moving body based on the generated output information.
  • (14) A program for causing a computer to execute processing of: adjusting, according to the situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model that outputs output information about movement following the trajectory; generating the output information by inputting the adjusted input information to the inference model; and controlling the movement of the moving body based on the generated output information.
  • 1-1 to 1-3 customer service robot, 11 housing, 12 top plate, 13 data processing terminal, 21 main body, 22-1 to 22-4 panel, 23 depth camera, 24 LiDAR, 25 support arm, 51 control unit, 52 moving unit, 53 elevating control unit, 54 camera, 55 sensor, 56 communication unit, 57 power supply unit, 71 control device, 101 environment sensing unit, 102 self-position identification unit, 103 obstacle map generation unit, 104 application unit, 105 trajectory planning unit, 106 arrival determination unit, 107 trajectory tracking unit, 108 movement control unit, 111 AI trajectory tracking unit, 112 monitoring unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The present technique relates to a control device, a control method, and a program that allow a moving body to move within a range of predetermined specifications when an AI is used to realize the movement of the moving body. The control device to which the present technique is applied takes, as an input, input information including at least information related to a trajectory and information related to an obstacle, and adjusts, depending on the situation of a moving body, the input information to be input to an inference model that outputs output information related to movement following the trajectory. The control device inputs the adjusted input information to the inference model to generate the output information, and controls the movement of the moving body on the basis of the output information generated using the inference model. The present technique is applicable to a robot capable of movement.

Description

Control device, control method, and program
 The present technology relates to a control device, a control method, and a program, and in particular to a control device, a control method, and a program that, when the movement of a moving body is realized using AI, allow the movement of the moving body to be performed within a predetermined specification.
 Conventionally, there are moving bodies that move autonomously by creating an environment map through sensing of the surrounding environment. Such moving bodies include automobiles, robots, and airplanes.
Japanese Unexamined Patent Publication No. 2013-31897; Japanese Unexamined Patent Publication No. 2013-22705; Japanese Unexamined Patent Publication No. 2012-236244
 Among autonomously moving bodies, there are those equipped with so-called AI (Artificial Intelligence), which realizes movement using a neural network generated by machine learning such as deep learning.
 It is conceivable to control the movement of a moving body using a neural network that takes information on the surrounding environment as input and outputs information indicating how to move. In this case, since the internal processing of the neural network is, so to speak, a black box, there is a risk that movement as expected by the designer cannot be guaranteed.
 The present technology was devised in view of such a situation, and makes it possible, when the movement of a moving body is realized using AI, for the movement of the moving body to be performed within a predetermined specification.
 The control device of one aspect of the present technology includes: an input adjustment unit that adjusts, according to the situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model that outputs output information about movement following the trajectory; a generation unit that generates the output information using the inference model; and a movement control unit that controls the movement of the moving body based on the output information generated using the inference model.
 In one aspect of the present technology, input information including at least information about a trajectory and information about an obstacle, which is input to an inference model that outputs output information about movement following the trajectory, is adjusted according to the situation of the moving body; the output information is generated using the inference model; and the movement of the moving body is controlled based on the generated output information.
FIG. 1 is a diagram showing a usage state of a customer service system according to an embodiment of the present technology.
FIG. 2 is a diagram showing an example of the posture of the customer service robot.
FIG. 3 is an enlarged view of the customer service robot at the time of interaction.
FIG. 4 is an exploded view of the housing.
FIG. 5 is a diagram showing an example of a trajectory plan.
FIG. 6 is a diagram showing an example of trajectory correction.
FIG. 7 is a diagram showing an example of the trajectory tracking model of the customer service robot.
FIG. 8 is a diagram showing an example of movement of the customer service robot.
FIG. 9 is a diagram showing an example of adjustment of the movement of the customer service robot.
FIG. 10 is a block diagram showing a configuration example of the customer service system.
FIG. 11 is a block diagram showing a functional configuration example of the control unit.
FIG. 12 is a diagram showing an example of areas set in the vicinity of the goal position.
FIG. 13 is a diagram showing an example of areas set in the vicinity of the goal position.
FIG. 14 is a flowchart explaining the movement control process.
FIG. 15 is a flowchart explaining the arrival determination process #1 performed in step S7 of FIG. 14.
FIG. 16 is a diagram showing the customer service robot in the relaxation area.
FIG. 17 is a diagram showing an example of deceleration of the translational speed.
FIG. 18 is a flowchart explaining the arrival determination process #2 performed in step S7 of FIG. 14.
FIG. 19 is a flowchart explaining the trajectory tracking process performed in step S8 of FIG. 14.
FIG. 20 is a diagram showing the customer service robot in the deceleration start area.
FIG. 21 is a block diagram showing a configuration example of a computer.
 Hereinafter, modes for implementing the present technology will be described. The explanation will be given in the following order.
 1. Applications of the customer service system
 2. Movement of the customer service robot 1 using AI
 3. Configuration example of the customer service system
 4. Operation of the customer service robot 1
 5. Modification examples
<Applications of the customer service system>
FIG. 1 is a diagram showing a usage state of a customer service system according to an embodiment of the present technology.
The customer service system of FIG. 1 is used indoors, for example. People (users) are present in the space where the customer service system is installed.

As shown in FIG. 1, a plurality of cube-shaped customer service robots are provided in the room. In the example of FIG. 1, three customer service robots 1-1 to 1-3 are shown. When there is no need to distinguish the individual robots, they are collectively referred to as the customer service robot 1.

The customer service robot 1 is a moving body that travels on the floor. The bottom of the customer service robot 1 is provided with components such as tires used for its movement.

The customer service robot 1 has a function of searching for users in the room based on images captured by a camera and the like, approaching a user detected by the search, and serving that user. For example, the customer service robot 1 serves customers by asking them to answer a questionnaire. A customer service system using the customer service robot 1 is used, for example, in exhibition halls, concert venues, movie theaters, and amusement facilities.
FIG. 2 is a diagram showing examples of postures of the customer service robot 1.

The state of the customer service robot 1 shown in A of FIG. 2 is its state while moving. While moving to a destination, the customer service robot 1 travels in a substantially cubic form. The customer service robot 1 also takes the substantially cubic form shown in A of FIG. 2 while waiting at a predetermined place without serving customers.

The state of the customer service robot 1 shown in B of FIG. 2 is its state during interaction, that is, while serving the target user. When serving a customer, the customer service robot 1 controls its posture so that its top plate is raised, making it easier to work on the top plate. The customer service robot 1 is provided with an arm unit for raising and lowering the top plate.

FIG. 3 is an enlarged view of the customer service robot 1 during interaction.

As indicated by the broken line, the top plate 12 has a built-in data processing terminal 13, such as a tablet terminal with a touch-panel display. During interaction, text and images forming a questionnaire are displayed on the display provided in the area indicated by the broken line. The user inputs data such as answers to the questionnaire, for example by operating buttons displayed on the display of the data processing terminal 13 with a finger.

In this way, the top plate 12 is used as a desk when the user performs work such as answering the questionnaire.

When the questionnaire is finished, the customer service robot 1 lowers the top plate 12 to close the upper surface of the housing 11, returns to the simple box-like state shown in A of FIG. 2, and goes back to its home position.

Thus, the customer service system of FIG. 1 is a system in which the customer service robot 1, which blends into the space like a mere box, approaches a user and conducts a questionnaire, changing its posture as if making a request to the user.

A user who sees the top plate 12 rise on the customer service robot 1 that has moved nearby can intuitively understand that they are being asked to answer a questionnaire. The user can also answer the questionnaire as a form of communication with the customer service robot 1.
FIG. 4 is an exploded view of the housing 11.

As shown in FIG. 4, panels 22-1 to 22-4 are attached to the side surfaces of the box-shaped main body 21. The panels 22-1 to 22-4 are, for example, resin panels that serve as half mirrors.

A depth camera 23 is provided in the upper front part of the main body 21. The depth camera 23 captures images through the panel 22-1 attached to the front. A LiDAR 24 is provided in the lower front part of the main body 21.

A columnar support arm 25 is provided on the upper surface of the main body 21. The raising and lowering of the top plate 12 fixed to the upper end of the support arm 25 is controlled by extending and retracting the support arm 25 or moving it vertically. Inside the main body 21, drive components such as motors and gears for extending, retracting, and vertically moving the support arm 25 are provided.

Inside the main body 21, a computer that performs various kinds of processing, a movement mechanism such as tires, a power supply, and other components are also provided.

Each customer service robot 1 shown in FIG. 1 has the configuration described above.
<Movement of the customer service robot 1 using AI>
FIG. 5 is a diagram showing an example of trajectory planning.
The customer service robot 1 moves along a trajectory (path) that it has planned itself. As indicated by the broken-line arrow in FIG. 5, trajectory planning by the customer service robot 1 is performed by connecting, with a straight line, position P1, which is the robot's own position, and position P2, which is the position of the target user U1, and setting coordinates and a speed for each position on the trajectory between positions P1 and P2.

The coordinates are set with reference to, for example, a predetermined position in the space where the customer service robot 1 is deployed. The speed is the moving speed of the customer service robot 1 and consists of a translational speed and an angular speed.

Although a trajectory composed of straight lines is planned here, a trajectory combining straight lines and curves, or a trajectory composed only of curves, may be planned instead.
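As a concrete illustration of the straight-line planning just described, the following is a minimal sketch assuming a 2D room frame; the waypoint field names, segment count, and cruise speed are illustrative assumptions, not a data layout prescribed by the present technology.

```python
import numpy as np

def plan_straight_trajectory(start_xy, goal_xy, num_segments=20, cruise_speed=0.5):
    """Connect P1 and P2 with a straight line and attach a coordinate,
    a translational speed, and an angular speed to each waypoint."""
    start = np.asarray(start_xy, dtype=float)
    goal = np.asarray(goal_xy, dtype=float)
    waypoints = []
    for i in range(num_segments + 1):
        t = i / num_segments
        waypoints.append({
            "xy": (1.0 - t) * start + t * goal,   # linear interpolation P1 -> P2
            "translational_speed": cruise_speed,  # constant cruise speed [m/s]
            "angular_speed": 0.0,                 # straight line: no rotation
        })
    return waypoints

# Example: robot at P1 = (0, 0), target user at P2 = (4, 3)
trajectory = plan_straight_trajectory((0.0, 0.0), (4.0, 3.0))
```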
After the target user U1 is determined based on images captured by the camera and the like, trajectory planning is repeated, for example, without changing the target. That is, the customer service robot 1 keeps recognizing the user U1 based on camera images and the like and plans its trajectory so as to follow the user U1. In this way, a trajectory, once planned, is modified as appropriate according to the situation at the time.

The space in which the customer service robot 1 is deployed contains a plurality of users other than the target user U1. The trajectory is also modified when a user other than the target user U1 is detected as an obstacle.

FIG. 6 is a diagram showing an example of trajectory modification.

The situation shown in FIG. 6 is one in which the robot has started moving along the trajectory shown in FIG. 5 and a user who constitutes an obstacle has been detected at position P11 on the trajectory. Position P11 is closer than position P1 to position P2, the position of the target user.

As indicated by the dash-dot line in FIG. 6, when an obstacle is on the trajectory, the trajectory is modified so as to avoid the obstacle. In the example of FIG. 6, the robot advances along the trajectory indicated by the broken line up to position P12, where avoidance of the obstacle begins. After avoiding the obstacle and reaching position P13 on the original trajectory, the robot moves along the original trajectory indicated by the broken line.

In this way, although the customer service robot 1 temporarily departs from the trajectory to avoid an obstacle when one is present, it moves so as to follow the original trajectory again after the obstacle has been avoided.
The movement of the customer service robot 1 following the trajectory is performed using AI (Artificial Intelligence). The customer service robot 1 is a device equipped with AI.

FIG. 7 is a diagram showing an example of the trajectory-following model of the customer service robot 1.

The AI mounted on the customer service robot 1 is realized by the trajectory-following model shown in FIG. 7. The trajectory-following model is an inference model composed of a neural network generated by machine learning such as deep learning.
As shown in FIG. 7, the trajectory-following model takes self-position information, obstacle map information, and trajectory information as inputs, and outputs speed information.

The self-position information input to the trajectory-following model represents the position of the customer service robot 1 within the space where it is deployed.

The obstacle map information represents a map of the space where the customer service robot 1 is deployed, including information such as the positions, sizes, and shapes of obstacles. The obstacle map information is generated based on, for example, a distance image generated by the depth camera 23 and distance information to objects measured by the LiDAR 24.

The trajectory information represents the trajectory planned by the customer service robot 1 itself.

The speed information output by the trajectory-following model, in turn, represents the speed of the customer service robot 1. It expresses the translational speed and the angular speed at each position of the customer service robot 1.
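The input/output contract of the model can be summarized in code. The sketch below is interface-level only: the forward pass is a placeholder, since the actual mapping is the product of a trained neural network and no concrete encoding is specified here.

```python
class TrajectoryFollowingModel:
    """Sketch of the trajectory-following model's interface.

    infer() takes self-position information, obstacle map information,
    and trajectory information, and returns speed information
    (translational speed and angular speed). The body is a stand-in
    for the trained network, not the real inference."""

    def infer(self, self_position, obstacle_map, trajectory):
        # self_position: (x, y, heading) in the room's reference frame
        # obstacle_map:  grid or list describing obstacle positions/shapes
        # trajectory:    waypoints with coordinates and speeds (see above)
        nearest = trajectory[0]  # placeholder: a real model learns this mapping
        return {
            "translational_speed": nearest["translational_speed"],
            "angular_speed": nearest["angular_speed"],
        }
```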
The trajectory-following model is trained by, for example, providing self-position information, obstacle map information, and trajectory information as inputs, together with speed information for following the trajectory and for avoiding obstacles on the trajectory. That is, the trajectory-following model is trained with trajectory following and obstacle avoidance as evaluation items.

By controlling the speed of the customer service robot 1 based on the speed information output from the trajectory-following model, the movement described with reference to FIG. 6, following the trajectory while avoiding obstacles, is realized.

It is difficult for a person to design a program that moves while avoiding obstacles in complex situations such as crowded spaces. Using the trajectory-following model makes it possible to follow a trajectory while avoiding obstacles even in such complex situations.

In this way, the movement of the customer service robot 1 is controlled by planning a trajectory to the position of the target user, inputting the planned trajectory information into the trajectory-following model, and following the speed information obtained as output.

Although the trajectory planning itself is not a black box, the way the actual movement is determined, including obstacle avoidance, relies on the trajectory-following model and is therefore, so to speak, a black box.

If movement were controlled using the output of the trajectory-following model as-is, the customer service robot 1 might not move as the designer expects, because it is not known what kind of output the trajectory-following model will produce.
FIG. 8 is a diagram showing an example of the movement of the customer service robot 1.

In FIG. 8, the user U1 shown in color is the target user, while the users U11 and U12 are non-target users. Since the users U11 and U12 are treated as obstacles, the customer service robot 1 moves so as to avoid them.

If movement were controlled using the output of the trajectory-following model as-is, the customer service robot 1 would avoid not only the non-target users but also the target user U1 as an obstacle, as shown in A of FIG. 8. As described above, the trajectory-following model is trained with trajectory following and obstacle avoidance as its evaluation items.

Further, as shown in B of FIG. 8, when the target user U1 approaches, the customer service robot 1 cannot reach the goal position and moves as if straying (wandering) around near the goal position.
Therefore, in the customer service robot 1, as shown in A of FIG. 9, the angular speed is decelerated when the robot comes close to the target user U1. In A of FIG. 9, the range indicated by the broken-line circle is preset as the range in which deceleration of the angular speed starts.

This suppresses behavior such as avoiding the target user U1 as shown in A of FIG. 8.

Further, as shown in B of FIG. 9, when the customer service robot 1 stays within a region near the target user U1 for a certain period of time, it is determined to have arrived at the goal. When it is determined to have arrived at the goal, the movement of the customer service robot 1 ends and an operation for interaction is performed. In B of FIG. 9, the range indicated by the broken-line circle is preset, centered on the goal position, as the range used to determine arrival at the goal.

This suppresses straying around near the goal position as shown in B of FIG. 8.

Movement as shown in FIG. 9 is realized not by using the speed information output from the trajectory-following model as-is, but by adjusting that speed information as appropriate and controlling the movement of the customer service robot 1 using the adjusted speed information.

In addition, the input to the trajectory-following model is adjusted, and inference by the trajectory-following model is performed using the adjusted input.

A series of processes for controlling the movement of the customer service robot 1 according to such specifications will be described later with reference to flowcharts.
<Configuration example of the customer service system>
FIG. 10 is a block diagram showing a configuration example of the customer service system.
As shown in FIG. 10, the customer service system includes the customer service robot 1 and a control device 71, which are connected via wireless communication.

The customer service robot 1 includes a control unit 51, a moving unit 52, an elevation control unit 53, a camera 54, a sensor 55, a communication unit 56, and a power supply unit 57. As described above, the data processing terminal 13 is built into the top plate 12 of the customer service robot 1.

The control unit 51 is composed of a computer. The control unit 51 executes a predetermined program on a CPU and controls the overall operation of the customer service robot 1. The control unit 51 functions as a control device that controls various operations of the customer service robot 1, including its movement as a moving body.

The moving unit 52 rotates the tires by driving motors and gears, realizing the movement of the customer service robot 1. The moving unit 52 functions as a unit that moves the customer service robot 1 while controlling the speed and direction of movement (translational speed and angular speed) under the control of the control unit 51.

The elevation control unit 53 controls the extension and retraction of the support arm 25 by driving motors and gears.

The camera 54 includes the depth camera 23 of FIG. 4 that captures distance images, an RGB camera that captures RGB images, an IR camera that captures IR images, and the like. Images captured by the camera 54 are output to the control unit 51.

The sensor 55 includes various sensors such as an acceleration sensor, a gyro sensor, a human presence sensor, an encoder that detects the amount of rotation of the tires provided in the moving unit 52, and the LiDAR 24. Information representing the sensing results of the sensor 55 is output to the control unit 51.

At least one of the camera 54 and the sensor 55 may be provided outside the customer service robot 1. In that case, images captured by the externally provided camera 54 and information representing the sensing results of the sensor 55 are transmitted to the customer service robot 1 via wireless communication.

The communication unit 56 performs wireless communication with the control device 71. For example, the communication unit 56 receives information transmitted from the control device 71 and outputs it to the control unit 51.

The power supply unit 57 has a battery and supplies power to each unit of the customer service robot 1.

The control device 71 is composed of a data processing device such as a PC. The control device 71 functions as a higher-level system that controls the behavior of each customer service robot 1. For example, the control device 71 instructs each customer service robot 1 to start searching for users.
FIG. 11 is a block diagram showing a functional configuration example of the control unit 51.

At least some of the functional units shown in FIG. 11 are realized by a predetermined program being executed by the CPU of the computer constituting the control unit 51.

The control unit 51 implements an environment sensing unit 101, a self-position identification unit 102, an obstacle map generation unit 103, an application unit 104, a trajectory planning unit 105, an arrival determination unit 106, a trajectory-following unit 107, and a movement control unit 108. The trajectory-following unit 107 is composed of an AI trajectory-following unit 111 and a monitoring unit 112.
The environment sensing unit 101 senses the environment around the customer service robot 1 by driving the camera 54 and the sensor 55. Environment information representing the sensing results is supplied to the self-position identification unit 102 and the obstacle map generation unit 103.

The environment information includes, for example, distance images captured by the depth camera 23 and distance information to objects measured by the LiDAR 24. The environment information may also include information representing the measurement results of the individual sensors constituting the sensor 55, such as information representing the amount of tire rotation detected by the encoder.

The self-position identification unit 102 identifies the self-position, which is the current position of the customer service robot 1, based on the environment information supplied from the environment sensing unit 101. Identification of the self-position is performed by dead reckoning or by star reckoning.

Dead reckoning is a method of estimating the self-position using the outputs of sensors inside the robot, such as axle encoders and an IMU (Inertial Measurement Unit). Star reckoning, on the other hand, is a method of estimating the self-position based on the state of the external world, such as marker recognition using the camera 54 or recognition using LiDAR SLAM. Dead reckoning and star reckoning may be switched according to the position of the customer service robot 1.
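For reference, dead reckoning from internal sensors can be sketched as a standard unicycle-model odometry update; the concrete kinematics below are an assumption, since only the technique itself is named here.

```python
import math

def dead_reckoning_step(pose, v, omega, dt):
    """Integrate one time step of odometry.

    pose:  (x, y, theta) current estimate
    v:     translational speed [m/s], e.g. from axle encoder counts
    omega: angular speed [rad/s], e.g. from an IMU gyro
    dt:    elapsed time [s]"""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)
```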
Own-device information, including self-position information representing the identification result, is supplied to the arrival determination unit 106 and the trajectory-following unit 107. The own-device information also includes, as appropriate, information representing the speed of the customer service robot 1.

The obstacle map generation unit 103 generates an obstacle map, a map representing the locations of surrounding obstacles, based on the environment information supplied from the environment sensing unit 101. The surrounding obstacles include users. Obstacle map information representing the obstacle map generated by the obstacle map generation unit 103 is supplied to the trajectory-following unit 107.

The application unit 104 searches for users and determines the target user. The application unit 104 outputs information on the target user to the trajectory planning unit 105 and instructs it to start moving. The information output to the trajectory planning unit 105 includes, for example, information on the distance and direction to the target user.

The trajectory planning unit 105 sets the position of the target user as the goal position and plans a trajectory to the goal position. As described above, trajectory planning is performed by connecting the position of the customer service robot 1 and the position of the target user with a straight line and setting coordinates and a speed for each position on the line. The trajectory planning unit 105 outputs trajectory information representing the planned trajectory to the arrival determination unit 106.

The arrival determination unit 106 determines whether the robot has arrived at the goal based on the self-position represented by the self-position information supplied from the self-position identification unit 102.

FIG. 12 is a diagram showing an example of regions set near the goal position.
As shown in FIG. 12, two regions, a goal region A1 and a relaxation region A2, are set around position P2, a position just in front of the target user.

The goal region A1 is a region such that, when the customer service robot 1 enters it, the robot is determined to have arrived at the goal. The arrival determination unit 106 therefore determines that the robot has arrived at the goal when the customer service robot 1 enters the goal region A1.

The relaxation region A2, on the other hand, is a region such that the robot is determined to have arrived at the goal when the customer service robot 1 stays within it for a certain period of time or longer. The relaxation region A2 is set wider than the goal region A1 so as to contain it.

When the customer service robot 1 enters the relaxation region A2, the arrival determination unit 106 starts counting the stay time. When the customer service robot 1 has stayed in the relaxation region A2 for a certain period of time or longer, the arrival determination unit 106 determines that the robot has arrived at the goal. The relaxation region A2 is thus a region in which the condition for determining arrival at the goal is relaxed.

The relaxation region A2 is also a region in which deceleration (reduction) of the translational speed begins when the customer service robot 1 enters it. The deceleration starts, for example, after a certain time has elapsed since the customer service robot 1 entered the relaxation region A2.

When a certain time has elapsed since the customer service robot 1 entered the relaxation region A2, the arrival determination unit 106 decelerates the translational speed represented by the trajectory information supplied from the trajectory planning unit 105 and outputs trajectory information containing the adjusted translational speed to the trajectory-following unit 107.
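The interplay of the two regions and this input-side deceleration might look like the following sketch. The radii, dwell time, and deceleration coefficient are invented values for illustration, and the caller is assumed to record when the robot entered the relaxation region A2.

```python
import math

GOAL_RADIUS = 0.3    # goal region A1 radius [m] (assumed value)
RELAX_RADIUS = 1.0   # relaxation region A2 radius [m] (assumed value)
DWELL_TIME = 3.0     # time in A2 before the goal condition is relaxed [s]
DECEL_COEFF = 0.8    # coefficient < 1 applied to translational speeds

def adjust_trajectory_input(robot_xy, goal_xy, trajectory, entered_a2_at, now):
    """Relax the goal condition and decelerate the trajectory near the goal.

    Returns (goal_condition_relaxed, trajectory). Hypothetical helper
    mirroring the arrival determination unit's input adjustment."""
    dist = math.dist(robot_xy, goal_xy)
    if dist <= RELAX_RADIUS and entered_a2_at is not None:
        if now - entered_a2_at >= DWELL_TIME:
            # In A2 long enough: decelerate every waypoint's translational
            # speed before the trajectory is fed to the model (step S26).
            for wp in trajectory:
                wp["translational_speed"] *= DECEL_COEFF
            return True, trajectory
    return False, trajectory
```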
In this way, the arrival determination unit 106 determines whether the robot has arrived at the goal, and also functions as an input adjustment unit that adjusts the trajectory information supplied from the trajectory planning unit 105, which serves as input to the trajectory-following model, according to the situation of the customer service robot 1.

The AI trajectory-following unit 111 of the trajectory-following unit 107 has the trajectory-following model described above. The AI trajectory-following unit 111 inputs into the trajectory-following model the self-position information included in the own-device information supplied from the self-position identification unit 102, the obstacle map information supplied from the obstacle map generation unit 103, and the trajectory information supplied from the arrival determination unit 106. The AI trajectory-following unit 111 outputs to the monitoring unit 112 the speed information that the trajectory-following model produces in response to these inputs.

The AI trajectory-following unit 111 functions as a generation unit that inputs the self-position information, the obstacle map information, and the trajectory information into the trajectory-following model as input information and uses the model to generate speed information as output information. Not all of the self-position information, obstacle map information, and trajectory information need be used as input information; only the obstacle map information and the trajectory information may be used, or information different from these may be used as input information.

The monitoring unit 112 adjusts the speed information supplied from the AI trajectory-following unit 111 as the output of the trajectory-following model. The adjustment is made according to the distance from the self-position, represented by the self-position information included in the own-device information supplied from the self-position identification unit 102, to the goal position. The speed information is adjusted, for example, by decelerating (reducing) the angular speed of the translational and angular speeds represented by the speed information supplied from the AI trajectory-following unit 111.
FIG. 13 is a diagram showing an example of regions set near the goal position.

As shown in FIG. 13, two regions, a deceleration end region A11 and a deceleration start region A12, are set around position P2, a position just in front of the target user.

The deceleration end region A11 is a region in which deceleration of the angular speed ends when the customer service robot 1 enters it.

When the customer service robot 1 enters the deceleration end region A11, the monitoring unit 112 ends the deceleration of the angular speed that began when the robot entered the deceleration start region A12. The deceleration of the angular speed is performed while the customer service robot 1 is outside the deceleration end region A11 and inside the deceleration start region A12.

When the deceleration of the angular speed ends, the angular speed of the customer service robot 1 becomes zero. The customer service robot 1 then travels straight through the deceleration end region A11 while remaining squarely facing the target user.

The deceleration start region A12, on the other hand, is a region in which deceleration of the angular speed begins when the customer service robot 1 enters it. The deceleration start region A12 is set wider than the deceleration end region A11 so as to contain it. The deceleration start region A12 may be set as the same region as the relaxation region A2 of FIG. 12 or as a different region.

When the customer service robot 1 enters the deceleration start region A12, the monitoring unit 112 starts decelerating the angular speed. The deceleration is performed so as to cancel the angular speed represented by the speed information output by the trajectory-following model, so that the customer service robot 1 approaches the target user while remaining squarely facing them.
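On the output side, the monitoring unit's angular-speed adjustment can be sketched as below; the region radii and the damping coefficient are assumed values, and the speed-information dictionary follows the earlier sketches.

```python
import math

DECEL_END_RADIUS = 0.5     # deceleration end region A11 radius [m] (assumed)
DECEL_START_RADIUS = 1.5   # deceleration start region A12 radius [m] (assumed)
ANGULAR_DECEL_COEFF = 0.5  # coefficient < 1 applied to the angular speed

def adjust_model_output(robot_xy, goal_xy, speed_info):
    """Damp the model's angular speed near the goal.

    Between A12 and A11 the angular speed is attenuated toward zero;
    inside A11 it is forced to zero so the robot travels straight
    while squarely facing the target user."""
    dist = math.dist(robot_xy, goal_xy)
    if dist <= DECEL_END_RADIUS:
        speed_info["angular_speed"] = 0.0
    elif dist <= DECEL_START_RADIUS:
        speed_info["angular_speed"] *= ANGULAR_DECEL_COEFF
    return speed_info
```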
The monitoring unit 112 outputs speed information including the adjusted angular speed to the movement control unit 108. The speed information output by the monitoring unit 112 includes information representing the angular speed adjusted by the monitoring unit 112 and information representing the translational speed output by the trajectory-following model of the AI trajectory-following unit 111.

In this way, the monitoring unit 112 functions as an output adjustment unit that monitors the output of the trajectory-following model and adjusts the speed information output from the model according to the situation of the customer service robot 1.

The movement control unit 108 controls the moving unit 52 according to the speed information supplied from the monitoring unit 112 to move the customer service robot 1. The movement control unit 108 controls the movement of the customer service robot 1 so that it moves according to the angular speed adjusted by the monitoring unit 112 and the translational speed output by the trajectory-following model of the AI trajectory-following unit 111.

With the configuration of the control unit 51 described above, the condition for determining arrival at the goal is relaxed by the arrival determination unit 106 according to the situation of the customer service robot 1, such as the distance to the goal position.

In the same configuration, the speed information serving as input to the trajectory-following model is adjusted by the arrival determination unit 106 according to the situation of the customer service robot 1.

Further, the speed information output by the trajectory-following model is adjusted by the monitoring unit 112 according to the situation of the customer service robot 1.
<Operation of the customer service robot 1>
Here, the operation of the customer service robot 1 having the configuration described above will be described.

・Movement control processing
First, the movement control processing of the customer service robot 1 will be described with reference to the flowchart of FIG. 14.

The processing of the steps shown in FIG. 14 is not only performed in the order shown but is also repeated as appropriate, in parallel with or before and after other processing. The same applies to the processing of the steps shown in the flowcharts of FIG. 15 and onward.
In step S1, the environment sensing unit 101 senses the environment around the customer service robot 1.

In step S2, the self-position identification unit 102 identifies the self-position of the customer service robot 1 based on the environment information output from the environment sensing unit 101.

In step S3, the obstacle map generation unit 103 generates an obstacle map representing the locations of surrounding obstacles based on the environment information output from the environment sensing unit 101.

In step S4, the application unit 104 searches for users in the vicinity.

In step S5, the application unit 104 determines one target user from among the users found by the search.

In step S6, the trajectory planning unit 105 sets the goal position based on the user information output from the application unit 104 and plans a trajectory from the self-position to the goal position. Trajectory information representing the planned trajectory is output to the arrival determination unit 106.

In step S7, arrival determination processes #1 and #2 are performed by the arrival determination unit 106. These processes determine whether the robot has arrived at the goal; when it is determined that the robot has arrived, the movement control processing of FIG. 14 ends. Details of arrival determination processes #1 and #2 will be described later with reference to the flowcharts of FIGS. 15 and 18, respectively.

In step S8, trajectory-following processing is performed by the trajectory-following unit 107. In this processing, the speed information output from the trajectory-following model is adjusted. Details of the trajectory-following processing will be described later with reference to the flowchart of FIG. 19.

In step S9, the movement control unit 108 controls the moving unit 52 according to the adjusted speed information to move the customer service robot 1. The processing then returns to step S1 and the above steps are repeated.
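Tying the steps together, one iteration of the flow of FIG. 14 can be sketched as a composition of callables. Every argument name below is an assumption standing in for the corresponding functional unit described above; the real units run repeatedly and partly in parallel rather than in a single linear pass.

```python
def movement_control_step(sense, localize, map_obstacles, pick_target,
                          plan, judge_arrival, infer, adjust_output, drive):
    """One pass of steps S1 to S9; returns True once the goal is reached."""
    env = sense()                                               # S1: sensing
    pose = localize(env)                                        # S2: self-position
    obstacles = map_obstacles(env)                              # S3: obstacle map
    goal_xy = pick_target(env)                                  # S4/S5: target user
    trajectory = plan(pose, goal_xy)                            # S6: plan trajectory
    arrived, trajectory = judge_arrival(pose, goal_xy, trajectory)  # S7
    if arrived:
        return True
    speed_info = infer(pose, obstacles, trajectory)             # S8: model inference
    speed_info = adjust_output(pose[:2], goal_xy, speed_info)   # S8: monitoring
    drive(speed_info)                                           # S9: move the robot
    return False
```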
・Arrival determination processing
Next, arrival determination process #1 performed in step S7 of FIG. 14 will be described with reference to the flowchart of FIG. 15.

Arrival determination process #1 shown in FIG. 15 and arrival determination process #2, described later with reference to FIG. 18, are performed in parallel by the arrival determination unit 106.
In step S21, the arrival determination unit 106 acquires the own-device information output from the self-position identification unit 102.

In step S22, the arrival determination unit 106 determines whether the customer service robot 1 is inside the relaxation region A2 (FIG. 12) based on the self-position information included in the own-device information.

If it is determined in step S22 that the customer service robot 1 is not inside the relaxation region A2, the arrival determination unit 106 outputs trajectory information containing the unadjusted translational speed to the trajectory-following unit 107 in step S23. Trajectory information containing the non-decelerated translational speed is output to the trajectory-following unit 107, for example, the first time it is determined that the customer service robot 1 is not inside the relaxation region A2. The processing then returns to step S21 and the above steps are repeated.

If it is determined in step S22 that the customer service robot 1 is inside the relaxation region A2, the arrival determination unit 106 determines in step S24 whether a certain time has elapsed since the customer service robot 1 entered the relaxation region A2.

If it is determined in step S24 that the certain time has not yet elapsed, the trajectory information containing the unadjusted translational speed is output in step S23, after which the processing from step S21 onward is repeated.

If it is determined in step S24 that the certain time has elapsed since the customer service robot 1 entered the relaxation region A2, the arrival determination unit 106 relaxes the goal condition in step S25.

For example, as shown in FIG. 16, the goal condition is relaxed so that the robot is determined to have arrived at the goal when the customer service robot 1 is inside the relaxation region A2 and the translational speed has been sufficiently decelerated within it.

In step S26, the arrival determination unit 106 starts decelerating the translational speed. The translational speed is decelerated, for example, by multiplying it by a predetermined coefficient (deceleration coefficient) less than 1.
FIG. 17 is a diagram showing an example of deceleration of the translational speed.

The deceleration is applied, for example, to the translational speeds at all positions on the trajectory. Multiplying them all by the same deceleration coefficient reduces the translational speed over the whole trajectory, from the position inside the relaxation region A2 to the position of the target user U1.

Returning to the description of FIG. 15, in step S27 the arrival determination unit 106 outputs trajectory information containing the adjusted translational speed to the trajectory-following unit 107.

In step S28, the arrival determination unit 106 detects the translational speed of the customer service robot 1 based on the information included in the own-device information.

In step S29, the arrival determination unit 106 determines whether the translational speed has been sufficiently decelerated.

If it is determined in step S29 that the translational speed has not been sufficiently decelerated, the processing returns to step S21 and the above steps are repeated.

If it is determined in step S29 that the translational speed has been sufficiently decelerated, the arrival determination unit 106 determines in step S30 that the robot has arrived at the goal. The customer service robot 1 then stops near the target user and performs an operation for interaction.

Through the above processing, even without entering the goal region A1, the robot is determined to have arrived at the goal when predetermined conditions are satisfied, such as having entered the relaxation region A2 and sufficiently decelerated its translational speed within it.

Further, when the robot is approaching the target user, speed information with a decelerated translational speed is input to the trajectory-following model. As the input translational speed decreases, the translational speed output from the model also decreases, so the robot approaches the target user while slowing down.

By setting, as a specification, the distance at which deceleration of the translational speed should begin, the designer can prevent movement such as stopping abruptly close to the target user, even when the movement of the customer service robot 1 is controlled using the trajectory-following model.
Next, arrival determination process #2 performed in step S7 of FIG. 14 will be described with reference to the flowchart of FIG. 18.

In step S41, the arrival determination unit 106 acquires the own-device information output from the self-position identification unit 102.

In step S42, the arrival determination unit 106 determines whether the customer service robot 1 is inside the goal region A1 based on the self-position information included in the own-device information.

If it is determined in step S42 that the customer service robot 1 is inside the goal region A1, the arrival determination unit 106 determines in step S43 whether the translational speed of the customer service robot 1 has been sufficiently decelerated.

If it is determined in step S43 that the translational speed has not been sufficiently decelerated, the processing returns to step S41 and the above steps are repeated. The same applies when it is determined in step S42 that the customer service robot 1 is not inside the goal region A1.

If it is determined in step S43 that the translational speed has been sufficiently decelerated, the arrival determination unit 106 determines in step S44 that the robot has arrived at the goal. The customer service robot 1 then stops near the target user and performs an operation for interaction.
・Trajectory-following processing
Next, the trajectory-following processing performed in step S8 of FIG. 14 will be described with reference to the flowchart of FIG. 19.

In step S51, the trajectory-following unit 107 acquires the own-device information output from the self-position identification unit 102.

In step S52, the AI trajectory-following unit 111 inputs the self-position information included in the own-device information, the obstacle map information, and the trajectory information into the trajectory-following model and performs speed inference. Speed information representing the inferred speed output from the trajectory-following model is output to the monitoring unit 112.
In step S53, the monitoring unit 112 determines whether the customer service robot 1 is inside the deceleration start region A12 based on the self-position information.

If it is determined in step S53 that the customer service robot 1 is not inside the deceleration start region A12, the monitoring unit 112 outputs the speed information output from the trajectory-following model to the movement control unit 108 as-is in step S54.

If it is determined in step S53 that the customer service robot 1 is inside the deceleration start region A12, the monitoring unit 112 starts decelerating the angular speed output from the trajectory-following model in step S55.

As shown in FIG. 20, the deceleration of the angular speed starts when the customer service robot 1 is inside the deceleration start region A12 and outside the deceleration end region A11. The angular speed is adjusted, for example, by multiplying it by a predetermined coefficient (deceleration coefficient) less than 1.

In step S56, the monitoring unit 112 outputs speed information including the adjusted angular speed to the movement control unit 108. The speed information output from the monitoring unit 112 includes the adjusted angular speed and the translational speed output from the trajectory-following model.

In step S57, the monitoring unit 112 determines whether the customer service robot 1 is inside the deceleration end region A11 based on the self-position information.

If it is determined in step S57 that the customer service robot 1 is not inside the deceleration end region A11, the processing returns to step S53 and the above steps are repeated.

If it is determined in step S57 that the customer service robot 1 is inside the deceleration end region A11, the monitoring unit 112 outputs speed information with the angular speed set to zero to the movement control unit 108 in step S58. The processing then returns to step S8 of FIG. 14 and the subsequent processing is performed.
Through the above processing, when the robot is approaching the target user, speed information in which the angular speed output by the trajectory-following model has been canceled is output to the movement control unit 108. Adjusting the angular speed so as to cancel the angular speed output by the trajectory-following model prevents movement that avoids the target user, so the robot approaches the target user while remaining squarely facing them.

Finally, the angular speed is set to zero, realizing movement that heads straight toward the goal position.

In addition to this adjustment of the angular speed, the translational speed is adjusted by the arrival determination unit 106. As a result, when approaching the target user, the robot decelerates both its translational speed and its angular speed and ultimately stops while squarely facing the user.

By setting, as a specification, the distance at which deceleration should begin, the designer can prevent movement that avoids the target user even when the movement of the customer service robot 1 is controlled using the trajectory-following model. The designer can also have the robot stop while squarely facing the user.

Through the series of processes described above, when the movement of the customer service robot 1 is realized using AI, the movement can be kept within a predetermined range of specifications.

Normally, when a robot's operation is controlled using AI and an operation the designer did not expect occurs, the trajectory-following model must be retrained. With the above processing, the movement of the customer service robot 1 can be kept within the specifications without retraining.
<Modification example>
 At least a part of the configuration of the control unit 51 shown in FIG. 11 may be realized in the control device 71 of FIG. 10. In this case, the control device 71 transmits a command for controlling movement to each customer service robot 1, and each customer service robot 1 moves under the control of the control device 71.
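 As an illustration of this modification, the division of labor might look like the following sketch, in which the control logic runs on the control device side and only velocity commands are sent to each robot. The command format (`robot_id`, `v`, `w`) and the function names are hypothetical, since the publication does not define a concrete command protocol.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class MoveCommand:
    robot_id: int  # which customer service robot the command addresses
    v: float       # translational velocity [m/s]
    w: float       # angular velocity [rad/s]

def control_cycle(robot_ids: Iterable[int],
                  plan_velocity: Callable[[int], Tuple[float, float]],
                  send: Callable[[MoveCommand], None]) -> None:
    """One control cycle on the control device side: compute a velocity
    for each robot and transmit it as a movement command."""
    for rid in robot_ids:
        v, w = plan_velocity(rid)
        send(MoveCommand(rid, v, w))

# Usage sketch: three robots, a dummy planner, and print() standing in
# for the transmission layer (e.g., the communication unit).
control_cycle([1, 2, 3], lambda rid: (0.3, 0.0), print)
```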
 Although the goal position has been described as being set with a user who is asked to answer a questionnaire as the target, another position may be set as the goal position.
 Although speed information representing the translational velocity and the angular velocity has been described as being output from the trajectory tracking model, various other information about movement (output information) that realizes movement following the trajectory while avoiding obstacles may be output. For example, information representing acceleration and a movement direction may be output from the trajectory tracking model as the output information, or information representing only a movement direction may be output as the output information.
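 The alternative forms of output information mentioned here could be modeled, for example, as follows; the type names are hypothetical and only illustrate that the trajectory tracking model's output is not limited to a velocity pair.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class VelocityOutput:
    v: float  # translational velocity [m/s]
    w: float  # angular velocity [rad/s]

@dataclass
class AccelDirectionOutput:
    accel: float      # acceleration [m/s^2]
    direction: float  # movement direction [rad]

@dataclass
class DirectionOnlyOutput:
    direction: float  # movement direction only [rad]

# Any of these forms can serve as the model's output information, as long as it
# realizes movement that follows the trajectory while avoiding obstacles.
ModelOutput = Union[VelocityOutput, AccelDirectionOutput, DirectionOnlyOutput]
```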
 The technique of adjusting the input to the trajectory tracking model and the output of the trajectory tracking model and controlling the behavior of the robot using the adjusted output is also applicable to controlling various robot behaviors other than movement.
 Although the AI of the robot has been described as being realized by an inference model of a neural network generated by machine learning, it may be realized by various inference models that take predetermined information as input and produce a predetermined output.
 Although the case where the movement of the customer service robot 1 is controlled using AI has been described, the technique described above is applicable to the movement of various moving bodies such as automobiles, airplanes, and robots of other shapes.
・Computer configuration example
 The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the program constituting the software is installed from a program recording medium onto a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like.
 FIG. 21 is a block diagram showing a configuration example of the hardware of a computer that executes the series of processes described above by means of a program.
 When the movement of the customer service robot 1 is controlled by the control device 71, the control device 71 is configured by a computer such as the one shown in FIG. 21.
 A CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to one another by a bus 1004.
 An input/output interface 1005 is further connected to the bus 1004. An input unit 1006 including a keyboard and a mouse and an output unit 1007 including a display and a speaker are connected to the input/output interface 1005. Also connected to the input/output interface 1005 are a storage unit 1008 including a hard disk or a non-volatile memory, a communication unit 1009 including a network interface, and a drive 1010 that drives removable media 1011.
 In the computer configured as described above, the CPU 1001 performs the series of processes described above by, for example, loading a program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
 The program executed by the CPU 1001 is provided, for example, recorded on the removable media 1011 or via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 1008.
 The program executed by the computer may be a program in which the processing is performed in time series in the order described in this specification, or a program in which the processing is performed in parallel or at a necessary timing such as when a call is made.
 In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
 The effects described in this specification are merely examples and are not limiting, and other effects may be obtained.
 The embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
 For example, the present technology can adopt a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
 In addition, each step described in the flowcharts above can be executed by one device or shared among a plurality of devices.
 Furthermore, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one device or shared among a plurality of devices.
・Example of combinations of configurations
 The present technology can also have the following configurations.
(1)
 A control device including:
 an input adjustment unit that adjusts, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
 a generation unit that generates the output information by inputting the adjusted input information to the inference model; and
 a movement control unit that controls movement of the moving body based on the output information generated using the inference model.
(2)
 The control device according to (1), further including a trajectory planning unit that plans the trajectory from a position of the moving body to a goal position.
(3)
 The control device according to (2), in which the trajectory planning unit plans the trajectory connecting the position of the moving body and the goal position with a straight line.
(4)
 The control device according to (2) or (3), in which the goal position is a position of a target person.
(5)
 The control device according to any one of (2) to (4), in which the input adjustment unit adjusts the input information according to a distance to the goal position.
(6)
 The control device according to (5), in which the information about the trajectory includes a translational velocity of the moving body and an angular velocity of the moving body, and the input adjustment unit adjusts the translational velocity of the moving body.
(7)
 The control device according to (6), in which the input adjustment unit reduces the translational velocity of the moving body.
(8)
 The control device according to any one of (2) to (7), in which, when the moving body has stayed within a range of a predetermined distance centered on the goal position for a certain period of time, the input adjustment unit determines that the moving body has arrived at the goal and stops inputting the input information to the inference model.
(9)
 The control device according to any one of (2) to (8), further including an output adjustment unit that adjusts the output information generated using the inference model according to the distance to the goal position.
(10)
 The control device according to (9), in which the output information includes the translational velocity of the moving body and the angular velocity of the moving body, and the output adjustment unit adjusts the angular velocity of the moving body.
(11)
 The control device according to (10), in which, when the moving body enters a range of a first distance centered on the goal position, the output adjustment unit starts decelerating the angular velocity of the moving body, and when the moving body enters a range of a second distance set inside the range of the first distance, the output adjustment unit adjusts the angular velocity of the moving body to 0.
(12)
 The control device according to any one of (1) to (11), in which the inference model is a model generated by machine learning using at least the information about the trajectory and the information about the obstacle, and outputs the output information for avoiding the obstacle.
(13)
 A control method including, by a control device:
 adjusting, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
 generating the output information by inputting the adjusted input information to the inference model; and
 controlling movement of the moving body based on the output information generated using the inference model.
(14)
 A program for causing a computer to execute processing of:
 adjusting, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
 generating the output information by inputting the adjusted input information to the inference model; and
 controlling movement of the moving body based on the output information generated using the inference model.
 1-1 to 1-3 customer service robot, 11 housing, 12 top plate, 13 data processing terminal, 21 main body, 22-1 to 22-4 panel, 23 depth camera, 24 LiDAR, 25 support arm, 51 control unit, 52 moving unit, 53 elevation control unit, 54 camera, 55 sensor, 56 communication unit, 57 power supply unit, 71 control device, 101 environment sensing unit, 102 self-position identification unit, 103 obstacle map generation unit, 104 application unit, 105 trajectory planning unit, 106 arrival determination unit, 107 trajectory tracking unit, 108 movement control unit, 111 AI trajectory tracking unit, 112 monitoring unit

Claims (14)

  1.  A control device comprising:
      an input adjustment unit that adjusts, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
      a generation unit that generates the output information by inputting the adjusted input information to the inference model; and
      a movement control unit that controls movement of the moving body based on the output information generated using the inference model.
  2.  The control device according to claim 1, further comprising a trajectory planning unit that plans the trajectory from a position of the moving body to a goal position.
  3.  The control device according to claim 2, wherein the trajectory planning unit plans the trajectory connecting the position of the moving body and the goal position with a straight line.
  4.  The control device according to claim 2, wherein the goal position is a position of a target person.
  5.  The control device according to claim 2, wherein the input adjustment unit adjusts the input information according to a distance to the goal position.
  6.  The control device according to claim 5, wherein the information about the trajectory includes a translational velocity of the moving body and an angular velocity of the moving body, and the input adjustment unit adjusts the translational velocity of the moving body.
  7.  The control device according to claim 6, wherein the input adjustment unit reduces the translational velocity of the moving body.
  8.  The control device according to claim 2, wherein, when the moving body has stayed within a range of a predetermined distance centered on the goal position for a certain period of time, the input adjustment unit determines that the moving body has arrived at the goal and stops inputting the input information to the inference model.
  9.  The control device according to claim 2, further comprising an output adjustment unit that adjusts the output information generated using the inference model according to the distance to the goal position.
  10.  The control device according to claim 9, wherein the output information includes the translational velocity of the moving body and the angular velocity of the moving body, and the output adjustment unit adjusts the angular velocity of the moving body.
  11.  The control device according to claim 10, wherein, when the moving body enters a range of a first distance centered on the goal position, the output adjustment unit starts decelerating the angular velocity of the moving body, and when the moving body enters a range of a second distance set inside the range of the first distance, the output adjustment unit adjusts the angular velocity of the moving body to 0.
  12.  The control device according to claim 1, wherein the inference model is a model generated by machine learning using at least the information about the trajectory and the information about the obstacle, and outputs the output information for avoiding the obstacle.
  13.  A control method comprising, by a control device:
      adjusting, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
      generating the output information by inputting the adjusted input information to the inference model; and
      controlling movement of the moving body based on the output information generated using the inference model.
  14.  A program for causing a computer to execute processing comprising:
      adjusting, according to a situation of a moving body, input information that includes at least information about a trajectory and information about an obstacle and that is input to an inference model which takes the input information as input and outputs output information about movement following the trajectory;
      generating the output information by inputting the adjusted input information to the inference model; and
      controlling movement of the moving body based on the output information generated using the inference model.
PCT/JP2020/012239 2019-04-02 2020-03-19 Control device, control method, and program WO2020203341A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019070641 2019-04-02
JP2019-070641 2019-04-02

Publications (1)

Publication Number Publication Date
WO2020203341A1 (en)

Family

ID=72667650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/012239 WO2020203341A1 (en) 2019-04-02 2020-03-19 Control device, control method, and program

Country Status (1)

Country Link
WO (1) WO2020203341A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0850548A (en) * 1994-08-05 1996-02-20 Nikon Corp Method and device for learning path
JPH08221124A (en) * 1995-02-09 1996-08-30 Mitsubishi Electric Corp Track type self-travelling vehicle device
JP2007249632A (en) * 2006-03-16 2007-09-27 Fujitsu Ltd Mobile robot moving autonomously under environment with obstruction, and control method for mobile robot
JP2014209293A (en) * 2013-04-16 2014-11-06 富士ゼロックス株式会社 Route searching device, self-propelled working apparatus, program, and recording medium
US20180110326A1 (en) * 2016-10-21 2018-04-26 Robotis, Inc. Movable table
WO2019167511A1 (en) * 2018-02-28 2019-09-06 ソニー株式会社 Mobile body control device and mobile body control method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20783263; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20783263; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP