WO2023079845A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2023079845A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
arbitrary
position information
mobile body
Application number
PCT/JP2022/034884
Other languages
French (fr)
Japanese (ja)
Inventor
陸也 江副
啓輔 前田
諒 渡辺
Original Assignee
Sony Group Corporation
Application filed by Sony Group Corporation
Publication of WO2023079845A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • The present disclosure relates to an information processing device, an information processing method, and a program.
  • Patent Document 1 discloses a technique in which a robot with a small dead-reckoning estimation error estimates the position of another robot with a large estimation error and provides the estimated position information to that other robot.
  • The present disclosure proposes a new and improved information processing device, information processing method, and program capable of estimating the position of a mobile body with higher accuracy.
  • The device includes an acquisition unit that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies, and an estimation unit that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the acquired pieces of position information.
  • Correspondingly, the program causes a computer to realize an acquisition function that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies, and an estimation function that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the acquired pieces of position information.
  • FIG. 1 is an explanatory diagram for explaining an example of an information processing system according to the present disclosure.
  • FIG. 2 is an explanatory diagram for explaining a functional configuration example of the information processing system according to the present disclosure.
  • FIG. 3 is an explanatory diagram for explaining an example of position estimation of a robot according to the present disclosure.
  • FIG. 4 is an explanatory diagram for explaining an overview of how an estimation unit according to the first embodiment estimates position information of a robot.
  • FIG. 5 is an explanatory diagram for explaining an example of operation processing for estimating position information of a robot by superimposing probability distributions according to the first embodiment.
  • FIG. 6 is an explanatory diagram for explaining an example of correcting the position information of the robot according to the second embodiment.
  • FIG. 7 is an explanatory diagram for explaining an example of operation processing for correcting the position information of the robot according to the second embodiment.
  • FIG. 8 is an explanatory diagram for describing a modification of the robot according to the present disclosure. FIG. 9 is a block diagram showing the hardware configuration of the server.
  • In the following description, robots are distinguished as needed, such as a first robot 10A and a second robot 10B.
  • When there is no need to distinguish between them, they are simply referred to as robots 10.
  • FIG. 1 is an explanatory diagram for explaining an example of an information processing system according to the present disclosure.
  • An information processing system according to the present disclosure includes a network 1, a first robot 10A, a second robot 10B, and a server 20.
  • The network 1 is a wired or wireless transmission path for information transmitted from devices connected to it.
  • For example, the network 1 may include public networks such as the Internet, telephone networks, and satellite communication networks, as well as various LANs (Local Area Networks) including Ethernet (registered trademark) and WANs (Wide Area Networks).
  • The network 1 may also include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
  • The first robot 10A and the second robot 10B are each connected to the server 20 via the network 1.
  • The robot 10 is an example of a mobile body and autonomously moves within a certain environment.
  • In the first embodiment described later, two robots 10, the first robot 10A and the second robot 10B, are described as an example of the plurality of robots 10, but there may be three or more robots 10. In the second embodiment and the modification, there may be only one robot 10.
  • The robot 10 estimates its own position or the positions of other robots 10 based on sensing information acquired by its sensors.
  • Alternatively, the robot 10 may transmit the sensing information acquired by a sensor to the server 20.
  • In this case, the server 20 that has received the sensing information may estimate the position of the robot 10 based on it. The details of the sensors included in the robot 10 will be described later.
  • The server 20 is an example of an information processing device, and acquires a plurality of pieces of position information of an arbitrary robot 10 based on sensing information obtained by the sensors of each of the plurality of robots 10.
  • For example, the server 20 acquires the position information of the first robot 10A and the position information of the second robot 10B based on the sensing information obtained by the sensors of the first robot 10A.
  • Likewise, the server 20 acquires the position information of the first robot 10A and the position information of the second robot 10B based on the sensing information obtained by the sensors of the second robot 10B.
  • The server 20 then estimates the position information of an arbitrary robot 10 by superimposing the probability distributions of the plurality of pieces of position information of that robot.
  • For example, the server 20 may estimate the position information of the first robot 10A by superimposing the probability distributions of the two pieces of position information of the first robot 10A acquired from the first robot 10A and the second robot 10B.
  • Similarly, the server 20 may estimate the position information of the second robot 10B by superimposing the probability distributions of the two pieces of position information of the second robot 10B acquired from the first robot 10A and the second robot 10B.
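  • As a one-dimensional illustration of this superposition (the publication does not state the formulas; this is the standard product-of-Gaussians result that superimposing two Gaussian estimates corresponds to): fusing $\mathcal{N}(\mu_1, \sigma_1^2)$ and $\mathcal{N}(\mu_2, \sigma_2^2)$ for the same position gives

    $$\mu_{\mathrm{fused}} = \frac{\sigma_2^2\,\mu_1 + \sigma_1^2\,\mu_2}{\sigma_1^2 + \sigma_2^2}, \qquad \sigma_{\mathrm{fused}}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2),$$

    so the fused estimate is at least as tight as the better input, which is why the superposition can shrink both robots' error circles.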
  • FIG. 2 is an explanatory diagram for explaining a functional configuration example of an information processing system according to the present disclosure.
  • The robot 10 includes a camera 110, an IMU (Inertial Measurement Unit) 120, a wheel encoder 130, a storage unit 140, a communication unit 150, and a control unit 160, as shown in FIG. 2.
  • The camera 110 is an example of an external sensor, and is a device that acquires sensing information, including images, by photographing.
  • For example, the camera 110 acquires sensing information including the distance from the camera 110 to another robot 10 or an Alvar marker by photographing that robot or marker.
  • To acquire the time at which the camera 110 captured an image, the camera 110 is preferably one without an infrared LED or an IR cut filter.
  • Note that the Alvar marker is an example of a feature; in the following description, the Alvar marker may be referred to simply as a marker.
  • The IMU 120 is an example of an internal sensor, and is a device that detects the inertial motion of the robot 10.
  • For example, the IMU 120 acquires sensing information including angular acceleration information of the wheels of the robot 10 as the robot 10 moves.
  • The wheel encoder 130 is an example of an internal sensor, and acquires sensing information including the vehicle speed pulses of the wheels of the robot 10 as the robot 10 moves.
  • The storage unit 140 holds software and various data.
  • For example, the storage unit 140 holds map information of the environment in which the robot 10 moves and information on the self-position of the robot 10.
  • The storage unit 140 also holds the position information of markers placed in the environment in which the robot 10 moves.
  • In addition, the storage unit 140 may hold information on a target route within the environment.
  • The communication unit 150 performs various communications with the communication unit 210 of the server 20. For example, the communication unit 150 transmits information on the self-position of the robot 10 estimated by the estimation unit 161 to the server 20. The communication unit 150 may also transmit the position information of other robots 10 estimated by the estimation unit 161 to the server 20.
  • The communication unit 150 may also transmit the various sensing information obtained by the camera 110, the IMU 120, and the wheel encoder 130 to the server 20.
  • The control unit 160 controls the overall operation of the robot 10.
  • As shown in FIG. 2, the control unit 160 according to the present disclosure includes an estimation unit 161.
  • The estimation unit 161 estimates the position of the robot 10 based on sensing information obtained by the sensors of the robot 10.
  • For example, the estimation unit 161 estimates the self-position of the robot 10 based on the angular velocity information obtained by the IMU 120 and the vehicle speed pulses obtained by the wheel encoder 130.
  • This method of estimating the self-position of the robot 10 using internal sensors is called dead reckoning.
  • The estimation unit 161 also estimates the self-position of the robot 10 based on the distance from the camera 110 to a marker, obtained by photographing the marker with the camera 110. Further, the estimation unit 161 may estimate the position of another robot 10 based on the distance from the camera 110 to that robot, obtained by photographing it with the camera 110. This method of estimating the position information of the robot 10 using external sensors is called star reckoning.
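  • The publication does not give the dead-reckoning equations; as a minimal sketch (Python, with assumed wheel radius, encoder resolution, and function names), integrating wheel-encoder pulses and an IMU yaw rate into a 2D pose could look like this:

```python
import math

WHEEL_RADIUS_M = 0.05     # assumed wheel radius
PULSES_PER_REV = 1024     # assumed encoder resolution

class DeadReckoning:
    """Integrates encoder distance and IMU yaw rate into a 2D pose."""

    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x, self.y, self.theta = x, y, theta

    def update(self, pulses, yaw_rate, dt):
        # Distance traveled, derived from the vehicle speed pulses.
        distance = 2.0 * math.pi * WHEEL_RADIUS_M * pulses / PULSES_PER_REV
        # Heading, integrated from the IMU's angular velocity.
        self.theta += yaw_rate * dt
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)
        return self.x, self.y, self.theta
```

  • Because each update only accumulates increments, slip and sensor noise also accumulate, which is why the error circle described below grows with travel distance.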
  • The information on the self-position of the robot 10 estimated by the estimation unit 161 and the position information of other robots 10 each have an error circle according to a probability distribution.
  • In the present disclosure, the probability distribution is, as an example, a Gaussian distribution.
  • The server 20 includes a communication unit 210 and a control unit 220, as shown in FIG. 2.
  • The communication unit 210 performs various communications with the robots 10.
  • The communication unit 210 is an example of an acquisition unit, and receives the position information of an arbitrary robot 10 from the plurality of robots 10.
  • For example, the communication unit 210 receives information on the self-position of the first robot 10A and the position information of the second robot 10B from the first robot 10A. Likewise, the communication unit 210 receives information on the self-position of the second robot 10B and the position information of the first robot 10A from the second robot 10B.
  • The communication unit 210 may also receive various sensing information from the plurality of robots 10.
  • For example, the communication unit 210 may receive, from the first robot 10A and the second robot 10B, the various sensing information obtained by the camera 110, the IMU 120, and the wheel encoder 130 of each robot.
  • The control unit 220 controls the overall operation of the server 20.
  • The control unit 220 includes an estimation unit 221 and a correction unit 225, as shown in FIG. 2.
  • The estimation unit 221 estimates the position of an arbitrary robot 10 by superimposing the probability distributions of the plurality of pieces of position information of that robot received by the communication unit 210. Details will be described later.
  • The estimation unit 221 may also serve as an acquisition unit.
  • For example, the communication unit 210 may receive various sensing information from the robots 10, and the estimation unit 221 may estimate the position information of an arbitrary robot 10 based on that sensing information.
  • For example, the estimation unit 221 estimates the position information of the first robot 10A based on various sensing information the communication unit 210 received from the first robot 10A.
  • The estimation unit 221 may also estimate the position information of the first robot 10A based on various sensing information the communication unit 210 received from the second robot 10B.
  • The estimation unit 221 may then estimate the position information of the arbitrary robot 10 by superimposing the probability distributions of the plurality of estimated pieces of position information.
  • The correction unit 225 corrects the position information of an arbitrary robot 10 to the position information estimated by the estimation unit 221.
  • For example, the correction unit 225 causes the communication unit 210 to transmit correction information based on the position information estimated by the estimation unit 221.
  • FIG. 3 is an explanatory diagram for explaining an example of position estimation of the robot 10 according to the present disclosure.
  • The robot 10 according to the present disclosure autonomously moves within an environment (for example, a factory). While doing so, it estimates its own position by continuously performing the dead reckoning described above.
  • However, dead reckoning can accumulate self-position estimation errors due to disturbances such as wheel slippage of the robot 10.
  • The self-position of the robot 10 therefore has, for example, a Gaussian error circle (hereinafter sometimes simply referred to as an error circle), and the error circle can grow as the movement distance increases.
  • To counter this, the robot 10 estimates its own position by performing star reckoning based on sensing information obtained by photographing a marker MA placed at a predetermined position. Since markers are placed at known positions, the robot 10 can estimate its own position with higher accuracy than with dead reckoning. It is therefore desirable that at least one marker MA be installed in the environment in which the robot 10 moves.
  • When the robot 10 updates its self-position by star reckoning, the error circle of its self-position can also shrink.
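  • As an illustrative sketch of the geometry behind marker-based star reckoning (the marker map, the range-bearing measurement model, and all names here are assumptions, not taken from the publication), the self-position can be recovered from one marker detection as follows:

```python
import math

# Known world positions of installed markers (assumed map data).
MARKER_MAP = {"MA": (4.0, 2.0)}

def star_reckoning(marker_id, range_m, bearing_rad, robot_theta):
    """Estimate the robot's (x, y) from a ranged marker detection.

    range_m and bearing_rad are the distance and bearing to the marker
    measured by the camera; robot_theta is the current heading estimate.
    """
    mx, my = MARKER_MAP[marker_id]
    # The marker sits at (range, bearing) in the robot frame, so the
    # robot sits at the marker's world position minus that offset.
    world_angle = robot_theta + bearing_rad
    x = mx - range_m * math.cos(world_angle)
    y = my - range_m * math.sin(world_angle)
    return x, y
```

  • Since the marker position is known exactly, the resulting error circle is bounded by the measurement noise rather than by the distance traveled.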
  • For example, the first robot 10A and the second robot 10B autonomously move in the environment while estimating their own positions by dead reckoning. The first robot 10A, which has entered the imaging range of the marker MA, then further estimates its own position by star reckoning.
  • As a result, the error circle C2 of the second robot 10B can be larger than the error circle C1 of the first robot 10A.
  • In this way, the plurality of robots 10 move in a certain environment while estimating their own positions by dead reckoning, and after entering the imaging range of the marker MA, each robot 10 further estimates its own position by star reckoning. The robot 10 can thus move within the environment while maintaining the accuracy of its self-position estimate by updating the self-position as needed. On the other hand, the area and the number of markers that can be placed in the environment may be limited.
  • Therefore, the robot 10 may use another robot 10 as a marker for star reckoning.
  • For example, the second robot 10B may estimate the position of the first robot 10A based on sensing information obtained by photographing the first robot 10A.
  • The second robot 10B may then transmit the estimated position information of the first robot 10A to the server 20.
  • Meanwhile, the first robot 10A may transmit to the server 20 information on its own position estimated by dead reckoning.
  • Note that the position of another robot 10 estimated by a robot 10 includes an error circle resulting from star reckoning in addition to the error circle of the estimating robot's own dead-reckoned self-position.
  • The estimation unit 221 of the server 20 then estimates the position information of the first robot 10A based on the information on the self-position of the first robot 10A received from the first robot 10A and the position information of the first robot 10A received from the second robot 10B.
  • A specific example of how the estimation unit 221 according to the first embodiment estimates the position information of the robot 10 will be described below with reference to FIG. 4. Note that the first embodiment, the second embodiment, and the modification described below may be executed in combination with each other, or each may be executed alone.
  • FIG. 4 is an explanatory diagram for explaining an overview of how the estimation unit 221 according to the first embodiment estimates the position information of the robot 10.
  • The self-position estimated by the first robot 10A has a Gaussian error circle C1.
  • The position of the first robot 10A estimated by the second robot 10B by star reckoning has a Gaussian error circle C3.
  • The estimation unit 221 estimates the position information of the first robot 10A by, for example, superimposing the Gaussian distributions of the position information of the first robot 10A received from the first robot 10A and the second robot 10B.
  • The position information of the first robot 10A estimated by this superposition of Gaussian distributions has an error circle C4.
  • The error circle C4 of the position information estimated by superimposing the Gaussian distributions can be smaller than both error circles C1 and C3.
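  • A minimal sketch of this superposition in two dimensions (standard information-form Gaussian fusion; the numeric values and names are assumptions for illustration):

```python
import numpy as np

def fuse_gaussians(mu1, cov1, mu2, cov2):
    """Fuse two Gaussian estimates of the same robot's position.

    Product-of-Gaussians update: information (inverse-covariance)
    matrices add, so the fused error circle is never larger than
    either input's.
    """
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    cov_fused = np.linalg.inv(info1 + info2)                  # like C4
    mu_fused = cov_fused @ (info1 @ mu1 + info2 @ mu2)
    return mu_fused, cov_fused

# C1: the first robot's own dead-reckoning covariance;
# C3: the second robot's star-reckoning estimate of the first robot.
mu_a, c1 = np.array([2.0, 1.0]), np.diag([0.30, 0.30])
mu_b, c3 = np.array([2.2, 0.9]), np.diag([0.25, 0.40])
mu_fused, c4 = fuse_gaussians(mu_a, c1, mu_b, c3)   # c4 < c1 and c3
```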
  • For example, the second robot 10B acquires an image of the first robot 10A with its camera 110 and estimates the position information of the first robot 10A from the image. The second robot 10B also holds the time at which the image used for this estimation was taken, and transmits the position information of the first robot 10A and the shooting-time information to the server 20.
  • The communication unit 150 may transmit, for example, the time in milliseconds together with a parity bit or checksum through LED communication. Alternatively, the communication unit 150 may transmit the position information of the first robot 10A using another communication standard such as WiFi (registered trademark).
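  • The publication only says that the millisecond time is sent together with a parity bit or checksum; one possible framing (entirely an assumption, including the 48-bit timestamp and the XOR checksum) might be:

```python
def encode_timestamp_frame(time_ms: int) -> bytes:
    """Pack a 48-bit millisecond timestamp with a 1-byte XOR checksum."""
    payload = time_ms.to_bytes(6, "big")
    checksum = 0
    for b in payload:
        checksum ^= b
    return payload + bytes([checksum])

def decode_timestamp_frame(frame: bytes) -> int:
    """Verify the checksum and recover the millisecond timestamp."""
    payload, checksum = frame[:-1], frame[-1]
    acc = 0
    for b in payload:
        acc ^= b
    if acc != checksum:
        raise ValueError("corrupted timestamp frame")
    return int.from_bytes(payload, "big")
```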
  • The estimation unit 221 of the server 20 may thus estimate the position information of the first robot 10A by superimposing the Gaussian distribution of the position of the first robot 10A estimated by the second robot 10B by star reckoning and the Gaussian distribution of the self-position estimated by the first robot 10A by dead reckoning.
  • The error circle C4 of the position information estimated by this superposition can be smaller than the error circle C1 of the self-position recognized by the first robot 10A.
  • Note that the second robot 10B may transmit the estimated position information of the first robot 10A to the server 20 while transmitting the time at which the first robot 10A was photographed to the first robot 10A.
  • In that case, the first robot 10A may transmit to the server 20 the information on its self-position estimated by dead reckoning at the same time as the image was taken by the second robot 10B.
  • Similarly, the estimation unit 221 may estimate the position information of the second robot 10B by superimposing the Gaussian distributions of the position information of the second robot 10B received from the first robot 10A and the second robot 10B.
  • That is, the estimation unit 221 may superimpose the Gaussian distribution of the position information of the second robot 10B estimated by the first robot 10A by star reckoning and the Gaussian distribution of the self-position estimated by the second robot 10B by dead reckoning.
  • In this way, the estimation unit 221 can reduce the error circles of the self-position information recognized by both the first robot 10A and the second robot 10B.
  • FIG. 5 is an explanatory diagram for explaining an example of operation processing for estimating the position information of the robot 10 by superimposing probability distributions according to the first embodiment.
  • First, the first robot 10A moves in the environment while estimating its own position by dead reckoning (S101).
  • Likewise, the second robot 10B moves in the environment while estimating its own position by dead reckoning (S105).
  • When the first robot 10A and the second robot 10B come within a predetermined distance of each other, the first robot 10A takes an image of the second robot 10B (S109).
  • The predetermined distance may be any distance at which the first robot 10A can photograph the second robot 10B.
  • Similarly, the second robot 10B takes an image of the first robot 10A when the two robots are within the predetermined distance (S113).
  • The first robot 10A estimates the position information of the second robot 10B by star reckoning based on the sensing information obtained by photographing the second robot 10B (S117).
  • The second robot 10B estimates the position information of the first robot 10A by star reckoning based on the sensing information obtained by photographing the first robot 10A (S121).
  • The first robot 10A transmits to the server 20 the information on its own position and the position information of the second robot 10B estimated by star reckoning, together with information on the shooting time of the image used for that estimation (S125).
  • The second robot 10B likewise transmits to the server 20 the information on its own position and the position information of the first robot 10A estimated by star reckoning, together with information on the shooting time of the image used for that estimation (S129).
  • The estimation unit 221 of the server 20 estimates the position information of the first robot 10A by superimposing the probability distribution of the position information of the first robot 10A estimated by the second robot 10B and the probability distribution of the self-position received from the first robot 10A (S133).
  • It is desirable that the self-position of the first robot 10A used in this superposition be the self-position estimated by the first robot 10A at the same time as the shooting time of the image the second robot 10B used for estimating the position information of the first robot 10A.
  • The estimation unit 221 of the server 20 likewise estimates the position information of the second robot 10B by superimposing the probability distribution of the position information of the second robot 10B estimated by the first robot 10A and the probability distribution of the self-position received from the second robot 10B (S137).
  • It is desirable that the self-position of the second robot 10B used in this superposition be the self-position estimated by the second robot 10B at the same time as the shooting time of the image the first robot 10A used for estimating the position information of the second robot 10B.
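  • To supply the self-position at exactly the other robot's shooting time, each robot presumably keeps a short history of timestamped dead-reckoning poses; a sketch of such a lookup (the buffer and the linear interpolation are assumptions, and angle wrap-around is ignored for brevity):

```python
import bisect

class PoseHistory:
    """Timestamped dead-reckoning poses, queryable at a photo's time."""

    def __init__(self):
        self.times, self.poses = [], []   # parallel lists, time-sorted

    def record(self, t_ms, pose):
        self.times.append(t_ms)
        self.poses.append(pose)

    def at(self, t_ms):
        # Linearly interpolate between the two surrounding samples.
        i = bisect.bisect_left(self.times, t_ms)
        if i == 0:
            return self.poses[0]
        if i == len(self.times):
            return self.poses[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        w = (t_ms - t0) / (t1 - t0)
        p0, p1 = self.poses[i - 1], self.poses[i]
        return tuple(a + w * (b - a) for a, b in zip(p0, p1))
```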
  • The correction unit 225 of the server 20 generates correction information for the position information of the first robot 10A estimated by the estimation unit 221 and correction information for the position information of the second robot 10B (S141).
  • The communication unit 210 of the server 20 transmits the generated correction information for the position information of the first robot 10A to the first robot 10A (S145).
  • The communication unit 210 of the server 20 likewise transmits the generated correction information for the position information of the second robot 10B to the second robot 10B (S149).
  • The first robot 10A corrects its own position based on the correction information received from the server 20 (S153).
  • The second robot 10B corrects its own position based on the correction information received from the server 20 (S157), and the information processing system according to the present disclosure ends the processing.
  • As described above, the server 20 can estimate the position information of any robot 10 with higher accuracy, and by doing so can correct the position information of the plurality of robots 10.
  • Next, a specific example of how the correction unit 225 according to the second embodiment corrects the position information of the robot 10 will be described with reference to FIG. 6.
  • FIG. 6 is an explanatory diagram for explaining an example of correcting the position information of the robot 10 according to the second embodiment.
  • As shown in FIG. 3, the robot 10 can estimate its self-position with higher accuracy by photographing the marker MA and performing star reckoning.
  • The robot 10 then corrects its recognized self-position from the one estimated by dead reckoning to the one estimated by star reckoning, thereby recognizing its self-position with higher accuracy.
  • However, an abrupt correction of the recognized self-position forces the robot 10 to return to the correspondingly shifted target path, which may make the operation of the robot 10 unstable.
  • Therefore, the correction unit 225 may correct the position information of the robot 10 continuously.
  • For example, the correction unit 225 may continuously correct the position information of the robot 10 according to the movement distance of the robot 10.
  • For example, the robot 10 moves while estimating its own position by dead reckoning.
  • Let the position of the robot 10 estimated by dead reckoning be self-position SL1.
  • At this time, the estimation error may grow with the movement distance of the robot 10, as described above.
  • When the robot 10 approaches a marker, the camera 110 acquires an image of the marker.
  • The estimation unit 161 then estimates the self-position of the robot 10 by star reckoning based on the acquired image.
  • Let the self-position recognized by the robot 10 by dead reckoning at the time the marker was photographed be self-position ST, and let the self-position estimated by star reckoning be estimated position STL.
  • As the robot 10 continues to move, the estimation unit 221 continues to estimate its self-position by dead reckoning.
  • The estimation unit also continuously estimates both the corrected position CL1 that the robot 10 would have if corrected to the position estimated by star reckoning, and the uncorrected position SL2 it would have if not corrected.
  • The correction unit 225 may then continuously correct the position information of the robot 10 according to its movement distance, based on the corrected position CL1 and the uncorrected position SL2.
  • For example, the correction unit 225 may constantly calculate the corrected position CL1 and the uncorrected position SL2, and change their usage ratio in an extended Kalman filter according to the movement distance of the robot 10.
  • For example, for a robot 10 that has photographed a marker and estimated its position by star reckoning, the correction unit 225 may continuously raise the usage ratio of the corrected position CL1 from 0% to 100% until the robot reaches the area where the next marker is installed. This makes it possible to suppress sudden changes in the self-position recognized by the robot 10.
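  • A simplified sketch of such a distance-based ramp (the publication ties the ratio to an extended Kalman filter; this linear blend, the 5 m ramp length, and all names are assumptions):

```python
def blended_position(sl2, cl1, dist_since_marker_m, ramp_length_m=5.0):
    """Blend the uncorrected pose SL2 toward the corrected pose CL1.

    The weight of CL1 rises linearly from 0% to 100% over
    ramp_length_m of travel, so the recognized self-position never
    jumps abruptly after a star-reckoning fix.
    """
    w = min(dist_since_marker_m / ramp_length_m, 1.0)
    return tuple((1.0 - w) * s + w * c for s, c in zip(sl2, cl1))
```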
  • Let the position of the robot 10 that is continuously corrected in this way by the correction unit 225 be the corrected position CL2.
  • This suppresses the deviation between the actual position RL2 of the robot 10 and the corrected position CL2, and reduces the possibility that an abrupt self-position correction makes the operation of the robot 10 unstable.
  • The correction unit 225 may also continuously correct the self-position recognized by the robot 10 toward the position of the robot 10 estimated by the estimation unit 221 according to the first embodiment by superimposing Gaussian distributions.
  • FIG. 7 is an explanatory diagram for explaining an example of operation processing for correcting the position information of the robot 10 according to the second embodiment.
  • First, the robot 10 estimates its own position by dead reckoning (S201).
  • The robot 10 transmits information on the estimated self-position to the server 20 (S205).
  • The robot 10 may constantly transmit to the server 20 the information on its self-position estimated by dead reckoning.
  • Next, the camera 110 of the robot 10 photographs a marker and acquires an image (S209).
  • The robot 10 transmits the acquired image to the server 20 (S213).
  • The estimation unit 221 of the server 20 estimates the position information of the robot 10 based on the self-position information and the image acquired from the robot 10 (S217).
  • The server 20 generates correction information for correcting the position information of the robot 10 (S221), and transmits it to the robot 10 (S225).
  • The correction information includes information for continuously correcting the self-position of the robot 10, as described above.
  • The robot 10 corrects its self-position according to the correction information and its movement distance (S229), and the processing of the information processing system according to the present disclosure ends.
  • FIG. 8 is an explanatory diagram for describing a modification of the robot 10 according to the present disclosure.
  • The robot 10 according to the present disclosure may include a low-power-consumption microcontroller (MCU: Microcontroller Unit) MC that can operate on the regenerative energy of the wheels.
  • The microcontroller MC may implement some or all of the functions of the control unit 160.
  • For example, regenerated power may be supplied to the microcontroller MC from the power supply (Vcc) of the motor driver (DRV).
  • The microcontroller MC may receive sensing information from the IMU 120 and the wheel encoder 130 and continuously estimate the self-position of the robot 10 by dead reckoning even while the main power is off.
  • As a result, the robot 10 can keep track of its own position even if it is moved by an external force (for example, manual work by the user) while the main power is off.
  • Thus, when the main power supply of the robot 10 is turned on, there is no need to perform matching against every position on the map, which makes it easier for star reckoning to converge. It may also be possible to omit a position-specifying operation by the user at power-on. The modification of the robot 10 according to the present disclosure has been described above.
  • Hardware configuration example: The embodiments according to the present disclosure have been described above. The information processing described above is realized by cooperation between the software and the hardware of the server 20 described below. Note that the hardware configuration described below can also be applied to the robot 10.
  • FIG. 9 is a block diagram showing the hardware configuration of the server 20.
  • The server 20 comprises a CPU (Central Processing Unit) 2001, a ROM (Read Only Memory) 2002, a RAM (Random Access Memory) 2003, and a host bus 2004.
  • The server 20 also includes a bridge 2005, an external bus 2006, an interface 2007, an input device 2008, an output device 2010, a storage device (HDD) 2011, a drive 2012, and a communication device 2015.
  • The CPU 2001 functions as an arithmetic processing device and a control device, and controls the overall operations within the server 20 according to various programs. The CPU 2001 may also be a microprocessor.
  • The ROM 2002 stores programs, calculation parameters, and the like used by the CPU 2001.
  • The RAM 2003 temporarily stores programs used in the execution of the CPU 2001, parameters that change as appropriate during that execution, and the like. These are interconnected by the host bus 2004, which comprises a CPU bus or the like. Functions such as the estimation unit 221 and the correction unit 225 described with reference to FIG. 2 can be realized through cooperation between these components and software.
  • The host bus 2004 is connected via the bridge 2005 to the external bus 2006, such as a PCI (Peripheral Component Interconnect/Interface) bus.
  • The input device 2008 includes input means for the user to input information, such as a mouse, keyboard, touch panel, buttons, microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 2001.
  • By operating the input device 2008, the user of the server 20 can input various data to the server 20 and instruct processing operations.
  • The output device 2010 includes display devices such as liquid crystal display devices, OLED devices, and lamps, as well as audio output devices such as speakers and headphones. The output device 2010 outputs reproduced content, for example: the display device displays various information such as reproduced video data as text or images, while the audio output device converts reproduced audio data and the like into sound and outputs it.
  • The storage device 2011 is a device for storing data.
  • The storage device 2011 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
  • The storage device 2011 is composed of, for example, an HDD (Hard Disk Drive).
  • The storage device 2011 drives a hard disk and stores programs executed by the CPU 2001 and various data.
  • The drive 2012 is a reader/writer for storage media, and is built into or externally attached to the server 20.
  • The drive 2012 reads out information recorded on a removable storage medium 2018, such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs the information to the RAM 2003.
  • The drive 2012 can also write information to the removable storage medium 2018.
  • The communication device 2015 is, for example, a communication interface configured with a communication device or the like for connecting to the network 1. The communication device 2015 may be a wireless-LAN compatible communication device, an LTE (Long Term Evolution) compatible communication device, or a wire communication device that performs wired communication.
  • The robot 10, as an example of an information processing device, may have a configuration including the various functions of the server 20 described in this specification.
  • For example, the function of the estimation unit 221 of the server 20 may be realized by the estimation unit 161 of the robot 10, and the robot 10 may further include the function of the correction unit 225.
  • Also, for example, the server 20 may acquire the position information of a third robot estimated by each of the first robot 10A and the second robot 10B by star reckoning. The estimation unit 221 may then estimate the position of the third robot by superimposing the two acquired Gaussian distributions of its position information.
  • Each step in the processing of the information processing system in this specification does not necessarily have to be processed chronologically in the order described in the flowcharts.
  • For example, each step in the processing of the information processing system may be processed in an order different from that described in the flowcharts, or in parallel.
  • (1) An information processing device comprising: an acquisition unit that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and an estimation unit that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the plurality of pieces of position information acquired by the acquisition unit.
  • (2) The information processing device according to (1), wherein the arbitrary mobile body is included in the plurality of mobile bodies, and the acquisition unit acquires information on the self-position of the arbitrary mobile body and position information of the arbitrary mobile body estimated by another mobile body, different from the arbitrary mobile body, among the plurality of mobile bodies.
  • (3) The information processing device according to (2), wherein the acquisition unit acquires information on the time at which the other mobile body acquired the position information of the arbitrary mobile body.
  • (4) The information processing device according to (3), wherein the estimation unit estimates the position information of the arbitrary mobile body by superimposing the probability distribution of the position information of the arbitrary mobile body estimated by the other mobile body and the probability distribution of the information on the self-position of the arbitrary mobile body acquired at the time at which the other mobile body acquired that position information.
  • (5) The information processing device according to any one of (1) to (4), wherein the acquisition unit acquires information on the self-position of the arbitrary mobile body based on sensing information obtained by an internal sensor mounted on the arbitrary mobile body.
  • (6) The information processing device according to (5), wherein the acquisition unit acquires position information of the arbitrary mobile body estimated based on sensing information obtained by an external sensor mounted on the other mobile body.
  • (7) The information processing device according to (6), wherein the acquisition unit acquires the position information of the arbitrary mobile body estimated based on the sensing information obtained by the external sensor mounted on the other mobile body when the arbitrary mobile body and the other mobile body are within a predetermined distance of each other.
  • (8) The information processing device according to any one of (1) to (7), further comprising a correction unit that corrects the position information of the arbitrary mobile body to the position information estimated by the estimation unit.
  • (9) The information processing device according to (8), wherein the acquisition unit acquires position information of the arbitrary mobile body obtained by an external sensor mounted on the arbitrary mobile body detecting a feature installed in a predetermined area, and the correction unit corrects the position information of the arbitrary mobile body to the position information obtained by the external sensor detecting the feature.
  • (10) The information processing device according to (9), wherein the correction unit continuously corrects the position information of the arbitrary mobile body.
  • (11) The information processing device according to (10), wherein the correction unit continuously corrects the position information of the arbitrary mobile body according to the movement distance of the arbitrary mobile body.
  • (12) The information processing device according to (11), wherein the estimation unit estimates, as the arbitrary mobile body moves, both the position information for the case where the position information of the arbitrary mobile body is not corrected and the position information for the case where it is corrected, and the correction unit corrects the position information of the arbitrary mobile body based on these two estimates.
  • (13) The information processing device according to (12), wherein the correction unit continuously corrects the position information of the arbitrary mobile body until the arbitrary mobile body travels from the area where the feature is installed to the area where the next feature is installed.
  • (14) The information processing device according to any one of (1) to (13), wherein the plurality of mobile bodies have wheels and constantly estimate their self-positions using regenerative energy generated by the rotation of the wheels.
  • (15) A computer-implemented information processing method comprising: acquiring a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and estimating the position information of the arbitrary mobile body by superimposing the probability distributions of the acquired pieces of position information.
  • (16) A program causing a computer to realize: an acquisition function that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and an estimation function that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the pieces of position information acquired by the acquisition function.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

To estimate the position of a mobile object with high precision, an information processing device comprises an acquisition unit that acquires a plurality of pieces of position information on an arbitrary mobile object based on sensing information obtained from a sensor mounted in each of a plurality of mobile objects, and an estimation unit that superimposes the probability distributions of the plurality of pieces of position information on one another to estimate the position information of the arbitrary mobile object.

Description

Information processing device, information processing method, and program
The present disclosure relates to an information processing device, an information processing method, and a program.
In recent years, technologies for estimating the positions of mobile bodies such as robots have been developed. For example, when a robot moves within a certain environment, it estimates its own position by dead reckoning using sensing information from its internal sensors. The robot also estimates its own position by star reckoning using sensing information obtained when an external sensor detects a feature such as a marker.
For example, Patent Document 1 discloses a technique in which a robot with a small dead-reckoning estimation error estimates the position of another robot with a large estimation error and provides the estimated position information to that other robot.
Patent Document 1: JP-A-7-129237
However, with the technique described in Patent Document 1, when both of two robots have large estimation errors, it is difficult for them to estimate each other's positions with high accuracy.
Therefore, the present disclosure proposes a new and improved information processing device, information processing method, and program capable of estimating the position of a mobile body with higher accuracy.
According to the present disclosure, there is provided an information processing device including: an acquisition unit that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and an estimation unit that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the plurality of pieces of position information acquired by the acquisition unit.
According to the present disclosure, there is also provided a computer-implemented information processing method including: acquiring a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and estimating the position information of the arbitrary mobile body by superimposing the probability distributions of the acquired pieces of position information.
Further, according to the present disclosure, there is provided a program that causes a computer to realize: an acquisition function that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and an estimation function that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the pieces of position information acquired by the acquisition function.
FIG. 1 is an explanatory diagram for explaining an example of an information processing system according to the present disclosure. FIG. 2 is an explanatory diagram for explaining a functional configuration example of the information processing system according to the present disclosure. FIG. 3 is an explanatory diagram for explaining an example of position estimation of a robot according to the present disclosure. FIG. 4 is an explanatory diagram for explaining an overview of how an estimation unit according to the first embodiment estimates position information of a robot. FIG. 5 is an explanatory diagram for explaining an example of operation processing for estimating position information of a robot by superimposing probability distributions according to the first embodiment. FIG. 6 is an explanatory diagram for explaining an example of correcting the position information of the robot according to the second embodiment. FIG. 7 is an explanatory diagram for explaining an example of operation processing for correcting the position information of the robot according to the second embodiment. FIG. 8 is an explanatory diagram for describing a modification of the robot according to the present disclosure. FIG. 9 is a block diagram showing the hardware configuration of the server.
Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
In the description of this specification and the drawings, robots are distinguished as needed, such as a first robot 10A and a second robot 10B. When there is no need to distinguish between them, they are simply referred to as robots 10.
The "Mode for Carrying Out the Invention" will be described in the following order of items.
 1. Outline of information processing system
 1.1. Functional configuration example of information processing system
 2. Embodiments
 2.1. Overview
 2.2. First embodiment
 2.3. Second embodiment
 3. Modification
 4. Hardware configuration example
 5. Supplement
<<1. Outline of information processing system>>
As an embodiment of the present disclosure, a mechanism for estimating the position of a mobile body with higher accuracy will be described below.
FIG. 1 is an explanatory diagram for explaining an example of an information processing system according to the present disclosure. The information processing system according to the present disclosure includes a network 1, a first robot 10A, a second robot 10B, and a server 20.
(Network 1)
The network 1 according to the present disclosure is a wired or wireless transmission path for information transmitted from devices connected to it. For example, the network 1 may include public networks such as the Internet, telephone networks, and satellite communication networks, as well as various LANs (Local Area Networks) including Ethernet (registered trademark) and WANs (Wide Area Networks). The network 1 may also include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
The first robot 10A and the second robot 10B are each connected to the server 20 via the network 1.
(Robot 10)
The robot 10 according to the present disclosure is an example of a mobile body and autonomously moves within a certain environment. In the first embodiment described later, two robots 10, the first robot 10A and the second robot 10B, are described as an example of the plurality of robots 10, but there may be three or more robots 10. In the second embodiment and the modification, there may be only one robot 10.
The robot 10 estimates its own position or the positions of other robots 10 based on sensing information acquired by its sensors. Alternatively, the robot 10 may transmit the sensing information acquired by a sensor to the server 20. In this case, the server 20 that has received the sensing information may estimate the position of the robot 10 based on it. The details of the sensors included in the robot 10 will be described later.
(Server 20)
The server 20 according to the present disclosure is an example of an information processing device, and acquires a plurality of pieces of position information of an arbitrary robot 10 based on sensing information obtained by the sensors of each of the plurality of robots 10. For example, the server 20 acquires the position information of the first robot 10A and the position information of the second robot 10B based on the sensing information obtained by the sensors of the first robot 10A. Likewise, the server 20 acquires the position information of the first robot 10A and the position information of the second robot 10B based on the sensing information obtained by the sensors of the second robot 10B.
The server 20 also estimates the position information of an arbitrary robot 10 by superimposing the probability distributions of the plurality of pieces of position information of that robot. For example, the server 20 may estimate the position information of the first robot 10A by superimposing the probability distributions of the two pieces of position information of the first robot 10A acquired from the first robot 10A and the second robot 10B. Similarly, the server 20 may estimate the position information of the second robot 10B by superimposing the probability distributions of the two pieces of position information of the second robot 10B acquired from the first robot 10A and the second robot 10B.
The outline of the information processing system according to the present disclosure has been described above. Next, a functional configuration example of the information processing system according to the present disclosure will be described with reference to FIG. 2.
<<1.1. Functional configuration example of information processing system>>
FIG. 2 is an explanatory diagram for explaining a functional configuration example of the information processing system according to the present disclosure.
(Robot 10)
The robot 10 according to the present disclosure includes a camera 110, an IMU (Inertial Measurement Unit) 120, a wheel encoder 130, a storage unit 140, a communication unit 150, and a control unit 160, as shown in FIG. 2.
 本開示に係るカメラ110は、外界センサの一例であり、撮影により画像を含むセンシング情報を取得する装置である。例えば、カメラ110は、他のロボット10やAlvarマーカを撮影することによりカメラ110から他のロボット10またはAlvarマーカまでの距離情報を含むセンシング情報を取得する。 The camera 110 according to the present disclosure is an example of an external sensor, and is a device that acquires sensing information including an image by photographing. For example, the camera 110 acquires sensing information including distance information from the camera 110 to the other robot 10 or the Alvar marker by photographing the other robot 10 or the Alvar marker.
 In order to acquire the time at which the camera 110 captured an image, the camera 110 according to the present disclosure is preferably a camera without an infrared LED and an IR cut filter. Note that the Alvar marker is an example of a feature. In the following description, the Alvar marker may be referred to simply as a marker.
 The IMU 120 according to the present disclosure is an example of an internal sensor, and is a device that detects the inertial motion of the robot 10. For example, the IMU 120 acquires sensing information including angular acceleration information of the wheels of the robot 10 as the robot 10 moves.
 The wheel encoder 130 according to the present disclosure is an example of an internal sensor, and acquires sensing information including vehicle speed pulses of the wheels of the robot 10 as the robot 10 moves.
 The storage unit 140 according to the present disclosure holds software and various data. For example, the storage unit 140 holds map information of the environment in which the robot 10 moves and information related to the self-position of the robot 10. The storage unit 140 also holds position information of markers installed in the environment in which the robot 10 moves, and may hold information related to a target route within the environment.
 The communication unit 150 according to the present disclosure performs various communications with the communication unit 210 included in the server 20. For example, the communication unit 150 transmits information related to the self-position of the robot 10 estimated by the estimation unit 161 to the server 20, and may also transmit the position information of another robot 10 estimated by the estimation unit 161 to the server 20.
 The communication unit 150 may also transmit the various pieces of sensing information obtained by the camera 110, the IMU 120, and the wheel encoder 130 to the server 20.
 The control unit 160 according to the present disclosure controls the overall operation of the robot 10. As shown in FIG. 2, the control unit 160 according to the present disclosure includes an estimation unit 161.
 The estimation unit 161 according to the present disclosure estimates the position of the robot 10 based on sensing information obtained by the sensors provided in the robot 10. For example, the estimation unit 161 estimates the self-position of the robot 10 based on the angular velocity information obtained by the IMU 120 and the vehicle speed pulses obtained by the wheel encoder 130. This method of estimating the self-position of the robot 10 using internal sensors is called dead reckoning.
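 As a point of reference, a minimal dead-reckoning update for a wheeled robot may be sketched in Python as follows; the unicycle motion model, the function name, and the variable names are illustrative assumptions and not part of the present disclosure.

    import math

    def dead_reckoning_step(x, y, theta, encoder_distance, gyro_yaw_rate, dt):
        """One dead-reckoning update from internal sensors.

        x, y, theta      : current pose estimate (theta in radians)
        encoder_distance : distance traveled since the last step, from wheel encoder pulses
        gyro_yaw_rate    : yaw rate from the IMU (rad/s)
        dt               : elapsed time since the last step (s)
        """
        theta_new = theta + gyro_yaw_rate * dt            # integrate heading from the IMU
        theta_mid = (theta + theta_new) / 2.0             # average heading over the step
        x_new = x + encoder_distance * math.cos(theta_mid)  # integrate encoder translation
        y_new = y + encoder_distance * math.sin(theta_mid)
        return x_new, y_new, theta_new

 Because each step adds sensor noise, repeated application of such an update is what lets the estimation error accumulate with travel, as discussed below.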
 The estimation unit 161 according to the present disclosure also estimates the self-position of the robot 10 based on the distance information from the camera 110 to a marker, obtained by the camera 110 photographing the marker. The estimation unit 161 may likewise estimate the position of another robot 10 based on the distance information from the camera 110 to that robot 10, obtained by photographing it. This method of estimating position information of the robot 10 using an external sensor is called star reckoning.
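 A minimal sketch of such a star-reckoning computation is shown below; it assumes that the marker's position on the map is known and that the camera observation yields a range and bearing to the marker. These inputs and names are assumptions made for illustration.

    import math

    def star_reckoning(marker_x, marker_y, observed_range, observed_bearing, robot_theta):
        """Estimate the robot position from one observation of a marker
        whose map position (marker_x, marker_y) is known in advance.

        observed_range   : measured distance from the camera to the marker (m)
        observed_bearing : direction of the marker in the robot frame (radians)
        robot_theta      : current heading estimate of the robot (radians)
        """
        # Direction from the robot to the marker in the map frame.
        angle_to_marker = robot_theta + observed_bearing
        # Walk back from the known marker position along that direction.
        robot_x = marker_x - observed_range * math.cos(angle_to_marker)
        robot_y = marker_y - observed_range * math.sin(angle_to_marker)
        return robot_x, robot_y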
 Although the details will be described later, the information related to the self-position of the robot 10 estimated by the estimation unit 161 and the position information of other robots 10 each have an error circle according to a probability distribution. A Gaussian distribution is one example of the probability distribution according to the present disclosure.
 (Server 20)
 As shown in FIG. 2, the server 20 according to the present disclosure includes a communication unit 210 and a control unit 220.
 The communication unit 210 according to the present disclosure performs various communications with the robots 10. The communication unit 210 is an example of an acquisition unit, and receives position information of an arbitrary robot 10 from the plurality of robots 10. For example, the communication unit 210 receives information related to the self-position of the first robot 10A and position information of the second robot 10B from the first robot 10A, and receives information related to the self-position of the second robot 10B and position information of the first robot 10A from the second robot 10B.
 The communication unit 210 may also receive various pieces of sensing information from the plurality of robots 10. For example, the communication unit 210 may receive, from the first robot 10A and the second robot 10B, the various pieces of sensing information obtained by the camera 110, the IMU 120, and the wheel encoder 130 of each robot.
 The control unit 220 according to the present disclosure controls the overall operation of the server 20. As shown in FIG. 2, the control unit 220 includes an estimation unit 221 and a correction unit 225.
 The estimation unit 221 estimates the position of an arbitrary robot 10 by superimposing the probability distributions of the plurality of pieces of position information of that robot 10 received by the communication unit 210. Details will be described later.
 The estimation unit 221 according to the present disclosure may also serve as the acquisition unit. For example, the communication unit 210 may receive various pieces of sensing information from the robots 10, and the estimation unit 221 may estimate position information of an arbitrary robot 10 based on that sensing information. For example, if the arbitrary robot 10 is the first robot 10A, the estimation unit 221 may estimate position information of the first robot 10A based on the various pieces of sensing information that the communication unit 210 received from the first robot 10A, and may further estimate position information of the first robot 10A based on the various pieces of sensing information that the communication unit 210 received from the second robot 10B.
 The estimation unit 221 may then estimate the position information of the arbitrary robot 10 by superimposing the probability distributions of the plurality of pieces of position information estimated for that robot 10.
 The correction unit 225 corrects the position information of an arbitrary robot 10 to the position information estimated by the estimation unit 221. For example, the correction unit 225 causes the communication unit 210 to transmit correction information based on the position information estimated by the estimation unit 221.
 The functional configuration example of the information processing system according to the present disclosure has been described above. Next, embodiments of the information processing system according to the present disclosure will be described with reference to FIG. 3.
 <<2. Embodiments>>
 <<2.1. Overview>>
 FIG. 3 is an explanatory diagram for explaining an example related to position estimation of the robot 10 according to the present disclosure. First, the robot 10 according to the present disclosure autonomously moves within an environment (for example, a factory). At this time, the robot 10 moves within the environment while estimating its own position by constantly performing the above-described dead reckoning.
 On the other hand, dead reckoning is affected by disturbances such as slipping of the wheels of the robot 10, so an estimation error can arise in the self-position. That is, the longer the movement distance of the robot 10, the higher the probability and the larger the amount of the self-position estimation error due to dead reckoning. The self-position of the robot 10 thus has, for example, an error circle of a Gaussian distribution (hereinafter sometimes simply referred to as an error circle), and the error circle can grow as the movement distance increases.
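 As a minimal illustrative sketch of this growth (the linear noise model and the parameter names are assumptions, not part of the present disclosure), the variance behind the error circle can be propagated per traveled distance as follows.

    def propagate_error_circle(variance, distance_step, noise_per_meter=0.01):
        """Grow the positional variance (the error circle) as the robot travels.

        variance        : current variance of the position estimate (m^2)
        distance_step   : distance traveled since the last update (m)
        noise_per_meter : process noise added per meter of travel (assumed constant)
        """
        return variance + noise_per_meter * distance_step

    # The longer the robot drives without a marker fix, the larger the circle.
    v = 0.05
    for _ in range(10):
        v = propagate_error_circle(v, distance_step=1.0)
    print(v)  # 0.15 after 10 m of travel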
 As the error circle grows, the reliability of the self-position estimated by the robot 10 can decrease. Therefore, the robot 10 estimates its self-position by performing star reckoning based on sensing information obtained by photographing a marker MA placed at a predetermined position. Since the markers are placed at predetermined positions, the robot 10 can estimate its self-position with higher accuracy than with dead reckoning. For this reason, it is desirable that at least one marker MA be installed in the environment in which the robot 10 moves.
 By updating the self-position of the robot 10 from the one estimated by dead reckoning to the one estimated by star reckoning using the marker MA, the error circle of the self-position of the robot 10 can also be made smaller.
 For example, as shown in FIG. 3, the first robot 10A and the second robot 10B each move autonomously within the environment while estimating their self-positions by dead reckoning. The first robot 10A, having approached the imaging range of the marker MA, then further estimates its self-position by star reckoning.
 For example, if the second robot 10B has traveled a longer distance than the first robot 10A since the position at which its self-position was last estimated by star reckoning, the error circle C2 of the second robot 10B can be larger than the error circle C1 of the first robot 10A, as shown in FIG. 3.
 As described above, the plurality of robots 10 move within an environment while estimating their self-positions by dead reckoning, and a robot 10 that reaches the imaging range of the marker MA further estimates its self-position by star reckoning. By updating its self-position in this way as needed, the robot 10 can move within the environment while maintaining the estimation accuracy of its self-position. On the other hand, the areas in which markers can be installed in the environment, and their number, may be limited.
 Therefore, a robot 10 may use another robot 10 as a marker for star reckoning. For example, the second robot 10B may estimate the position of the first robot 10A based on sensing information obtained by photographing the first robot 10A, and may transmit the estimated position information of the first robot 10A to the server 20. The first robot 10A may in turn transmit information related to its self-position estimated by dead reckoning to the server 20. Note that the position of another robot 10 estimated by a robot 10 includes, in addition to the error circle of the self-position that the estimating robot 10 obtained by dead reckoning, an error circle caused by the star reckoning itself.
 The estimation unit 221 included in the server 20 then estimates the position information of the first robot 10A based on the information related to the self-position of the first robot 10A received from the first robot 10A and the position information of the first robot 10A received from the second robot 10B. A specific example in which the estimation unit 221 according to the first embodiment estimates the position information of the robot 10 will be described below with reference to FIG. 4. Note that the first embodiment, the second embodiment, and the modification described below may be executed in combination, or any one of them may be executed alone.
 <2.2. First embodiment>
 (Overview)
 FIG. 4 is an explanatory diagram for explaining an overview of how the estimation unit 221 according to the first embodiment estimates the position information of the robot 10. In FIG. 4, the self-position estimated by the first robot 10A has an error circle C1 of a Gaussian distribution, and the position of the first robot 10A estimated by the second robot 10B through star reckoning has an error circle C3 of a Gaussian distribution.
 The estimation unit 221 may, for example, estimate the position information of the first robot 10A by superimposing the Gaussian distributions of the position information of the first robot 10A received from the first robot 10A and from the second robot 10B. The position information of the first robot 10A estimated by superimposing the Gaussian distributions has an error circle C4. As shown in FIG. 4, the error circle C4 of the position information estimated by this superposition can be smaller than both the error circle C1 and the error circle C3.
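 As a point of reference, superimposing two Gaussian position estimates can be realized as the product of the two densities, which is again Gaussian with a covariance no larger than either input; the following sketch illustrates such a fusion for 2D positions. The function name, the use of NumPy, and the example values are illustrative assumptions, not part of the present disclosure.

    import numpy as np

    def fuse_gaussians(mu_a, cov_a, mu_b, cov_b):
        """Superimpose (multiply) two Gaussian position estimates.

        mu_a, mu_b   : 2D mean positions of the same robot from two sources
        cov_a, cov_b : 2x2 covariance matrices (the 'error circles')
        """
        inv_a = np.linalg.inv(cov_a)
        inv_b = np.linalg.inv(cov_b)
        cov_fused = np.linalg.inv(inv_a + inv_b)              # shrinks like error circle C4
        mu_fused = cov_fused @ (inv_a @ mu_a + inv_b @ mu_b)  # precision-weighted mean
        return mu_fused, cov_fused

    # Example: self-position of robot 10A (error circle C1) fused with the
    # estimate of 10A made by robot 10B through star reckoning (error circle C3).
    mu_self = np.array([1.00, 2.00]); cov_self = np.diag([0.30, 0.30])
    mu_star = np.array([1.20, 1.90]); cov_star = np.diag([0.50, 0.50])
    print(fuse_gaussians(mu_self, cov_self, mu_star, cov_star))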
 During star reckoning, the second robot 10B acquires an image of the first robot 10A with its camera 110 and estimates the position information of the first robot 10A from that image. The second robot 10B also holds the time at which the image used for estimating the position information of the first robot 10A was captured, and transmits the position information of the first robot 10A and the information on the capture time to the server 20.
 At this time, the communication unit 150 may transmit, for example, the time in milliseconds together with a parity bit or a checksum by LED communication, and may transmit the position information of the first robot 10A by another communication standard such as WiFi (registered trademark).
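 As one hedged illustration of such a message, the sketch below packs a millisecond timestamp with a simple one-byte checksum; the payload layout and the XOR checksum are assumptions chosen for illustration, not a format specified by the present disclosure.

    import struct

    def pack_timestamp_ms(timestamp_ms: int) -> bytes:
        """Pack a millisecond timestamp with a 1-byte XOR checksum for transmission."""
        payload = struct.pack(">Q", timestamp_ms)  # 8-byte big-endian timestamp
        checksum = 0
        for b in payload:
            checksum ^= b                          # simple XOR checksum over the payload
        return payload + bytes([checksum])

    def unpack_timestamp_ms(frame: bytes) -> int:
        """Verify the checksum and recover the timestamp; raises on corruption."""
        payload, checksum = frame[:-1], frame[-1]
        calc = 0
        for b in payload:
            calc ^= b
        if calc != checksum:
            raise ValueError("checksum mismatch: frame corrupted")
        return struct.unpack(">Q", payload)[0]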
 The estimation unit 221 included in the server 20 may then estimate the position information of the first robot 10A by superimposing the Gaussian distribution of the position of the first robot 10A estimated by the second robot 10B through star reckoning and the Gaussian distribution of the self-position estimated by the first robot 10A through dead reckoning at the same time as the capture time at which the second robot 10B photographed the first robot 10A.
 As a result, the error circle C4 of the position information of the first robot 10A estimated by superimposing the Gaussian distributions can be smaller than the error circle C1 of the self-position that the first robot 10A had recognized.
 Note that the second robot 10B may transmit the estimated position information of the first robot 10A to the server 20 while transmitting the time at which it photographed the first robot 10A to the first robot 10A. In this case, the first robot 10A may transmit to the server 20 the information related to its self-position estimated by dead reckoning at the same time as the time at which it was photographed by the second robot 10B.
 In the same manner as described above, the estimation unit 221 may also estimate the position information of the second robot 10B by superimposing the Gaussian distributions of the position information of the second robot 10B received from the first robot 10A and from the second robot 10B. For example, the estimation unit 221 may estimate the position information of the second robot 10B by superimposing the Gaussian distribution of the position information of the second robot 10B estimated by the first robot 10A through star reckoning and the Gaussian distribution of the information related to the self-position estimated by the second robot 10B through dead reckoning. In this way, the estimation unit 221 can shrink the error circles of the self-position information recognized by both the first robot 10A and the second robot 10B.
 The overview of how the estimation unit 221 according to the first embodiment estimates the position information of the robot 10 has been described above. Next, an example of the operation processing for estimating the position information of the robot 10 by superimposing probability distributions according to the first embodiment will be described with reference to FIG. 5.
 (Example of operation processing)
 FIG. 5 is an explanatory diagram for explaining an example of the operation processing for estimating the position information of the robot 10 by superimposing probability distributions according to the first embodiment. First, the first robot 10A moves within the environment while estimating its self-position by dead reckoning (S101).
 Like the first robot 10A, the second robot 10B moves within the environment while estimating its self-position by dead reckoning (S105).
 Next, when the first robot 10A and the second robot 10B come within a predetermined distance of each other, the first robot 10A photographs the second robot 10B (S109). Here, the predetermined distance may be any distance at which the first robot 10A can photograph the second robot 10B.
 Likewise, the second robot 10B photographs the first robot 10A when the first robot 10A and the second robot 10B come within the predetermined distance of each other (S113).
 Next, the first robot 10A estimates the position information of the second robot 10B by star reckoning, based on the sensing information obtained by photographing the second robot 10B (S117).
 Similarly, the second robot 10B estimates the position information of the first robot 10A by star reckoning, based on the sensing information obtained by photographing the first robot 10A (S121).
 Next, the first robot 10A transmits to the server 20 the information related to its self-position and the position information of the second robot 10B estimated by star reckoning, together with information on the capture time of the image used for estimating the position information of the second robot 10B (S125).
 The second robot 10B then transmits to the server 20 the information related to its self-position and the position information of the first robot 10A estimated by star reckoning, together with information on the capture time of the image used for estimating the position information of the first robot 10A (S129).
 The estimation unit 221 included in the server 20 then estimates the position information of the first robot 10A by superimposing the probability distribution of the position information of the first robot 10A estimated by the second robot 10B and the probability distribution of the self-position received from the first robot 10A (S133). Here, the self-position of the first robot 10A whose probability distribution is superimposed is desirably the self-position estimated by the first robot 10A at the same time as the capture time of the image that the second robot 10B used for estimating the position information of the first robot 10A.
 Next, the estimation unit 221 included in the server 20 estimates the position information of the second robot 10B by superimposing the probability distribution of the position information of the second robot 10B estimated by the first robot 10A and the probability distribution of the self-position received from the second robot 10B (S137). Here, the self-position of the second robot 10B whose probability distribution is superimposed is desirably the self-position estimated by the second robot 10B at the same time as the capture time of the image that the first robot 10A used for estimating the position information of the second robot 10B.
 The correction unit 225 included in the server 20 then generates correction information regarding the position information of the first robot 10A estimated by the estimation unit 221 and correction information regarding the position information of the second robot 10B (S141).
 The communication unit 210 included in the server 20 then transmits the generated correction information regarding the position information of the first robot 10A to the first robot 10A (S145), and transmits the generated correction information regarding the position information of the second robot 10B to the second robot 10B (S149).
 Next, the first robot 10A corrects its self-position based on the correction information received from the server 20 (S153). The second robot 10B likewise corrects its self-position based on the correction information received from the server 20 (S157), and the information processing system according to the present disclosure ends the processing.
 An example of the operation processing for estimating the position information of the robot 10 by superimposing probability distributions according to the first embodiment has been described above. According to the first embodiment described above, the server 20 according to the present disclosure can estimate the position information of an arbitrary robot 10 with higher accuracy, and by having the plurality of robots 10 estimate one another's positions, the position information of the plurality of robots 10 can be corrected. Next, a specific example in which the correction unit 225 according to the second embodiment corrects the position information of the robot 10 will be described with reference to FIG. 6.
 <2.3. Second embodiment>
 (Overview)
 FIG. 6 is an explanatory diagram for explaining an example of correcting the position information of the robot 10 according to the second embodiment. As described above, the robot 10 can estimate its self-position with higher accuracy by photographing the marker MA shown in FIG. 3 and performing star reckoning.
 By correcting the self-position it recognizes from the one estimated by dead reckoning to the one estimated by star reckoning, the robot 10 can then recognize its self-position with higher accuracy.
 On the other hand, if the self-position recognized by the robot 10 is corrected abruptly, the robot 10 must return to the target route that changes along with the correction, so the operation of the robot 10 may become unstable.
 Therefore, the correction unit 225 according to the present disclosure may correct the position information of the robot 10 continuously. For example, the correction unit 225 may correct the position information of the robot 10 continuously according to the movement distance of the robot 10.
 For example, the robot 10 moves while estimating its self-position by dead reckoning. Here, the position of the robot 10 estimated by dead reckoning is denoted as self-position SL1. As described above, in the estimation of the self-position of the robot 10 by dead reckoning, the estimation error can grow according to the movement distance of the robot 10.
 Here, if the actual position of the robot 10 is denoted as actual position RL1, the divergence between the self-position SL1 estimated by the robot 10 and the actual position RL1 can grow as the movement distance of the robot 10 increases, as shown in FIG. 6.
 Therefore, when the robot 10 comes within a distance at which it can photograph a marker installed in the environment, the camera 110 acquires an image of the marker. The estimation unit 161 then estimates the self-position of the robot 10 by star reckoning based on the acquired image. Here, the self-position that the robot 10 recognized by dead reckoning at the time the marker was photographed is denoted as self-position ST, and the self-position estimated by star reckoning is denoted as estimated position STL.
 The estimation unit 221 then continues to estimate the self-position of the robot 10 by dead reckoning as the robot 10 moves. At this time, it constantly estimates both the corrected position CL1 of the robot 10, that is, the position in the case where the self-position is corrected to the position estimated by star reckoning, and the uncorrected position SL2 of the robot 10, that is, the position in the case where it is not so corrected.
 The correction unit 225 may then continuously correct the position information of the robot 10 according to the movement distance of the robot 10, based on the corrected position CL1 and the uncorrected position SL2. For example, the correction unit 225 may constantly calculate the corrected position CL1 and the uncorrected position SL2 and change their usage ratio, for example in an extended Kalman filter, according to the movement distance of the robot 10.
 For example, the correction unit 225 of a robot 10 that has photographed a certain marker and estimated its self-position by star reckoning may correct continuously so that the usage ratio of the corrected position CL1 goes from 0% to 100% by the time the robot 10 reaches the area where the next marker is installed. This makes it possible to suppress abrupt changes in the self-position recognized by the robot 10. The position of the robot 10 that is continuously corrected by the correction unit 225 is denoted as corrected position CL2.
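 A minimal sketch of such a distance-dependent blend is shown below; the linear ramp of the usage ratio and the function names are assumptions made for illustration, and an actual implementation could instead feed the ratio into an extended Kalman filter as described above.

    import numpy as np

    def blended_position(uncorrected, corrected, distance_since_fix, distance_to_next_marker):
        """Blend the uncorrected dead-reckoning position SL2 with the
        star-reckoning-corrected position CL1, ramping the usage ratio of the
        corrected position from 0% to 100% over the distance to the next marker area.
        """
        ratio = min(distance_since_fix / distance_to_next_marker, 1.0)  # 0.0 -> 1.0
        return (1.0 - ratio) * np.asarray(uncorrected) + ratio * np.asarray(corrected)

    # Example: halfway to the next marker area, the corrected position is weighted 50%.
    print(blended_position([0.0, 0.0], [0.4, -0.2],
                           distance_since_fix=5.0, distance_to_next_marker=10.0))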
 According to the correction of the self-position recognized by the robot 10 described above, the divergence between the actual position RL2 of the robot 10 and the corrected position CL2 is suppressed, and by also suppressing abrupt changes in the self-position of the robot 10, the possibility that the operation of the robot 10 becomes unstable can be reduced.
 The correction unit 225 may also continuously correct the self-position recognized by the robot 10 to the position of the robot 10 estimated by the estimation unit 221 according to the first embodiment through the superposition of Gaussian distributions.
 (Operation processing)
 FIG. 7 is an explanatory diagram for explaining an example of the operation processing for correcting the position information of the robot 10 according to the second embodiment. First, the robot 10 estimates its self-position by dead reckoning (S201).
 Next, the robot 10 transmits information on the estimated self-position to the server 20 (S205). Note that the robot 10 may constantly transmit information on its self-position estimated by dead reckoning to the server 20.
 The camera 110 provided in the robot 10 then photographs a marker and acquires an image (S209), and the robot 10 transmits the acquired image to the server 20 (S213).
 The estimation unit 221 included in the server 20 then estimates the position information of the robot 10 based on the information related to the self-position and the image acquired from the robot 10 (S217).
 The server 20 then generates correction information for correcting the position information of the robot 10 (S221) and transmits the correction information to the robot 10 (S225). Here, the correction information includes the information for continuously correcting the self-position of the robot 10 described above.
 The robot 10 then corrects its self-position according to the correction information and the movement distance of the robot 10 (S229), and the processing of the information processing system according to the present disclosure ends.
 An example of the operation processing related to the correction of the position information of the robot 10 according to the second embodiment has been described above. Next, a modification of the robot 10 according to the present disclosure will be described with reference to FIG. 8.
 <<3. Modification>>
 FIG. 8 is an explanatory diagram for describing a modification of the robot 10 according to the present disclosure. The robot 10 according to the present disclosure may include a low-power-consumption microcontroller (MCU: Microcontroller Unit) MC that can operate using the regenerative energy of the wheels. The microcontroller MC may implement some or all of the functions of the control unit 160.
 For example, when the main power supply of the robot 10 is OFF, regenerated power from the power supply (Vcc) of the motor driver (DRV) may be input to the microcontroller MC. The microcontroller MC may then take the sensing information of the IMU 120 and the wheel encoder 130 as input and continue estimating the self-position of the robot 10 by dead reckoning even while the main power supply is OFF.
 As a result, even if the position of the robot 10 is moved by an external force (for example, manual handling by a user) while the main power supply is OFF, the robot 10 can still estimate its self-position.
 Furthermore, when the main power supply of the robot 10 is turned ON, this eliminates the need to perform matching at every position on the map, so star reckoning can converge more easily. It may also become possible to omit a designation operation by the user when the main power supply is turned ON. The modification of the robot 10 according to the present disclosure has been described above.
 <<4. Hardware configuration example>>
 The embodiments according to the present disclosure have been described above. The information processing described above is realized by the cooperation of software and the hardware of the server 20 described below. Note that the hardware configuration described below is also applicable to the robot 10.
 FIG. 9 is a block diagram showing the hardware configuration of the server 20. The server 20 includes a CPU (Central Processing Unit) 2001, a ROM (Read Only Memory) 2002, a RAM (Random Access Memory) 2003, and a host bus 2004. The server 20 also includes a bridge 2005, an external bus 2006, an interface 2007, an input device 2008, an output device 2010, a storage device (HDD) 2011, a drive 2012, and a communication device 2015.
 The CPU 2001 functions as an arithmetic processing device and a control device, and controls the overall operation within the server 20 according to various programs. The CPU 2001 may also be a microprocessor. The ROM 2002 stores programs, calculation parameters, and the like used by the CPU 2001. The RAM 2003 temporarily stores programs used in the execution of the CPU 2001, parameters that change as appropriate during that execution, and the like. These are interconnected by the host bus 2004, which is composed of a CPU bus and the like. The functions of the estimation unit 221 and the correction unit 225 described with reference to FIG. 2 can be realized by the cooperation of the CPU 2001, the ROM 2002, and the RAM 2003 with software.
 The host bus 2004 is connected via the bridge 2005 to the external bus 2006, such as a PCI (Peripheral Component Interconnect/Interface) bus. Note that the host bus 2004, the bridge 2005, and the external bus 2006 do not necessarily have to be configured separately, and their functions may be implemented in a single bus.
 The input device 2008 includes input means for a user to input information, such as a mouse, a keyboard, a touch panel, buttons, a microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 2001. By operating the input device 2008, the user of the server 20 can input various data to the server 20 and instruct it to perform processing operations.
 The output device 2010 includes, for example, display devices such as a liquid crystal display device, an OLED device, and lamps, as well as audio output devices such as speakers and headphones. The output device 2010 outputs, for example, reproduced content. Specifically, the display device displays various information such as reproduced video data as text or images, while the audio output device converts reproduced audio data and the like into sound and outputs it.
 The storage device 2011 is a device for storing data. The storage device 2011 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like. The storage device 2011 is composed of, for example, an HDD (Hard Disk Drive). The storage device 2011 drives a hard disk and stores programs executed by the CPU 2001 and various data.
 The drive 2012 is a reader/writer for storage media, and is built into or externally attached to the server 20. The drive 2012 reads information recorded on an attached removable storage medium 2018, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs it to the RAM 2003. The drive 2012 can also write information to the removable storage medium 2018.
 The communication device 2015 is, for example, a communication interface composed of a communication device or the like for connecting to the network 1. The communication device 2015 may be a wireless-LAN-compatible communication device, an LTE (Long Term Evolution)-compatible communication device, or a wire communication device that performs wired communication.
 <<5. Supplement>>
 Although the preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, the present disclosure is not limited to these examples. It is clear that a person having ordinary knowledge in the technical field to which the present disclosure belongs can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 For example, the robot 10, as an example of an information processing device, may have a configuration providing the various functions of the server 20 described in this specification. For example, the functions of the estimation unit 221 included in the server 20 may be realized by the estimation unit 161 included in the robot 10, and the robot 10 may further include a configuration having the functions of the correction unit 225.
 The server 20 according to the first embodiment may also acquire position information of a third robot estimated by each of the first robot 10A and the second robot 10B through star reckoning. The estimation unit 221 may then estimate the position of the third robot by superimposing the Gaussian distributions of the two acquired pieces of position information of the third robot.
 In addition, the steps in the processing of the information processing system in this specification do not necessarily have to be processed in chronological order following the order described in the flowcharts. For example, the steps in the processing of the information processing system may be processed in an order different from the order described in the flowcharts, or in parallel.
 It is also possible to create a computer program for causing hardware such as the CPU, the ROM, and the RAM built into the robot 10 and the server 20 to exhibit functions equivalent to those of the respective configurations of the robot 10 and the server 20 described above. A non-transitory storage medium storing the computer program is also provided.
 Furthermore, the effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can produce other effects that are obvious to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 Note that the following configurations also belong to the technical scope of the present disclosure.
 (1)
 An information processing device comprising:
 an acquisition unit that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and
 an estimation unit that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the plurality of pieces of position information of the arbitrary mobile body acquired by the acquisition unit.
 (2)
 The information processing device according to (1), wherein
 the arbitrary mobile body is included in the plurality of mobile bodies, and
 the acquisition unit acquires information related to the self-position of the arbitrary mobile body and position information of the arbitrary mobile body estimated by another mobile body, different from the arbitrary mobile body, among the plurality of mobile bodies.
 (3)
 The information processing device according to (2), wherein
 the acquisition unit acquires information about the time at which the other mobile body acquired the position information of the arbitrary mobile body.
 (4)
 The information processing device according to (2) or (3), wherein
 the estimation unit estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the position information of the arbitrary mobile body estimated by the other mobile body and the information related to the self-position of the arbitrary mobile body acquired at the time at which the other mobile body acquired the position information of the arbitrary mobile body.
 (5)
 The information processing device according to any one of (1) to (4), wherein
 the acquisition unit acquires information related to the self-position of the arbitrary mobile body based on sensing information obtained by an internal sensor mounted on the arbitrary mobile body.
 (6)
 The information processing device according to (5), wherein
 the acquisition unit acquires position information of the arbitrary mobile body estimated based on sensing information obtained by an external sensor mounted on the other mobile body.
 (7)
 The information processing device according to (6), wherein
 the acquisition unit acquires position information of the arbitrary mobile body estimated based on sensing information obtained by the external sensor mounted on the other mobile body when the arbitrary mobile body and the other mobile body come within a predetermined distance of each other.
 (8)
 The information processing device according to any one of (1) to (7), further comprising
 a correction unit that corrects the position information of the arbitrary mobile body to the position information estimated by the estimation unit.
 (9)
 The information processing device according to (8), wherein
 the acquisition unit acquires position information of the arbitrary mobile body obtained by an external sensor mounted on the arbitrary mobile body detecting a feature installed in a predetermined area, and
 the correction unit corrects the position information of the arbitrary mobile body to the position information of the arbitrary mobile body obtained by the external sensor detecting the feature.
 (10)
 The information processing device according to (9), wherein
 the correction unit continuously corrects the position information of the arbitrary mobile body.
 (11)
 The information processing device according to (10), wherein
 the correction unit continuously corrects the position information of the arbitrary mobile body according to the movement distance of the arbitrary mobile body.
 (12)
 The information processing device according to (11), wherein
 the estimation unit estimates, during movement of the arbitrary mobile body, the position information in the case where the position information of the arbitrary mobile body is not corrected and the position information in the case where the position information of the arbitrary mobile body is corrected, and
 the correction unit corrects the position information of the arbitrary mobile body based on the position information in the case where the position information is not corrected and the position information in the case where the position information is corrected, as estimated by the estimation unit.
 (13)
 The information processing device according to (12), wherein
 the correction unit continuously corrects the position information of the arbitrary mobile body during the interval in which the arbitrary mobile body travels from the area in which one feature is installed to the area in which the next feature is installed.
 (14)
 The information processing device according to any one of (1) to (13), wherein
 the plurality of mobile bodies have wheels, and their self-positions are constantly estimated using regenerative energy generated by the rotation of the wheels.
 (15)
 An information processing method executed by a computer, comprising:
 acquiring a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and
 estimating the position information of the arbitrary mobile body by superimposing the probability distributions of the acquired plurality of pieces of position information of the arbitrary mobile body.
 (16)
 A program for causing a computer to realize:
 an acquisition function of acquiring a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and
 an estimation function of estimating the position information of the arbitrary mobile body by superimposing the probability distributions of the plurality of pieces of position information of the arbitrary mobile body acquired by the acquisition function.
 1  network
 10  robot
 110  camera
 120  IMU
 130  wheel encoder
 140  storage unit
 150  communication unit
 160  control unit
 161  estimation unit
 20  server
 210  communication unit
 220  control unit
 221  estimation unit
 225  correction unit

Claims (16)

  1.  An information processing device comprising:
     an acquisition unit that acquires a plurality of pieces of position information of an arbitrary mobile body based on sensing information obtained by sensors mounted on each of a plurality of mobile bodies; and
     an estimation unit that estimates the position information of the arbitrary mobile body by superimposing the probability distributions of the plurality of pieces of position information of the arbitrary mobile body acquired by the acquisition unit.
  2.  The information processing device according to claim 1, wherein
     the arbitrary mobile body is included in the plurality of mobile bodies, and
     the acquisition unit acquires information related to the self-position of the arbitrary mobile body and position information of the arbitrary mobile body estimated by another mobile body, different from the arbitrary mobile body, among the plurality of mobile bodies.
  3.  The information processing device according to claim 2, wherein
     the acquisition unit acquires information about the time at which the other mobile body acquired the position information of the arbitrary mobile body.
  4.  前記推定部は、
     前記他の移動体により推定された前記任意の移動体の位置情報と、前記他の移動体が前記任意の移動体の位置情報を取得した時刻において取得された前記任意の移動体の自己位置に係る情報との確率分布を重ね合わせることにより、前記任意の移動体の位置情報を推定する、
    請求項3に記載の情報処理装置。
    The estimation unit
    The position information of the arbitrary mobile body estimated by the other mobile body and the self-position of the arbitrary mobile body acquired at the time when the other mobile body acquired the position information of the arbitrary mobile body estimating the position information of the arbitrary moving object by superimposing the probability distribution with such information;
    The information processing apparatus according to claim 3.
  5.  前記取得部は、
     前記任意の移動体に搭載される内界センサにより得られたセンシング情報に基づく前記任意の移動体の自己位置に係る情報を取得する、
    請求項4に記載の情報処理装置。
    The acquisition unit
    Acquiring information related to the self-position of the arbitrary mobile body based on sensing information obtained by an internal sensor mounted on the arbitrary mobile body;
    The information processing apparatus according to claim 4.
  6.  前記取得部は、
     前記他の移動体に搭載される外界センサにより得られたセンシング情報に基づき推定された前記任意の移動体の位置情報を取得する、
    請求項5に記載の情報処理装置。
    The acquisition unit
    Acquiring position information of the arbitrary mobile body estimated based on sensing information obtained by an external sensor mounted on the other mobile body;
    The information processing device according to claim 5 .
  7.  前記取得部は、
     前記任意の移動体と、前記他の移動体が所定の距離内になった際に前記他の移動体に搭載される前記外界センサにより得られたセンシング情報に基づき推定された前記任意の移動体の位置情報を取得する、
    請求項6に記載の情報処理装置。
    The acquisition unit
    The arbitrary mobile body estimated based on the sensing information obtained by the external sensor mounted on the other mobile body when the arbitrary mobile body and the other mobile body are within a predetermined distance. to get the location of
    The information processing device according to claim 6 .
  8.  前記任意の移動体の位置情報を前記推定部により推定された位置情報に補正する補正部、
    を更に備える、
    請求項7に記載の情報処理装置。
    a correction unit that corrects the position information of the arbitrary moving body to the position information estimated by the estimation unit;
    further comprising
    The information processing apparatus according to claim 7.
  9.  前記取得部は、
     前記任意の移動体に搭載される外界センサが予め定められたエリアに設置された地物を検出することで得られた前記任意の移動体の位置情報を取得し、
     前記補正部は、
     前記任意の移動体の位置情報を前記外界センサが前記地物を検出することで得られた前記任意の移動体の位置情報に補正する、 
    請求項8に記載の情報処理装置。
    The acquisition unit
    Acquiring position information of the arbitrary mobile body obtained by detecting a feature installed in a predetermined area by an external sensor mounted on the arbitrary mobile body,
    The correction unit is
    correcting the position information of the arbitrary moving body to the position information of the arbitrary moving body obtained by the external sensor detecting the feature;
    The information processing apparatus according to claim 8 .
  10.  前記補正部は、
     前記任意の移動体の位置情報を連続的に補正する、
    請求項9に記載の情報処理装置。
    The correction unit is
    continuously correcting the position information of the arbitrary moving object;
    The information processing apparatus according to claim 9 .
  11.  前記補正部は、
     前記任意の移動体の位置情報を前記任意の移動体の移動距離に応じて連続的に補正する、
    請求項10に記載の情報処理装置。
    The correction unit is
    continuously correcting the position information of the arbitrary moving body according to the movement distance of the arbitrary moving body;
    The information processing apparatus according to claim 10.
  12.  前記推定部は、
     前記任意の移動体の移動時において、前記任意の移動体の位置情報を補正しなかった場合の位置情報と、前記任意の移動体の位置情報を補正した場合の位置情報とを推定し、
     前記補正部は、
     前記推定部により推定された前記任意の移動体の位置情報を補正しなかった場合の位置情報と、前記任意の移動体の位置情報を補正した場合の位置情報とに基づき、前記任意の移動体の位置情報を補正する、
    請求項11に記載の情報処理装置。
    The estimation unit
    estimating position information when the position information of the arbitrary moving body is not corrected and position information when the position information of the arbitrary moving body is corrected when the arbitrary moving body moves;
    The correction unit is
    the arbitrary moving object based on the position information when the position information of the arbitrary moving object estimated by the estimation unit is not corrected and the position information when the position information of the arbitrary moving object is corrected; to correct the location information of
    The information processing device according to claim 11 .
  13.  前記補正部は、
     前記地物が設置されたエリアから次の地物が設置されるエリアに前記任意の移動体が到達するまでに、前記任意の移動体の位置情報を連続的に補正する、
    請求項12に記載の情報処理装置。
    The correction unit is
    Continuously correcting the position information of the arbitrary moving body until the arbitrary moving body reaches the area where the next feature is installed from the area where the feature is installed;
    The information processing apparatus according to claim 12.
  14.  前記複数の移動体は、車輪を有し、前記車輪の回転による回生エネルギーを用いて自己位置が常時推定される、
    請求項13に記載の情報処理装置。
    The plurality of mobile bodies have wheels, and self-positions are constantly estimated using regenerative energy generated by the rotation of the wheels.
    The information processing apparatus according to claim 13.
  15.  複数の移動体の各々に搭載されるセンサにより得られたセンシング情報に基づく任意の移動体の複数の位置情報を取得することと、
     取得された前記任意の移動体の複数の位置情報の確率分布を重ね合わせることにより、前記任意の移動体の位置情報を推定することと、
    を含む、コンピュータにより実行される情報処理方法。
    Acquiring a plurality of pieces of position information of an arbitrary moving object based on sensing information obtained by sensors mounted on each of the plurality of moving objects;
    estimating the location information of the arbitrary moving object by superimposing the obtained probability distributions of the plurality of location information of the arbitrary moving object;
    A computer-implemented information processing method comprising:
  16.  コンピュータに、
     複数の移動体の各々に搭載されるセンサにより得られたセンシング情報に基づく任意の移動体の複数の位置情報を取得する取得機能と、
     前記取得機能により取得された前記任意の移動体の複数の位置情報の確率分布を重ね合わせることにより、前記任意の移動体の位置情報を推定する推定機能と、
    を実現させる、プログラム。
    to the computer,
    an acquisition function for acquiring a plurality of pieces of position information of an arbitrary moving object based on sensing information obtained by sensors mounted on each of the plurality of moving objects;
    an estimating function for estimating the location information of the arbitrary moving object by superimposing the probability distributions of the plurality of location information of the arbitrary moving object acquired by the acquiring function;
    A program that realizes
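Claims 2 to 7 refine the fusion of claim 1 with two practical details: the observing mobile body records the time at which it sensed the arbitrary mobile body, and such an observation is produced only when the two bodies come within a predetermined distance. The sketch below shows one way to realize the time matching and the proximity gate; the data structures and the threshold value are assumptions for illustration, not taken from the publication:

```python
import bisect
from dataclasses import dataclass

PROXIMITY_THRESHOLD_M = 3.0  # assumed value for the "predetermined distance"

@dataclass
class TimedPose:
    stamp: float  # seconds
    x: float
    y: float

class SelfPoseHistory:
    """Timestamped dead-reckoning poses kept for matching against the time
    at which another robot observed this robot (claims 3 and 4)."""

    def __init__(self, max_len: int = 1000):
        self.stamps: list[float] = []
        self.poses: list[TimedPose] = []
        self.max_len = max_len

    def push(self, pose: TimedPose) -> None:
        self.stamps.append(pose.stamp)
        self.poses.append(pose)
        if len(self.stamps) > self.max_len:  # keep a bounded history
            self.stamps.pop(0)
            self.poses.pop(0)

    def at(self, stamp: float) -> TimedPose:
        """Return the stored self-pose nearest the observation timestamp
        (the history is assumed non-empty)."""
        i = bisect.bisect_left(self.stamps, stamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.stamps)]
        return self.poses[min(candidates, key=lambda j: abs(self.stamps[j] - stamp))]

def within_proximity(a: TimedPose, b: TimedPose,
                     threshold: float = PROXIMITY_THRESHOLD_M) -> bool:
    """Gate of claim 7: use the external observation only if the robots
    were within the predetermined distance of each other."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= threshold
```

The self-pose returned by `at()` is the one whose probability distribution would be superimposed with the external observation, as recited in claim 4.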
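Claim 9 overwrites the estimate when the mobile body's own external sensor detects a feature installed at a known point in a predetermined area. A minimal sketch, assuming a survey map from feature ID to world position and a sensor that reports the feature's offset in the robot frame; both the map and the offset convention are assumptions for illustration:

```python
import numpy as np

# Assumed survey data: world coordinates of installed features, keyed by ID.
FEATURE_MAP = {
    "marker_07": np.array([12.0, 3.5]),
}

def correct_from_feature(feature_id, offset_in_robot_frame, heading_rad):
    """Recover the robot's world position from a detected feature.

    world_feature = world_robot + R(heading) @ offset, therefore
    world_robot   = world_feature - R(heading) @ offset.
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rotation = np.array([[c, -s], [s, c]])
    return FEATURE_MAP[feature_id] - rotation @ np.asarray(offset_in_robot_frame)

# Feature seen 1.0 m ahead and 0.2 m to the left while heading along +x:
corrected = correct_from_feature("marker_07", [1.0, 0.2], 0.0)
# corrected == [11.0, 3.3]; the correction unit then adopts this position.
```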
PCT/JP2022/034884 2021-11-02 2022-09-20 Information processing device, information processing method, and program WO2023079845A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-179385 2021-11-02
JP2021179385 2021-11-02

Publications (1)

Publication Number Publication Date
WO2023079845A1 true WO2023079845A1 (en) 2023-05-11

Family

ID=86241281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/034884 WO2023079845A1 (en) 2021-11-02 2022-09-20 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2023079845A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005094858A (en) * 2003-09-12 2005-04-07 Sony Corp Travelling apparatus and its control method
JP2011002324A (en) * 2009-06-18 2011-01-06 Clarion Co Ltd Device and program for detecting position
JP2018155731A * 2017-03-16 2018-10-04 Denso Corp Self-position estimation device
WO2018212294A1 * 2017-05-19 2018-11-22 Pioneer Corporation Self-position estimation device, control method, program, and storage medium
JP2021008258A * 2019-07-01 2021-01-28 Fujitsu Ltd Smart object knowledge sharing
WO2021049227A1 * 2019-09-13 2021-03-18 Sony Corporation Information processing system, information processing device, and information processing program
JP2021149229A * 2020-03-17 2021-09-27 Murata Machinery Ltd Mobile body and location estimation method

Similar Documents

Publication Publication Date Title
US11016116B2 (en) Correction of accumulated errors in inertial measurement units attached to a user
US20190187784A1 (en) Calibration of Inertial Measurement Units Attached to Arms of a User and to a Head Mounted Device
CN109885080B (en) Autonomous control system and autonomous control method
CN110806215B (en) Vehicle positioning method, device, equipment and storage medium
US20200033937A1 (en) Calibration of Measurement Units in Alignment with a Skeleton Model to Control a Computer System
US20180335834A1 (en) Tracking torso orientation to generate inputs for computer systems
JP6907525B2 (en) Indoor position detection and navigation system for moving objects, indoor position detection and navigation methods, and indoor position detection and navigation programs
CN112527102A (en) Head-mounted all-in-one machine system and 6DoF tracking method and device thereof
JP2008006519A (en) Robot device and method for controlling robot device
CN108885343B (en) System and method for correcting vehicle induced directional changes
CN112580582B (en) Action learning method, action learning device, action learning medium and electronic equipment
WO2014039309A1 (en) Robot control based on vision tracking of a remote mobile device having a camera
JP2019078560A (en) Gyro sensor offset correcting device, offset correction program, and pedestrian autonomous navigation device
US20220253131A1 (en) Systems and methods for object tracking using fused data
JP2023509291A (en) Joint infrared and visible light visual inertial object tracking
WO2018216342A1 (en) Information processing apparatus, information processing method, and program
US20200334837A1 (en) Method for predicting a motion of an object, method for calibrating a motion model, method for deriving a predefined quantity and method for generating a virtual reality view
WO2023079845A1 (en) Information processing device, information processing method, and program
CN116922387B (en) Real-time control method and system for photographic robot
US11926038B2 (en) Information processing apparatus and information processing method
JP2013217793A (en) Off-set calculation device, off-set calculation method, program, and information processing device
Fattah et al. Dynamic map generating rescuer offering surveillance robotic system with autonomous path feedback capability
KR20220037212A (en) Robust stereo visual inertial navigation apparatus and method
JP2020160594A (en) Self-position estimating method
WO2021256310A1 (en) Information processing device, terminal device, information processing system, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22889669

Country of ref document: EP

Kind code of ref document: A1