WO2023149376A1 - Control system, control method, and storage medium - Google Patents

Control system, control method, and storage medium

Info

Publication number
WO2023149376A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
space
autonomous mobile
mobile body
route
Prior art date
Application number
PCT/JP2023/002676
Other languages
French (fr)
Japanese (ja)
Inventor
翼 仲谷
洋平 佐藤
Original Assignee
Canon Inc. (キヤノン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2023002447A external-priority patent/JP2023112670A/en
Application filed by Canon Inc. (キヤノン株式会社)
Publication of WO2023149376A1 publication Critical patent/WO2023149376A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/907: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/909: Retrieval using geographical or spatial information, e.g. location
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles

Definitions

  • the present invention relates to a control system, a control method, a storage medium, etc. related to a three-dimensional space.
  • a single processor divides a spatio-temporal region in time and space according to spatio-temporal management data provided by a user, generating a plurality of spatio-temporal divided regions. In consideration of the temporal and spatial proximity of those regions, an identifier expressed as a one-dimensional integer value is assigned to uniquely identify each of the plurality of spatio-temporal divided regions.
  • a spatio-temporal data management system then determines the arrangement of time-series data so that data belonging to spatio-temporal divided regions with similar identifiers are placed close together on the storage device.
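  • As a concrete illustration of this kind of scheme (a sketch of the general technique, not code from Patent Document 1), a Morton-style bit interleaving of quantized x/y/t indices yields a one-dimensional integer identifier in which spatio-temporally close divided regions tend to receive numerically close values:

```python
def interleave_bits(x: int, y: int, t: int, bits: int = 10) -> int:
    """Build a Morton-style code by interleaving `bits` bits of each index.

    Cells that differ only slightly in x, y or t differ only in the
    low-order bits of the code, so nearby cells get nearby identifiers.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)       # x bits land at positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)   # y bits at 1, 4, 7, ...
        code |= ((t >> i) & 1) << (3 * i + 2)   # t bits at 2, 5, 8, ...
    return code

# Two cells adjacent in x: the codes differ only in their low-order bits.
print(interleave_bits(5, 3, 1), interleave_bits(6, 3, 1))
```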
  • in Patent Document 1, however, the data for a generated region can be looked up by its identifier only within the processor that generated it. Users of other systems therefore cannot utilize the information of the spatial divided regions. Moreover, for a system user to use the information of a spatio-temporal divided region, a means for accurately detecting his or her own position was required.
  • the present invention provides a control system that can stably detect its own position.
  • a control system includes formatting means for assigning a unique identifier to a three-dimensional space defined by a predetermined coordinate system, and for formatting and storing spatial information relating to the state and time of an object existing in the space in association with the unique identifier, wherein the formatting means registers key frame information as the spatial information.
  • FIG. 1 is a diagram showing an overall configuration example of an autonomous mobile body control system according to a first embodiment
  • FIG. 2(A) is a diagram showing an example of an input screen when a user inputs position information, and FIG. 2(B) is a diagram showing an example of a selection screen for selecting an autonomous mobile body to be used.
  • FIG. 3(A) is a diagram showing an example of a screen for confirming the current position of an autonomous mobile body, and FIG. 3(B) is a diagram showing an example of a map display screen when confirming the current position of an autonomous mobile body.
  • FIG. 4 is a functional block diagram showing an internal configuration example of the devices 10 to 15 in FIG. 1.
  • FIG. 6 is a perspective view showing a mechanical configuration example of the autonomous mobile body 12 according to Embodiment 1.
  • FIG. 7 is a block diagram showing a specific hardware configuration example of a control unit 10-2, a control unit 11-2, a control unit 12-2, a control unit 13-2, a control unit 14-3, and a control unit 15-2.
  • FIG. 8 is a sequence diagram illustrating processing executed by the autonomous mobile body control system according to the first embodiment.
  • FIG. 9 is a sequence diagram continued from FIG. 8;
  • FIG. 10 is a sequence diagram continued from FIG. 9;
  • FIG. 11(A) is a diagram showing latitude/longitude information of the earth, and FIG. 11(B) is a perspective view showing the predetermined space 100 of FIG. 11(A).
  • FIG. 12 is a diagram schematically showing spatial information in the space 100.
  • FIG. 13(A) is a diagram showing route information using map information, FIG. 13(B) is a diagram showing route information as position point cloud data on the map information, and FIG. 13(C) is a diagram showing route information displayed on a map using unique identifiers.
  • FIG. 14 is a functional block diagram related to key frame information storage processing according to the second embodiment.
  • FIG. 15 is a sequence diagram for explaining key frame information storage processing according to the second embodiment.
  • FIG. 16 is an image diagram of format route information 900 of an autonomous mobile body 12 according to Embodiment 2.
  • FIG. 17 is a sequence diagram illustrating the operation of an autonomous mobile body 12 that moves using the format route information 900 of FIG. 16 and the system control device 10 that controls the autonomous mobile body 12;
  • the mobile body may be one in which the user can operate at least a part of its movement. That is, for example, various displays related to the moving route and the like may be shown to the user, and the user may perform a part of the driving operation of the moving body with reference to the display.
  • FIG. 1 is a diagram showing an overall configuration example of an autonomous mobile body control system according to Embodiment 1 of the present invention.
  • the autonomous mobile body control system is hereinafter also abbreviated as the control system.
  • the user interface 11 means a user terminal device.
  • each device shown in FIG. 1 is connected via the Internet 16 by respective network connection units, which will be described later.
  • network connection units such as LAN (Local Area Network) may be used.
  • part of the system control device 10, the user interface 11, the route determining device 13, the conversion information holding device 14, etc. may be configured as the same device.
  • the system control device 10, the user interface 11, the autonomous mobile body 12, the route determination device 13, the conversion information holding device 14, and the sensor node 15 each contain an information processing device including a CPU as a computer and storage media such as ROM, RAM, and HDD. Details of the function and internal configuration of each device will be described later.
  • screen images displayed on the user interface 11 when the user browses the current position of the autonomous mobile body 12 will be described with reference to FIGS. 3(A) and 3(B). Also, how the user operates the application in the autonomous mobile body control system will be explained using an example.
  • for convenience, the map display will be described on a two-dimensional plane, but height information can also be input. That is, in this embodiment the application can display a 3D map.
  • Fig. 2(A) is a diagram showing an example of an input screen when a user inputs position information
  • Fig. 2(B) is a diagram showing an example of a selection screen for selecting an autonomous mobile body to be used.
  • the WEB page of the system control device 10 is displayed.
  • the input screen 40 has a list display button 48 for displaying a list of autonomous mobile bodies to be used (for example, mobilities capable of automatic operation).
  • a mobility list display screen 47 is displayed as shown in FIG. 2(B).
  • the user first selects the autonomous mobile body to be used (for example, a mobility capable of automatic operation) on the list display screen 47.
  • on the list display screen 47, for example, mobilities M1 to M3 are displayed in a selectable manner, but the number is not limited to this.
  • when a mobility is selected, the screen automatically returns to the input screen 40 of FIG. 2(A), and the selected mobility name is displayed on the list display button 48. After that, the user inputs the location to be set as the starting point in the input field 41 of "starting point".
  • the user inputs the location to be set as a transit point in the input field 42 of "transit point 1". Waypoints can be added: when the add waypoint button 44 is pressed once, an input field 46 for "waypoint 2" is additionally displayed, and the waypoint to be added can be input.
  • each further press displays additional input fields 46 such as "waypoint 3" and "waypoint 4", so a plurality of additional waypoints can be input. The user also inputs the place to be set as the arrival point in the input field 43 of "arrival point". Although not shown in the figure, when the input fields 41 to 43, 46, etc. are clicked, a keyboard or the like for inputting characters is temporarily displayed.
  • the user can set the movement route of the autonomous mobile body 12 by pressing the decision button 45.
  • "AAA” is set as the departure point
  • "BBB” is set as the transit point 1
  • "CCC” is set as the arrival point.
  • the text entered in the input fields may be, for example, an address, latitude and longitude information (hereinafter also referred to as latitude/longitude information), a store name, a telephone number, or any other information that indicates a specific location.
  • FIG. 3A is a diagram showing an example of a screen for confirming the current position of an autonomous mobile body
  • FIG. 3B is a diagram showing an example of a map display screen when confirming the current position of an autonomous mobile body.
  • Reference numeral 50 in FIG. 3(A) denotes a confirmation screen, which is displayed when the user operates an operation button (not shown) after setting the movement route of the autonomous mobile body 12 on the screen shown in FIG. 2(A).
  • the current position of the autonomous mobile body 12 is displayed on the WEB page of the user interface 11, like the current position 56, for example. Therefore, the user can easily grasp the current position.
  • the user can update the screen display information to display the latest state. Also, the user can change the place of departure, the waypoint, or the place of arrival by pressing the change waypoint/arrival place button 54. That is, the user can change the location by inputting the location to be reset in the input field 51 of "departure point", the input field 52 of "route point 1", and the input field 53 of "arrival point".
  • FIG. 3(B) shows an example of a map display screen 60 that switches from the confirmation screen 50 when the map display button 55 of FIG. 3(A) is pressed.
  • the current location of the autonomous mobile body 12 can be confirmed more easily by displaying the current location 62 on the map.
  • by pressing the return button 61, the display screen can be returned to the confirmation screen 50 of FIG. 3(A).
  • the user can easily set a movement route for moving the autonomous mobile body 12 from a predetermined location to a predetermined location.
  • a route setting application can also be applied to, for example, a taxi dispatch service, a drone home delivery service, and the like.
  • FIG. 4 is a functional block diagram showing an internal configuration example of the devices 10 to 15 in FIG. 1. Some of the functional blocks shown in FIG. 4 are realized by causing a computer (not shown) included in each device to execute a computer program stored in a memory (not shown) as a storage medium.
  • however, some or all of the functional blocks may be realized by hardware, such as an ASIC (application-specific integrated circuit) or a DSP (digital signal processor).
  • each functional block shown in FIG. 4 may not be built in the same housing, and may be configured by separate devices connected to each other via signal paths.
  • the user interface 11 includes an operation unit 11-1, a control unit 11-2, a display unit 11-3, an information storage unit (memory/HDD) 11-4, and a network connection unit 11-5.
  • the user interface 11 is, for example, an information processing device such as a smart phone, a tablet terminal, or a smart watch.
  • the operation unit 11-1 is composed of a touch panel, key buttons, etc., and is used for data input.
  • the display unit 11-3 is, for example, a liquid crystal screen, and is used to display route information and other data.
  • the display screen of the user interface 11 shown in FIGS. 2 and 3 is displayed on the display unit 11-3.
  • the user can select a moving route, input information, confirm information, and the like.
  • the operation unit 11-1 and the display unit 11-3 provide an operation interface for the user to actually operate.
  • a touch panel may be used as both the operation section and the display section.
  • the control unit 11-2 incorporates a CPU as a computer, manages various applications in the user interface 11, manages modes such as information input and information confirmation, and controls communication processing. It also controls the processing in each unit of the user interface 11.
  • the information storage unit (memory/HDD) 11-4 is a recording medium for holding necessary information such as computer programs to be executed by the CPU.
  • a network connection unit 11-5 controls communication performed via the Internet, LAN, wireless LAN, or the like.
  • the user interface 11 of the present embodiment displays the departure point, waypoint, and arrival point input screen 40 on the browser screen provided by the system control device 10.
  • the user interface 11 can display the current position of the autonomous mobile body 12 by displaying the confirmation screen 50 and the map display screen 60 on the browser screen.
  • the route determination device 13 includes a map information management unit 13-1, a control unit 13-2, a position/route information management unit 13-3, an information storage unit (memory/HDD) 13-4, and a network connection unit 13-5.
  • the map information management unit 13-1 holds wide-area map information, searches for route information indicating a route on the map based on designated position information, and transmits the route information found to the position/route information management unit 13-3.
  • the map information is three-dimensional spatial map information including information such as topography and latitude/longitude/altitude.
  • the map information also includes roadways, sidewalks, direction of travel, and regulation information related to road traffic laws such as traffic regulations.
  • the map information also includes time-varying traffic regulation information, such as one-way streets depending on the time of day and pedestrian-only roads depending on the time of day.
  • the control unit 13-2 incorporates a CPU as a computer, and controls processing in each unit within the route determination device 13.
  • the position/route information management unit 13-3 manages the position information of the autonomous mobile body acquired via the network connection unit 13-5, transmits that position information to the map information management unit 13-1, and manages the route information obtained as the search result from the map information management unit 13-1.
  • the control unit 13-2 converts the route information managed by the position/route information management unit 13-3 into a predetermined data format according to a request from the external system, and transmits the converted data to the external system.
  • as described above, the route determination device 13 is configured to search for a route that complies with the Road Traffic Act and the like based on designated position information, and to output the route information in a predetermined data format.
  • the conversion information holding device 14 in FIG. 4 includes a position/route information management unit 14-1, a unique identifier management unit 14-2, a control unit 14-3, a format database 14-4, an information storage unit (memory/HDD) 14-5, and a network connection unit 14-6.
  • the conversion information holding device 14 assigns a unique identifier to a three-dimensional space defined by latitude/longitude/height, and can function as formatting means that formats and stores spatial information about the state and time of objects existing in the space in association with the unique identifier.
  • the position/route information management unit 14-1 manages predetermined position information acquired through the network connection unit 14-6, and transmits the position information to the control unit 14-3 in accordance with the request of the control unit 14-3.
  • the control unit 14-3 incorporates a CPU as a computer, and controls processing in each unit within the conversion information holding device 14.
  • based on the position information acquired from the position/route information management unit 14-1 and the format information managed by the format database 14-4, the control unit 14-3 converts the position information into a unique identifier defined by the format and transmits it to the unique identifier management unit 14-2.
  • in the format, an identifier (hereinafter referred to as a unique identifier) is assigned to each space starting from a predetermined position, and the space is managed by the unique identifier.
  • the unique identifier management unit 14-2 manages the unique identifier converted by the control unit 14-3 and transmits it through the network connection unit 14-6.
  • the format database 14-4 manages format information and transmits the format information to the control unit 14-3 in accordance with a request from the control unit 14-3.
  • the conversion information holding device 14 manages information related to space acquired from external systems, devices, and networks in association with unique identifiers, and provides information on the unique identifiers and the associated spaces to external systems, devices, and networks.
  • in this way, the conversion information holding device 14 acquires the unique identifier and the in-space information based on predetermined position information, and manages and provides that information so that it can be shared with the external systems, devices, and networks connected to it. Further, the conversion information holding device 14 converts position information specified by the system control device 10 into a unique identifier and provides it to the system control device 10.
  • the system control device 10 includes a unique identifier management unit 10-1, a control unit 10-2, a position/route information management unit 10-3, an information storage unit (memory/HDD) 10-4, and a network connection unit 10-5.
  • the position/route information management unit 10-3 holds simple map information that associates terrain information with latitude/longitude information, and manages predetermined position information and route information obtained through the network connection unit 10-5.
  • the position/route information management unit 10-3 can also divide the route information at predetermined intervals and generate position information such as the latitude/longitude of the divided locations.
  • the unique identifier management unit 10-1 manages information obtained by converting position information and route information into unique identifiers.
  • the control unit 10-2 has a CPU as a computer, controls the position information, route information, and unique identifier communication functions of the system control device 10, and controls processing in each component of the system control device 10.
  • the control unit 10-2 provides the user interface 11 with the WEB page and transmits predetermined position information acquired from the WEB page to the route determination device 13. Further, it acquires predetermined route information from the route determination device 13 and transmits each piece of position information of the route information to the conversion information holding device 14. Then, the route information converted into unique identifiers, acquired from the conversion information holding device 14, is transmitted to the autonomous mobile body 12.
  • the system control device 10 is configured to acquire predetermined position information designated by the user, transmit and receive position information and route information, generate position information, and transmit and receive route information using unique identifiers.
  • the system control device 10 collects route information necessary for the autonomous mobile body 12 to move autonomously, and uses a unique identifier for the autonomous mobile body 12. provide route information.
  • the system control device 10, the route determination device 13, and the conversion information holding device 14 function as servers, for example.
  • the autonomous moving body 12 includes a detection unit 12-1, a control unit 12-2, a direction control unit 12-3, an information storage unit (memory/HDD) 12-4, a network connection unit 12-5, and a drive unit 12-6.
  • the detection unit 12-1 has, for example, a plurality of imaging elements, and has a function of performing distance measurement based on the phase differences between the imaging signals obtained from them, thereby acquiring detection information about obstacles such as the surrounding terrain and building walls.
  • the detection unit 12-1 also has a self-position detection function such as GPS (Global Positioning System) and a direction detection function such as a geomagnetic sensor. Furthermore, based on the acquired detection information, self-position estimation information, and direction detection information, the control unit 12-2 can generate a three-dimensional map of cyberspace.
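  • As a rough illustration of distance measurement from the phase difference (disparity) between two imaging signals, the classic pinhole-stereo relation can be sketched as follows; the focal length, baseline, and disparity values are illustrative assumptions, not values from the embodiment:

```python
def stereo_distance(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole-stereo relation: distance Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# e.g. f = 1400 px, baseline = 0.12 m, disparity = 35 px -> 4.8 m to the obstacle
print(stereo_distance(1400.0, 0.12, 35.0))
```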
  • a 3D map of cyberspace is one that can express, as digital data, spatial information equivalent to the positions of features in the real world.
  • that is, the autonomous mobile body 12 existing in the real world and information on the features around it are held as spatially equivalent information in digital data. Therefore, by using this digital data, efficient movement is possible.
  • FIG. 5A is a diagram showing the spatial positional relationship between the autonomous mobile body 12 in the real world and a pillar 99 that exists as feature information around it.
  • FIG. 5B is a diagram showing a state in which the autonomous mobile body 12 and the pillar 99 are mapped into an arbitrary XYZ coordinate system space with the position P0 as the origin.
  • in FIG. 5A, the position of the autonomous mobile body 12 is identified as Δ0 from the latitude and longitude position information acquired by a GPS receiver or the like (not shown) mounted on the autonomous mobile body 12. Also, the orientation of the autonomous mobile body 12 is specified by the difference between the azimuth ΔY acquired by an electronic compass (not shown) or the like and the moving direction 12Y of the autonomous mobile body 12.
  • the position of the pillar 99 is specified as the position of the vertex 99-1 from position information measured in advance.
  • the distance measurement function of the autonomous mobile body 12 makes it possible to acquire the distance from Δ0 of the autonomous mobile body 12 to the vertex 99-1.
  • FIG. 5A shows the coordinates (Wx, Wy, Wz) of the vertex 99-1 in the coordinate system whose axes are aligned with the moving direction 12Y and whose origin is Δ0.
  • FIG. 5B shows a state in which the autonomous mobile body 12 and the pillar 99 are mapped in an arbitrary XYZ coordinate system space with P0 as the origin.
  • in this arbitrary XYZ coordinate system space, the autonomous mobile body 12 can be expressed as P1 and the pillar 99 as P2.
  • specifically, the position P1 of Δ0 in this space can be calculated from the latitude and longitude of Δ0 and the latitude and longitude of P0.
  • similarly, the position of the pillar 99 can be calculated as P2.
  • here, for simplicity, only the two objects, the autonomous mobile body 12 and the pillar 99, are represented in the three-dimensional map of cyberspace, but of course more objects can be treated in the same way.
  • in this way, a three-dimensional map is a mapping of the self-position and of objects in the real world into a three-dimensional space.
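  • A minimal sketch of this mapping, assuming a small-area equirectangular approximation (one plausible way to realize it; the coordinates below are made-up examples):

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def latlon_to_local_xyz(lat_deg, lon_deg, alt_m, origin_lat, origin_lon, origin_alt):
    """Map latitude/longitude/altitude to metres in a local frame with origin P0."""
    x = math.radians(lon_deg - origin_lon) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    y = math.radians(lat_deg - origin_lat) * EARTH_RADIUS_M
    z = alt_m - origin_alt
    return (x, y, z)

# P1: the mobile body relative to origin P0; the pillar becomes P2 the same way.
p1 = latlon_to_local_xyz(35.6586, 139.7454, 5.0, 35.6580, 139.7450, 0.0)
print(p1)
```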
  • the autonomous mobile body 12 stores machine-learned object detection result data in, for example, the information storage unit (memory/HDD) 12-4, and can detect objects using it. Detection information can also be acquired from an external system via the network connection unit 12-5 and reflected on the three-dimensional map.
  • the control unit 12-2 has a CPU as a computer, controls movement, direction change, and autonomous running functions of the autonomous mobile body 12, and controls processing in each component of the autonomous mobile body 12.
  • the direction control unit 12-3 changes the moving direction of the autonomous mobile body 12 by changing the driving direction of the driving unit 12-6.
  • the driving unit 12-6 is composed of a driving device such as a motor, and generates a propulsion force for the autonomous mobile body 12.
  • the autonomous mobile body 12 reflects its own position, detection information, and object detection information in the three-dimensional map, generates a route that maintains a certain distance from the surrounding terrain, buildings, and obstacles, and can thereby travel autonomously.
  • the route determination device 13 generates routes in consideration of, for example, regulatory information related to the Road Traffic Act.
  • the autonomous mobile body 12 more accurately detects the positions of surrounding obstacles on the route determined by the route determination device 13, and generates a route based on its own size so as to move without touching them.
  • the information storage unit (memory/HDD) 12-4 of the autonomous mobile body 12 can store the mobility type of the autonomous mobile body itself.
  • the type of mobility is the kind of mobile object, such as an automobile, a bicycle, or a drone. Format route information, which will be described later, can be generated based on this mobility type.
  • FIG. 6 is a perspective view showing a mechanical configuration example of the autonomous mobile body 12 according to the first embodiment.
  • the autonomous mobile body 12 will be described as an example of a traveling body having wheels, but is not limited to this, and may be a flying body such as a drone.
  • as shown in FIG. 6, the autonomous moving body 12 is equipped with the detection unit 12-1, the control unit 12-2, the direction control unit 12-3, the information storage unit (memory/HDD) 12-4, the network connection unit 12-5, and the drive unit 12-6, and these components are electrically connected to each other. At least two drive units 12-6 and direction control units 12-3 are provided in the autonomous mobile body 12.
  • the direction control unit 12-3 changes the moving direction of the autonomous mobile body 12 by rotating its shaft to change the direction of the drive unit 12-6, and the drive unit 12-6 moves the autonomous mobile body 12 forward and backward by rotating its shaft.
  • the configuration described with reference to FIG. 6 is an example, and the present invention is not limited to this.
  • an omniwheel or the like may be used to change the movement direction.
  • the autonomous mobile body 12 is, for example, a mobile body using SLAM (Simultaneous Localization and Mapping) technology. Based on the detection information detected by the detection unit 12-1 and the like, and on the detection information of external systems acquired via the Internet 16, it is configured so that it can autonomously move along a designated route.
  • the autonomous mobile body 12 can perform trace movement by following finely specified points, and can also move through roughly set points while generating route information by itself for the spaces between them. As described above, the autonomous mobile body 12 of this embodiment can autonomously move based on the route information using the unique identifiers provided by the system control device 10.
  • the sensor node 15 is an external system such as a video surveillance system, for example a roadside camera unit, and includes a detection unit 15-1, a control unit 15-2, an information storage unit (memory/HDD) 15-3, and a network connection unit 15-4.
  • the detection unit 15-1 is an imaging unit composed of, for example, a camera; it acquires detection information of the area it can observe, and has an object detection function and a distance measurement function.
  • the control unit 15-2 incorporates a CPU as a computer, controls the detection of the sensor node 15, data storage, and data transmission functions, and controls processing in each unit within the sensor node 15. Further, the detection information acquired by the detection unit 15-1 is stored in the information storage unit (memory/HDD) 15-3 and transmitted to the conversion information holding device 14 through the network connection unit 15-4.
  • the sensor node 15 is configured so that detection information such as image information detected by the detection unit 15-1, feature point information of detected objects, and position information can be stored in the information storage unit 15-3 and communicated. The sensor node 15 also provides the conversion information holding device 14 with detection information of the area it can observe.
  • FIG. 7 is a block diagram showing a specific hardware configuration example of the control unit 10-2, the control unit 11-2, the control unit 12-2, the control unit 13-2, the control unit 14-3, and the control unit 15-2. Note that the hardware configuration is not limited to that shown in FIG. 7, and not all of the blocks shown in FIG. 7 are required.
  • Reference numeral 21 denotes a CPU as a computer that manages the calculation and control of the information processing device.
  • the RAM 22 is a recording medium that functions as the main memory of the CPU 21, providing areas for executable programs, their execution, and data.
  • the ROM 23 is a recording medium in which an operation processing procedure (program) of the CPU 21 is recorded.
  • the ROM 23 includes a program ROM that records basic software (OS), which is a system program for controlling the information processing device, and a data ROM that records information necessary for operating the system. Note that an HDD 29, which will be described later, may be used instead of the ROM 23.
  • the network I/F 24 is a network interface (NETIF) that controls data transfer between information processing devices via the Internet 16 and diagnoses the connection status.
  • a video RAM (VRAM) 25 develops an image to be displayed on the screen of the LCD 26 and controls the display.
  • the LCD 26 is a display device such as a liquid crystal display (hereinafter referred to as LCD).
  • the controller 27 is a controller (hereinafter referred to as KBC) for controlling input signals from the external input device 28.
  • the external input device 28 (hereinafter referred to as KB) receives operations performed by the user; for example, a keyboard or a pointing device such as a mouse is used.
  • the HDD 29 is a hard disk drive (hereinafter referred to as HDD) and is used for storing application programs and various data.
  • the application program in this embodiment is a software program or the like that executes various processing functions in this embodiment.
  • the CDD 30 is an external input/output device (hereinafter referred to as CDD), such as a CD-ROM drive, a DVD drive, or a Blu-ray (registered trademark) disc drive, for inputting and outputting data to and from a removable medium 31 serving as a removable data recording medium.
  • the CDD 30 is used, for example, when reading the above application program from removable media.
  • Reference numeral 31 denotes a removable medium, such as a CD-ROM disc, DVD, or Blu-ray disc, that is read by the CDD 30.
  • the removable medium may be a magneto-optical recording medium (eg, MO), a semiconductor recording medium (eg, memory card), or the like. It is also possible to store the application programs and data stored in the HDD 29 in the removable medium 31 and use them.
  • Reference numeral 20 denotes a transmission bus (address bus, data bus, input/output bus, and control bus) for connecting the units described above.
  • FIG. 8 is a sequence diagram illustrating processing executed by the autonomous mobile body control system according to the first embodiment
  • FIG. 9 is a sequence diagram following FIG. 8
  • FIG. 10 is a sequence diagram following FIG. 9.
  • FIGS. 8 to 10 show the processing executed by each device from when the user inputs position information to the user interface 11 until the current position information of the autonomous mobile body 12 is received. The processes in FIGS. 8 to 10 are executed by the computers in the control units of the devices 10 to 15 executing the computer programs stored in memory.
  • step S201 the user uses the user interface 11 to access the WEB page provided by the system control device 10.
  • step S202 the system control device 10 displays the position input screen as described with reference to FIG. 2 on the display screen of the WEB page.
  • step S203 as described with reference to FIG. 2, the user selects an autonomous mobile object (mobility) and inputs position information (hereinafter referred to as position information) indicating a departure point, a transit point, and an arrival point.
  • the location information may be a word that specifies a specific place, such as a building name, station name, or address (hereinafter referred to as a location word), or a method of designating a specific place as a point on the map displayed on the WEB page (hereinafter referred to as a point) may also be used.
  • step S204 the system control device 10 saves the type information of the selected autonomous mobile body 12 and the input information such as the input position information.
  • in the case of a location word, the system control device 10 stores the location word.
  • if the position information is a point, the system control device 10 searches for the latitude and longitude corresponding to the point based on the simple map information stored in the position/route information management unit 10-3, and stores that latitude and longitude.
  • in step S205, the system control device 10 designates the types of route that can be traveled (hereinafter referred to as route types) from the mobility type (moving body type) of the autonomous mobile body 12 designated by the user. Then, in step S206, it transmits them to the route determination device 13 together with the position information.
  • the type of mobility is, for example, a legally distinguished type of mobile object, such as a car, bicycle, or drone.
  • the route types are, for example, general roads, expressways, motorways, predetermined sidewalks, roadside strips of general roads, and bicycle lanes.
  • for example, if the mobility type is an automobile, general roads, expressways, and motorways are designated as the route types; if the mobility type is a bicycle, predetermined sidewalks, roadside strips of general roads, bicycle lanes, and the like are designated.
  • in step S207, the route determination device 13 inputs the received position information into its own map information as the departure point, transit points, and arrival point. If the position information is a location word, a preliminary search is performed on the map information using the location word, and the corresponding latitude/longitude information is input. If the position information is latitude/longitude information, it is input into the map information as it is. Furthermore, the route determination device 13 may search for routes in advance.
  • step S208 the route determination device 13 searches for a route from the departure point to the arrival point via the intermediate points.
  • the route search is performed according to the route type. If a route was already searched in advance in step S207, the pre-searched route is modified as appropriate based on the route type.
  • in step S209, the route determination device 13 outputs, as the search result, the route from the departure point through the waypoints to the arrival point (hereinafter referred to as route information) in GPX format (GPS eXchange Format) and transmits it to the system control device 10.
  • GPX files are mainly composed of three types of data: waypoints (point information without order), routes (ordered point information with time information added), and tracks (collections of multiple points, i.e., trajectories).
  • latitude/longitude are described as attribute values of each piece of point information, and altitude, geoid height, GPS reception status/accuracy, etc. are described as child elements.
  • the minimum element required for a GPX file is latitude/longitude information for a single point, and any other information is optional.
  • in the first embodiment, a route is output as the route information: a set of latitude/longitude point information having an order relationship. Note that the route information may be in another format as long as it satisfies the above requirements.
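  • For concreteness, a minimal GPX route of the kind described above can be generated with the standard library as follows; the coordinates are placeholders:

```python
import xml.etree.ElementTree as ET

# Minimal GPX 1.1 route: ordered <rtept> elements carry lat/lon as attribute
# values, with altitude as an optional <ele> child element.
gpx = ET.Element("gpx", version="1.1", creator="example")
rte = ET.SubElement(gpx, "rte")
for lat, lon, ele in [(35.6586, 139.7454, 5.0), (35.6590, 139.7460, 5.2)]:
    rtept = ET.SubElement(rte, "rtept", lat=str(lat), lon=str(lon))
    ET.SubElement(rtept, "ele").text = str(ele)

print(ET.tostring(gpx, encoding="unicode"))
```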
  • FIG. 11(A) is a diagram showing latitude/longitude information of the earth
  • FIG. 11(B) is a perspective view showing the predetermined space 100 in FIG. 11(A).
  • the center of the predetermined space 100 is defined as the center 101.
  • FIG. 12 is a diagram schematically showing spatial information in the space 100.
  • the format divides the three-dimensional space of the earth into spaces of a predetermined unit volume, each determined by a range starting from a latitude/longitude/height.
  • a unique identifier is added to the space to make it manageable.
  • the space 100 is displayed as a predetermined three-dimensional space.
  • the space 100 is a partitioned space whose center 101 is at 20 degrees north latitude, 140 degrees east longitude, and height (altitude) H, with width D in the latitudinal direction, width W in the longitudinal direction, and width T in the height direction. It is one of the spaces obtained by dividing the space of the earth into ranges starting from latitude/longitude/height.
  • each of the arranged divided spaces has its horizontal position defined by latitude/longitude, overlaps in the height direction, and the position in the height direction is defined by height.
  • although the center 101 of the divided space is set as the starting point of latitude/longitude/height in FIG. 11B, the starting point is not limited to this; a corner of the space may be used, for example. The shape may also be a substantially rectangular parallelepiped; considering tiling on a spherical surface such as the earth, the top surface of the parallelepiped should be made slightly wider than the bottom surface so that the spaces can be arranged without gaps.
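  • One simple way such a format could be realized is to quantize latitude/longitude/height by the widths D, W, and T and name the resulting divided space; a sketch under that assumption (the widths and identifier layout are placeholders, not values defined by the embodiment):

```python
def unique_identifier(lat_deg: float, lon_deg: float, height_m: float,
                      d_deg: float = 0.0001, w_deg: float = 0.0001, t_m: float = 10.0) -> str:
    """Name the divided space containing a point by its quantized indices."""
    i = int(lat_deg // d_deg)    # index along latitude (width D)
    j = int(lon_deg // w_deg)    # index along longitude (width W)
    k = int(height_m // t_m)     # index along height (width T)
    return f"{i}:{j}:{k}"

# Nearby points inside the same divided space share one identifier.
print(unique_identifier(20.00001, 140.00002, 3.0))
print(unique_identifier(20.00003, 140.00006, 7.0))
```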
  • information on the types of objects that exist or can enter the range of the space 100 and time limits are associated with unique identifiers.
  • the formatted spatial information is stored in chronological order from the past to the future. Note that, in the present embodiment, associating and linking are used with the same meaning.
  • the conversion information holding device 14 associates with the unique identifier the spatial information regarding the types of objects that exist in or can enter the three-dimensional space defined by latitude/longitude/height and the time limit, formats it, and stores it in the format database 14-4.
  • the spatial information is updated at predetermined update intervals based on information supplied by information supply means such as external systems (for example, the sensor node 15) communicatively connected to the conversion information holding device 14, and is shared with other external systems communicably connected to the conversion information holding device 14.
  • non-unique identifiers may be used instead of unique identifiers.
  • information on the operators or individuals who own external systems, information on how to access the detection information acquired by those systems, and specification information on the detection information, such as metadata and the communication format, can also be managed as spatial information associated with unique identifiers.
  • as described above, information about the types of objects that can exist in or enter a three-dimensional space defined by latitude/longitude/height and the time limit (hereinafter referred to as spatial information) is associated with a unique identifier, formatted, and stored in the database. Space-time can thus be managed through the formatted spatial information.
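  • A hypothetical shape for one entry of such a database is sketched below; the field names and record layout are illustrative assumptions, not structures defined by the embodiment:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialRecord:
    object_types: list                                # e.g. ["roadway", "moving_body"]
    valid_from: str                                   # time limit: start (ISO 8601)
    valid_until: str                                  # time limit: end
    access_info: dict = field(default_factory=dict)   # operator / metadata / format info

# Records accumulate per unique identifier in chronological order.
format_database: dict = {}

def register(identifier: str, record: SpatialRecord) -> None:
    format_database.setdefault(identifier, []).append(record)

register("200000:1400000:0",
         SpatialRecord(["sidewalk"], "2023-02-01T09:00:00", "2023-02-01T09:05:00"))
```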
  • latitude/longitude/height will be used as a coordinate system that defines the position of the space (voxel).
  • the coordinate system is not limited to this, and various coordinate systems can be used, such as an XYZ coordinate system having arbitrary coordinate axes, or MGRS (Military Grid Reference System) as horizontal coordinates.
  • it is also possible to use a pixel coordinate system that uses the pixel positions of an image as coordinates, or a tile coordinate system that divides a predetermined area into units called tiles and expresses positions by arranging the tiles in the X/Y directions.
  • the conversion information holding device 14 of the first embodiment executes a formatting step of formatting and saving information about update intervals of spatial information in association with unique identifiers.
  • the update interval information formatted in association with the unique identifier may instead be an update frequency; in this embodiment, the update interval information is taken to include the update frequency.
  • in step S210, the system control device 10 checks the interval between the pieces of point information in the received route information. Then, it creates the position point group data by matching the interval of the point information to the interval between the starting points of the divided spaces defined by the format.
  • specifically, if the interval of the point information is smaller than the interval between the starting points of the divided spaces, the system control device 10 thins out the point information accordingly and uses the result as the position point cloud data. If the interval of the point information is larger than the interval between the starting points of the divided spaces, the system control device 10 interpolates point information within a range that does not deviate from the route information to obtain the position point group data.
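  • The thinning and interpolation rule can be sketched as follows; this simplified version treats the route as straight segments between points and is only one way the step might be realized:

```python
def resample(points, pitch_m, spacing_m):
    """Match point spacing to the divided-space pitch.

    `points` are (x, y) positions along the route, `spacing_m` their current
    interval, `pitch_m` the interval between divided-space starting points.
    """
    if spacing_m > pitch_m:                      # too sparse: interpolate extra points
        out = []
        n = int(spacing_m // pitch_m)            # extra points per segment
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            out.append((x0, y0))
            for s in range(1, n + 1):
                f = s / (n + 1)
                out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
        out.append(points[-1])
        return out
    step = max(1, round(pitch_m / spacing_m))    # too dense: thin out
    thinned = points[::step]
    return thinned if thinned[-1] == points[-1] else thinned + [points[-1]]

print(resample([(0, 0), (30, 0)], pitch_m=10.0, spacing_m=30.0))
```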
  • step S211 in Fig. 9 the system control device 10 transmits the latitude/longitude information of each point information of the position point cloud data to the conversion information holding device 14 in the order of the route.
  • step S212 the conversion information holding device 14 searches the format database 14-4 for a unique identifier corresponding to the received latitude/longitude information, and transmits it to the system control device 10 in step S213.
  • step S214 the system control device 10 arranges the received unique identifiers in the same order as the original position point cloud data, and stores them as route information using the unique identifiers (hereinafter referred to as format route information).
  • in this way, the system control device 10, as route generation means, acquires the spatial information from the database of the conversion information holding device 14, and generates route information about the travel route based on the acquired spatial information and the type information of the mobile object.
  • FIG. 13(A) is an image diagram of route information displayed as map information
  • FIG. 13(B) is an image diagram of route information using position point cloud data displayed as map information
  • FIG. 13(C) is an image diagram showing, as map information, route information using unique identifiers.
  • in FIG. 13, reference numeral 120 denotes route information, 121 denotes a non-movable area through which the autonomous mobile body 12 cannot pass, and 122 denotes a movable area where the autonomous mobile body 12 can move.
  • the route information 120, generated by the route determination device 13 based on the position information of the departure point, waypoints, and arrival point designated by the user, is generated as a route that passes through the departure point, waypoints, and arrival point and runs over the movable area 122 on the map information.
  • reference numeral 123 denotes a plurality of pieces of position information on the route information. After acquiring the route information 120, the system control device 10 generates the position information 123 arranged at predetermined intervals on the route information 120.
  • the position information 123 can be represented by latitude/longitude/height, respectively, and this position information 123 is called position point cloud data in the first embodiment. Then, the system control device 10 transmits the position information 123 (latitude/longitude/height of each point) one by one to the conversion information holding device 14 and converts them into unique identifiers.
  • 124 is positional space information in which the positional information 123 is converted into a unique identifier one by one, and the spatial range defined by the unique identifier is represented by a rectangular frame.
  • the location space information 124 is obtained by converting the location information into a unique identifier.
  • in other words, the route represented by the route information 120 is converted into, and represented by, the continuous position space information 124.
  • Each piece of position space information 124 is associated with information about the types of objects that can exist or enter the space and the time limit. This continuous position space information 124 is called format route information in the first embodiment.
  • step S215 the system control device 10 downloads the spatial information associated with each unique identifier of the format path information from the conversion information holding device 14.
  • in step S216, the system control device 10 converts the spatial information into a format that can be reflected in the three-dimensional map of the cyberspace of the autonomous mobile body 12, and creates a cost map, that is, information indicating the positions of multiple objects (obstacles) in the predetermined space.
  • the cost map may initially be created for the space of the entire route in the format route information, or it may be created in portions divided by fixed areas and updated sequentially.
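  • In its simplest form, such a cost map could be an occupancy grid over the route area, as in the sketch below; the grid resolution and obstacle positions are assumptions:

```python
def build_cost_map(obstacles, width_m, height_m, cell_m=0.5):
    """Mark grid cells containing obstacle positions (x, y in metres) as blocked."""
    cols, rows = int(width_m / cell_m), int(height_m / cell_m)
    grid = [[0] * cols for _ in range(rows)]
    for x, y in obstacles:
        c, r = int(x / cell_m), int(y / cell_m)
        if 0 <= r < rows and 0 <= c < cols:
            grid[r][c] = 1                       # 1 = obstacle, 0 = free
    return grid

cost_map = build_cost_map([(1.2, 0.7), (3.9, 2.1)], width_m=5.0, height_m=3.0)
for row in cost_map:
    print(row)
```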
  • step S217 the system control device 10 associates the format route information and the cost map with the unique identification number (unique identifier) assigned to the autonomous mobile body 12 and stores them.
  • the autonomous mobile body 12 monitors (hereinafter, polls) its own unique identification number via the network at predetermined time intervals, and downloads the associated cost map in step S218.
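  • The polling step might look like the following sketch; the endpoint URL, response format, and five-second period are hypothetical, since the embodiment does not specify them:

```python
import time
import urllib.request

def poll_cost_map(mobile_id: str, base_url: str, period_s: float = 5.0):
    """Yield each cost map downloaded for this mobile body's identification number."""
    while True:
        with urllib.request.urlopen(f"{base_url}/costmap/{mobile_id}") as resp:
            data = resp.read()
        if data:
            yield data                # hand the downloaded cost map to the planner
        time.sleep(period_s)
```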
  • the autonomous mobile body 12 reflects the latitude/longitude information of each unique identifier of the format route information as route information on the three-dimensional map of cyberspace created by itself.
  • in step S220, the autonomous mobile body 12 reflects the cost map on the three-dimensional map of cyberspace as obstacle information on the route. If the cost map is created in portions divided at regular intervals, then after moving through the area for which the cost map was created, the autonomous mobile body downloads the cost map of the next area and updates the cost map.
  • step S221 the autonomous mobile body 12 moves along the route information while avoiding the objects (obstacles) entered in the cost map. That is, movement control is performed based on the cost map.
  • step S222 the autonomous mobile body 12 moves while performing object detection, and moves while updating the cost map using the object detection information if there is a difference from the cost map. Also, in step S223, the autonomous mobile body 12 transmits difference information from the cost map to the system control device 10 together with the corresponding unique identifier.
  • in step S224 of FIG. 10, the system control device 10, having acquired the unique identifier and the difference information from the cost map, transmits the spatial information to the conversion information holding device 14, which updates the spatial information of the corresponding unique identifier.
  • the content of the spatial information updated here does not directly reflect the difference information from the cost map; it is abstracted by the system control device 10 before being sent to the conversion information holding device 14. Details of the abstraction will be described later.
  • in step S226, the autonomous mobile body 12 moving based on the format route information transmits to the system control device 10, each time it passes through the divided space associated with each unique identifier, the unique identifier associated with the space it is currently passing through.
  • the system control device 10 grasps the current position of the autonomous mobile body 12 on the format route information.
  • the system control device 10 can thus grasp where the autonomous mobile body 12 is currently located in the format route information. Incidentally, the system control device 10 may discard the unique identifiers of the spaces through which the autonomous mobile body 12 has already passed, thereby reducing the data capacity held for the format route information.
  • in step S227, the system control device 10 creates the confirmation screen 50 and the map display screen 60 described with reference to FIG. 3 and provides them to the user interface 11.
  • the system control device 10 updates the confirmation screen 50 and the map display screen 60 each time a unique identifier indicating the current position is transmitted from the autonomous mobile body 12 to the system control device 10.
  • the sensor node 15 saves the detection information of its detection range, abstracts the detection information in step S229, and transmits it as spatial information to the conversion information holding device 14 in step S230. The abstracted information indicates, for example, whether an object exists or whether the state of existence of the object has changed, and is not detailed information about the object.
  • in step S231, the conversion information holding device 14 stores the spatial information, which is the abstracted detection information, in association with the unique identifier of the position corresponding to that spatial information. As a result, the spatial information is stored under one unique identifier in the format database.
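  • The abstraction can be illustrated with a minimal sketch that reduces raw detections to presence and change-of-state flags; the input format is an assumption:

```python
def abstract_detection(detections: list, previous_count: int) -> dict:
    """Keep only presence and change-of-state; detailed object attributes
    stay on the sensor node side, as described above."""
    count = len(detections)
    return {
        "object_present": count > 0,
        "state_changed": count != previous_count,
    }

print(abstract_detection([{"class": "car", "bbox": (10, 20, 50, 80)}], previous_count=0))
```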
  • with this mechanism, an external system can use the spatial information in the conversion information holding device 14 to obtain and utilize, via the conversion information holding device 14, the detection information held in the sensor node 15.
  • the conversion information holding device 14 also has a function of bridging the communication standards of the external system and the sensor node 15.
  • the conversion information holding device 14 has a function of connecting data of multiple devices with a relatively small amount of data.
  • in steps S215 and S216 of FIG. 9, when the system control device 10 needs detailed object information to create the cost map, it can download and use detailed information from an external system that stores the detailed detection information corresponding to the spatial information.
  • the sensor node 15 updates the spatial information on the route of the format route information of the autonomous mobile body 12.
  • the sensor node 15 acquires detection information in step S232 of FIG. 10, generates abstracted spatial information in step S233, and transmits it to the conversion information holding device 14 in step S234.
  • the conversion information holding device 14 stores the spatial information in the format database 14-4 in step S235.
  • the system control device 10 checks changes in the spatial information in the managed format path information at predetermined time intervals, and downloads the spatial information in step S236 if there is a change. Then, in step S237, the cost map associated with the unique identification number assigned to the autonomous mobile body 12 is updated.
• In step S238, the autonomous mobile body 12 recognizes the update of the cost map by polling and reflects it in the three-dimensional map of cyberspace that it has created itself.
• In this way, the autonomous mobile body 12 can recognize in advance a change on the route that it cannot detect by itself, and can respond to the change. A sketch of this polling flow is shown below.
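A minimal sketch, assuming a simple in-memory dictionary in place of the format database, of the polling flow of steps S232 to S238: spatial information is checked at a fixed interval and, on a change, the cost map is updated. The interval, cycle count, and structures are invented for illustration.

```python
import time

def poll_and_update(format_db, route_ids, cost_map, interval_sec=1.0, cycles=3):
    """Poll the format database and refresh the cost map on changes."""
    last_seen = {uid: None for uid in route_ids}
    for _ in range(cycles):                      # a real loop would run forever
        for uid in route_ids:
            info = format_db.get(uid)            # download spatial information
            if info is not None and info != last_seen[uid]:
                cost_map[uid] = info             # step S237: update cost map
                last_seen[uid] = info
        time.sleep(interval_sec)
    return cost_map

format_db = {"id-001": {"object_exists": True, "state_changed": True}}
cost_map = poll_and_update(format_db, ["id-001"], {}, interval_sec=0.01, cycles=1)
print(cost_map)  # the mobile body polls this cost map and updates its 3D map
```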
• When the autonomous mobile body 12 reaches the destination, the corresponding unique identifier is transmitted in step S240.
• Upon recognizing the unique identifier, the system control device 10 displays an arrival indication on the user interface 11 in step S241 and terminates the application.
• As described above, according to Embodiment 1, it is possible to provide a digital architecture format and an autonomous mobile body control system that uses the format.
• The format database 14-4 stores information (spatial information) about the types of objects that can exist in or enter the space 100, together with their time limits, in chronological order from the past to the future.
• The spatial information is updated based on information input from external sensors and the like communicably connected to the conversion information holding device 14, and is shared with other external systems that can connect to the conversion information holding device 14.
• The type information of objects in a space is information that can be obtained from map information, such as roadways, sidewalks, and bicycle lanes on roads.
• Information such as the traveling direction of mobility on a roadway and traffic regulations can also be defined as type information.
• Besides object type information, it is also possible to define type information for the space itself.
• For example, the conversion information holding device 14 can be connected to a system control device that manages information on roads and to a system control device that manages information on sections other than roads.
• The system control device 10 can transmit position point cloud data collectively representing the position information 123 to the conversion information holding device 14. Similarly, the system control device that manages information on roads and the system control device that manages information on sections other than roads can also transmit corresponding data to the conversion information holding device 14.
• Here, the corresponding data is the position point cloud data managed by the system control device that manages road information and by the system control device that manages information on sections other than roads.
  • Each point of the position point cloud data is hereinafter referred to as a position point.
• The spatial information update interval differs according to the type of object existing in the space. That is, when the type of object existing in the space is a moving object, the update interval is set shorter than when it is not a moving object. Also, when the type of object existing in the space is a road, the update interval is set shorter than in the case of a partition (a section other than a road).
• In this way, the update interval of the spatial information about each object should differ according to the type of each object (e.g. moving body, road, section). Spatial information about the state and time of each of the plurality of objects existing in a space is associated with the unique identifier, formatted, and stored, so the load for updating spatial information can be reduced. A sketch of such type-dependent intervals follows.
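The sketch below shows one way the type-dependent update intervals just described could be expressed; the concrete durations are invented for illustration, only their ordering (moving body < road < partition) follows the text.

```python
# Hypothetical spatial-information update intervals by object type.
UPDATE_INTERVAL_SEC = {
    "moving_body": 1,      # shortest: positions change constantly
    "road": 60,            # road state (e.g. regulations) changes occasionally
    "partition": 3600,     # sections other than roads change rarely
}

def update_interval(object_type):
    # Fall back to the longest interval for unknown, presumably static, types.
    return UPDATE_INTERVAL_SEC.get(object_type, 3600)

for t in ("moving_body", "road", "partition"):
    print(t, update_interval(t))
```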
• In step S226 of FIG. 10, the operation in which the system control device 10 grasps the current position of the autonomous mobile body 12 on the format route information based on the unique identifier information received from the autonomous mobile body 12 has been described.
• Here, whether or not the autonomous mobile body 12 has passed through the divided space associated with each unique identifier is determined by a self-position detection function, such as GPS, in the detection unit 12-1.
• However, the accuracy may decrease when traveling in places where GPS radio waves are difficult to receive, such as underground passages or tunnels.
• Furthermore, an autonomous mobile body that does not have a self-position detection function such as GPS cannot tell the system control device where it currently is on the format route information, and therefore could not use the autonomous mobile body control system of the first embodiment.
• Therefore, in the second embodiment, the conversion information holding device 14 as formatting means registers (saves or records) information characterizing a space (hereinafter referred to as keyframe information) in association with the unique identifier of the position to which the information corresponds.
• Keyframe information, which will be described in detail later, is, for example, information indicating an object such that no identical object exists around the associated space.
• With this keyframe information, the autonomous mobile body 12 can estimate its own position. As described with reference to FIG. 4 of the first embodiment, the autonomous mobile body 12 can detect an object from a captured image using machine learning or the like. Therefore, by detecting an object included in the keyframe information with this object detection function, it can estimate (specify) that the detection point is the position corresponding to the unique identifier associated with that keyframe information.
• Hereinafter, the operation of detecting an object included in keyframe information by using a captured image or the like and specifying the position corresponding to the unique identifier associated with that keyframe information is referred to as "collating keyframe information".
• By collating keyframe information, the autonomous mobile body 12 can identify its own position even at a point where GPS radio waves are difficult to receive, or even if it does not have a self-position detection function.
• The self-position specified here by the autonomous mobile body 12 is the position corresponding to the unique identifier to which the keyframe information is linked. The autonomous mobile body 12 can therefore transmit this position information to the system control device that controls it. A minimal sketch of such collation follows.
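A minimal sketch, with the object detection stubbed out, of how "collating keyframe information" could yield a position estimate; the database layout and labels are hypothetical, not from this disclosure.

```python
def detect_objects(camera_image):
    # Stand-in for the machine-learning object detection of the mobile body.
    return {"guide_sign_XX_intersection"}

def collate_keyframes(camera_image, keyframe_db):
    """keyframe_db maps unique identifier -> object label of the keyframe."""
    detected = detect_objects(camera_image)
    for unique_id, keyframe_object in keyframe_db.items():
        if keyframe_object in detected:
            return unique_id     # estimated self-position: this divided space
    return None                  # no keyframe matched; position unknown here

keyframe_db = {"id-902-1": "guide_sign_XX_intersection"}
print(collate_keyframes(camera_image=None, keyframe_db=keyframe_db))  # id-902-1
```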
• Keyframe information is one type of spatial information.
• As described above, the format database 14-4 stores information (spatial information) regarding the state and time of objects existing within the space 100 in chronological order from the past to the future.
• The spatial information is updated by information input from external systems and the like communicably connected to the conversion information holding device 14, and is shared with other external systems communicably connected to the conversion information holding device 14.
• In the second embodiment, the conversion information holding device 14 additionally records keyframe information of objects in a space.
• FIG. 14 is a functional block diagram related to the keyframe information storage processing according to the second embodiment, showing the internal configurations of the map information extraction system 901, the sensor node 15, the autonomous moving body 612, and the conversion information holding device 14 as functional blocks.
• The autonomous moving body 612 includes a detection unit 612-1, a control unit 612-2, a direction control unit 612-3, an information storage unit (memory/HDD) 612-4, a network connection unit 612-5, and a drive unit 612-6.
• Units 612-1 to 612-6 have the same configurations as units 12-1 to 12-6 of the autonomous mobile body 12 described with reference to FIG. 4, so detailed descriptions are omitted.
• That is, the autonomous mobile body 612 has an object detection function and a distance measurement function, like the autonomous mobile body 12 described with reference to FIG. 4.
• The map information extraction system 901 includes a map information management unit 901-1, a control unit 901-2, an information extraction unit 901-3, an information storage unit (memory/HDD) 901-4, and a network connection unit 901-5.
• The map information management unit 901-1 holds map information of the three-dimensional space on the earth.
• The information extraction unit 901-3 has a function of extracting information from the map information held by the map information management unit 901-1 based on set conditions such as location information, place names, and facility names.
• The control unit 901-2 controls map information extraction in the map information extraction system 901 and sets the conditions for extraction by the information extraction unit 901-3. It also controls, for example, the function of storing information extracted under conditions satisfying the requirements as keyframe information in the information storage unit (memory/HDD) 901-4 and transmitting it to the conversion information holding device 14 through the network connection unit 901-5.
• The conversion information holding device 14 and the sensor node 15 in FIG. 14 have the same configurations as those described with reference to FIG. 4, so their descriptions are omitted.
• In this way, the map information extraction system 901 extracts, from the map information, object information that satisfies the requirements for keyframe information, and provides (outputs) that information to the conversion information holding device 14.
  • FIG. 15 is a sequence diagram for explaining keyframe information storage processing according to the second embodiment.
• FIG. 15 shows the processing executed, in relation to keyframe information storage, by the map information extraction system 901, the sensor node 15, the autonomous moving body 612, and the conversion information holding device 14 shown in FIG. 14.
• Each step in the sequence of FIG. 15 is performed by the computer in the control unit of each of the devices 14, 15, 612, and 901 executing the computer program stored in memory.
• First, a case where the map information extraction system 901, as an example of an external system communicably connected to the conversion information holding device 14, extracts keyframe information from the map information and stores it in the format database 14-4 will be explained with reference to FIG. 15.
• In step S401, the map information extraction system 901 extracts information on objects that satisfy the requirements for keyframe information from a predetermined range of the map information it holds.
• The object information that satisfies the requirements for keyframe information extracted here is information indicating an object that has no identical object in its surroundings.
• In step S402, the map information extraction system 901 stores the extracted keyframe information in the information storage unit (memory/HDD) 901-4 in association with the position information of the object.
• In step S403, the map information extraction system 901 transmits the linked keyframe information and position information to the conversion information holding device 14.
• In step S404, the conversion information holding device 14 determines the unique identifier corresponding to the transmitted position information and stores the keyframe information under that unique identifier in the format database 14-4.
• Step S404 thus functions as a formatting step for formatting and registering the keyframe information as spatial information in association with the unique identifier. A sketch of this registration step follows.
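The sketch below illustrates the formatting step of S403/S404 under an assumed grid-based identifier scheme; the quantization, function names, and database layout are invented for illustration and are not specified by this disclosure.

```python
def position_to_unique_id(lat, lon, cell_deg=0.001):
    # Hypothetical scheme: quantize latitude/longitude onto a fixed grid.
    return f"uid_{round(lat / cell_deg)}_{round(lon / cell_deg)}"

format_database = {}  # unique identifier -> list of keyframe information

def register_keyframe(lat, lon, keyframe_info):
    """Determine the unique identifier for a position and store the keyframe."""
    uid = position_to_unique_id(lat, lon)
    # Several keyframes may be linked to one divided space, so append.
    format_database.setdefault(uid, []).append(keyframe_info)
    return uid

uid = register_keyframe(35.6581, 139.7017, {"object": "XX intersection sign"})
print(uid, format_database[uid])
```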
• Next, the sensor node 15 is considered. The sensor node 15 is an external system such as a video surveillance system, for example a roadside camera unit, and the information storage unit (memory/HDD) 15-3 stores the position information of the sensor node itself.
• The sensor node 15 uses the object detection function and the distance measurement function of the detection unit 15-1 to detect information such as the image information, feature point information, and position information of objects existing in the area it can detect. Then, in step S412, the sensor node 15 stores the detected information in the information storage unit (memory/HDD) 15-3.
• In step S413, the control unit 15-2 of the sensor node 15 extracts, from the object image information and feature point information, the information of objects that satisfy the requirements for keyframe information as keyframe information. Then, in step S414, the control unit 15-2 of the sensor node 15 associates the keyframe information with the position information of the objects that satisfy the keyframe information requirements and records them in the information storage unit (memory/HDD) 15-3.
• An object that satisfies the keyframe information requirements extracted here is, for example, an object that has no identical object in its vicinity, as in the case of extraction from map information. Other objects that meet the requirements for keyframe information are described below.
• In step S415, the sensor node 15 transmits the mutually associated (linked) keyframe information and position information to the conversion information holding device 14.
• The conversion information holding device 14 determines the unique identifier corresponding to the position information transmitted in step S415 and records the keyframe information under that unique identifier in the format database 14-4.
• In this way, the keyframe information corresponding to a unique identifier can be stored in the format database 14-4 from the information detected by the sensor node 15.
• Next, the autonomous mobile body 612 is considered. In step S421, the autonomous mobile body 612 detects information such as the image information, feature point information, and position information of objects existing in the area it can detect, using its object detection function and distance measurement function.
• In step S422, the autonomous mobile body 612 stores the information detected in step S421 in the information storage unit (memory/HDD) 612-4. Furthermore, in step S423, the control unit 612-2 of the autonomous mobile body 612 extracts, from the image information and feature point information of these objects, the information of objects that satisfy the keyframe information requirements as keyframe information.
• In step S424, the control unit 612-2 associates (links) the keyframe information with the position information of the objects that satisfy the keyframe information requirements and stores them.
• The objects that satisfy the keyframe information requirements extracted here are objects that have no identical object in their surroundings, just as when extracting information from map information.
• Another example of an object that satisfies the keyframe information requirements is "image data of the scenery at that point" photographed by the autonomous mobile body 612.
• In step S425, the autonomous mobile body 612 transmits the mutually associated (linked) keyframe information and position information to the conversion information holding device 14.
• In step S426, the conversion information holding device 14 determines the unique identifier corresponding to the transmitted position information and stores the keyframe information under that unique identifier in the format database 14-4. In this way, the keyframe information corresponding to a unique identifier can be stored in the format database 14-4 from the information detected by the autonomous mobile body 612.
• As described above, keyframe information is stored in association with (linked to) a predetermined unique identifier in the format database 14-4.
• It is desirable that this keyframe information be updated at appropriate timing.
• For example, it is desirable that keyframe information stored from map information be updated when the map information is updated.
• The keyframe information stored by the sensor node 15 in step S414 should be updated when it is determined that the information within the range detectable by the sensor node 15 has changed and that the change affects the keyframe information. That is, it is desirable that the keyframe information be updated, for example, when an object stored as keyframe information moves.
• It is desirable that the keyframe information stored by the autonomous mobile body 612 be updated when, for example, it is replaced by other new keyframe information associated with the same point.
• A plurality of pieces of keyframe information may be linked to one divided space. In this case, information indicating another object that satisfies the keyframe information requirements is added to the keyframe information.
• In the example of FIG. 16, it is assumed that, on a route where the reception strength of GPS radio waves is low, the accuracy of the GPS-based self-position detection function that the autonomous mobile body 612 originally has decreases, causing an error of, for example, about ±5 m. It is also assumed that the direction detection function using the geomagnetic sensor is unaffected on the route where the GPS reception strength is low, and that the geomagnetic heading of the autonomous moving body 612 is detected correctly.
• FIG. 16 is an image diagram of the format route information 900 of the autonomous mobile body 612 according to the second embodiment. The format route information 900 in FIG. 16 is set so that the route passes through divided spaces 902-1 and 902-2 in which keyframe information is stored.
• It is assumed that the keyframe information stored in the divided space 902-1 is information indicating the guide sign "XX intersection", and that the keyframe information stored in the divided space 902-2 is information indicating the signboard of "YY post office". It is also assumed that the autonomous mobile body 612 and the system control device 10 that controls it know the position of the autonomous mobile body 612 at the departure point.
• FIG. 17 is a sequence diagram explaining the operation of the autonomous mobile body 612 that moves using the format route information 900 of FIG. 16 and of the system control device 10 that controls the autonomous mobile body 612. Each step of the sequence of FIG. 17 is performed by the computer in each control unit of the system control device 10 and the autonomous mobile body 612 executing the computer program stored in memory.
• In step S431, the system control device 10 instructs the autonomous mobile body 612 at the starting point shown in FIG. 16 to start moving rightward on the page of FIG. 16.
• In step S432, the autonomous mobile body 612 that has received the instruction starts moving rightward on the page of FIG. 16 based on the information obtained from the direction detection function using the geomagnetic sensor.
• In step S433, when the autonomous mobile body 612 reaches a position within about ±5 m of the divided space 902-1, the imaging means of the autonomous mobile body 612 detects the guide sign "XX intersection" and, based on the detected guide sign, collates the keyframe information stored in the divided space 902-1.
• In step S434, the autonomous mobile body 612 calculates the distance to the detected "XX intersection" guide sign using the distance measurement function. Subsequently, in step S435, the autonomous mobile body 612 calculates its own position information relative to the divided space 902-1 based on the distance calculation result. Then, in step S436, based on this position information, it reflects its position on the format route information 900 in the three-dimensional map of cyberspace it has created itself.
• The autonomous mobile body 612 also transmits this position information to the system control device 10 in step S437.
• In step S438, the system control device 10 grasps the current position of the autonomous mobile body 612 on the format route information 900 based on the position information received from the autonomous mobile body 612.
• The processing of steps S433 to S438 is performed continuously until the autonomous mobile body 612 reaches the divided space 902-1.
• In step S439, the autonomous mobile body 612 turns left (toward the top of the page in FIG. 16) based on the three-dimensional map of cyberspace created in step S436 and the direction detection function.
• In step S440, when the autonomous mobile body 612 reaches a position within about ±5 m of the divided space 902-2, the imaging means of the autonomous mobile body 612 detects the "YY post office" signboard. Then, the autonomous moving body 612 collates the keyframe information stored in the divided space 902-2 based on the detected signboard.
• In step S441, the autonomous moving body 612 calculates the distance to the detected "YY post office" signboard using the distance measurement function. Subsequently, in step S442, the autonomous mobile body 612 calculates its own position information relative to the divided space 902-2 based on the distance calculation result. Then, in step S443, it reflects this in the three-dimensional map of cyberspace it has created itself, and also transmits it to the system control device 10 in step S444.
• In step S445, based on the position information of the autonomous mobile body 612 relative to the divided space 902-2 received from the autonomous mobile body 612, the system control device 10 grasps the current position of the autonomous mobile body 612 on the format route information 900.
• The processing of steps S440 to S445 is likewise performed continuously until the autonomous mobile body 612 arrives at the point of the keyframe information.
• When the autonomous mobile body 612 arrives at the point of the keyframe information, the autonomous mobile body 612 and the system control device 10 can determine that the autonomous mobile body 612 has reached the divided space 902-2 on the format route information 900, that is, has reached the destination. The system control device 10 then issues an instruction to the autonomous mobile body 612 to end the movement, and in step S447 the autonomous mobile body 612 ends its movement. A sketch of one localization step in this loop follows.
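The following is an illustrative sketch of one localization step in the loop of FIG. 17: measure the distance to the detected keyframe landmark, derive a position relative to the divided space, and report it. The geometry is simplified to one dimension along the direction of travel, and all numbers are invented.

```python
def relative_position(space_position_m, measured_distance_m):
    """Own position = landmark's space position minus remaining distance."""
    return space_position_m - measured_distance_m

def localization_step(space_position_m, measured_distance_m, system_controller):
    own_position = relative_position(space_position_m, measured_distance_m)
    # Reflect in the body's own 3D cyberspace map (stubbed as a print) ...
    print(f"own position on route: {own_position} m")
    # ... and transmit the same position information to the controller.
    system_controller.append(own_position)
    return own_position

controller_log = []
# Divided space 902-1 lies 120 m along the route; the sign is 4.5 m ahead.
localization_step(120.0, 4.5, controller_log)
print(controller_log)  # [115.5]
```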
• As described above, one example of keyframe information is information indicating an object that has no identical object in its surroundings.
• An object around which no identical object exists is, for example, an object that includes a proper noun.
• Guide signs and signboards bearing proper nouns such as "XX intersection" and "YY post office" can be considered to exist at only that point in the area.
• Besides these, objects such as statues (including busts and sculptures), like the statue of Hachiko the faithful dog in front of Shibuya Station, and characteristic buildings such as Tokyo Tower, are also objects that satisfy the keyframe information requirements.
• Furthermore, even an object such as a "post" or a "traffic light", of which similarly shaped objects are scattered throughout three-dimensional space, can be an object that satisfies the keyframe information requirements.
• For example, suppose a "post" is installed at a point where no other post exists within a radius of 400 m. Then, even if the GPS-based self-position detection function of the autonomous mobile body has an error of about ±100 m, that post will not be confused with another post.
• Whether existence information of an object whose point cannot be completely specified is usable depends on the self-position detection function of the user and how it is used. Therefore, the autonomous mobile body needs to determine whether given keyframe information is information it can use itself. It is therefore desirable that "existence information of an object whose point cannot be completely specified" and "existence information of an object whose point can be completely specified" be classified, even if they are the same kind of keyframe information.
• For example, existence information of an object whose point can be completely specified is assigned class A keyframe information.
• Existence information of an object that has no identical object within a range of ±10 km is assigned class B keyframe information.
• Class information such as "existence information of an object that has no identical object within a range of ±200 m" is associated with class C keyframe information. This makes it possible to reduce the burden of determining whether the information is usable. A sketch of such a usability check follows.
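A minimal sketch of how a mobile body might use such class information to decide whether keyframe information is usable given its own position error. The class-to-radius mapping follows the examples in the text (class B: ±10 km, class C: ±200 m); the decision rule itself is an assumption.

```python
# Hypothetical ambiguity radii attached as class information to keyframes.
AMBIGUITY_RADIUS_M = {
    "A": 0,        # point can be completely specified
    "B": 10_000,   # no identical object within +/-10 km
    "C": 200,      # no identical object within +/-200 m
}

def keyframe_usable(keyframe_class, self_position_error_m):
    radius = AMBIGUITY_RADIUS_M[keyframe_class]
    # Class A is always usable; otherwise the body's own position error
    # must be smaller than the radius within which the object is unique.
    return radius == 0 or self_position_error_m < radius

print(keyframe_usable("C", self_position_error_m=100))  # True
print(keyframe_usable("C", self_position_error_m=500))  # False: may confuse posts
```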
• Another example of object information that satisfies the keyframe information requirements is "image data of the scenery at that point" captured by the autonomous mobile body 612 or the like. That is, the keyframe information includes image data captured at the point corresponding to the space.
• This image data is compared with the actual scenery in front of the imaging means of the autonomous mobile body or the like; if it can be analyzed whether the matching rate of the feature points is above a certain level, the keyframe information can be collated and the position specified.
• Here, it is desirable to store parameter data together with such keyframe information. For example, the parameter data includes the shooting conditions under which the image data was captured in the target space.
• The parameter data also includes the position information of the point from which the image was taken. This is because, if it is known from which point the image data was captured, the matching rate of the feature points can be improved and the difficulty of collation lowered by, for example, adding an action such as turning the autonomous mobile body in that direction or rotating the camera.
• In addition, information about the lighting conditions when the image data was shot can be parameter data. That is, the shooting conditions include information about the brightness at the time of shooting. This is because the lighting conditions at the time of shooting greatly affect the image quality and color of the entire image, and the edges and shadows of objects in the image.
• The state of lighting includes at least one of, for example, the date and time of shooting, the strength and direction of the light at the time of shooting, whether a light source such as a streetlight was nearby, and, if there was a streetlight, its direction, color (color temperature), and illuminance.
• By processing the image captured by the autonomous mobile body, for example by image correction, so that it more closely matches the conditions of the parameter data, the degree of matching can be increased. A sketch of such parameter-assisted collation follows.
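The sketch below illustrates parameter-assisted collation of an image keyframe: correct the live image toward the brightness recorded in the parameter data, then threshold the feature-point matching rate. Feature matching is stubbed out; the threshold and all field names are invented.

```python
MATCH_THRESHOLD = 0.7  # "matching rate of feature points above a certain level"

def correct_brightness(image_brightness, target_brightness):
    # Stand-in for image correction toward the parameter-data conditions.
    return target_brightness

def match_rate(live_features, keyframe_features):
    common = live_features & keyframe_features
    return len(common) / max(len(keyframe_features), 1)

def collate_image_keyframe(live, keyframe):
    live["brightness"] = correct_brightness(
        live["brightness"], keyframe["params"]["brightness"])
    rate = match_rate(live["features"], keyframe["features"])
    return rate >= MATCH_THRESHOLD

keyframe = {"features": {"f1", "f2", "f3", "f4"},
            "params": {"brightness": 0.6, "shot_from": (35.65, 139.70)}}
live = {"features": {"f1", "f2", "f3"}, "brightness": 0.3}
print(collate_image_keyframe(live, keyframe))  # True: 3/4 = 0.75 >= 0.7
```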
• The parameter data may also be stored as estimated values.
• An estimated value is, for example, an estimated value related to brightness.
• If it is difficult to collect accurate parameter data for each point, data with similar conditions, such as shooting time and topography, may be stored and used as simplified parameter data.
• Yet another example of object information that satisfies the keyframe information requirements is 3D data of the scenery at, or as seen from, that point. That is, the keyframe information includes 3D data at the point corresponding to the target space.
• If the autonomous mobile body has a function that can determine the shape of objects, such as LiDAR (Light Detection And Ranging), it becomes possible to collate the 3D data stored as keyframe information and thereby specify the location.
• The 3D data stored as keyframe information may be only the shape of a characteristic object that is easy to analyze, rather than the data of all points at the location.
• LiDAR has the characteristic that its ranging performance depends on the surface condition of the target. Therefore, by storing the surface information of the characteristically shaped object in the 3D data as parameter data associated with that 3D data, the degree of matching can be increased.
• In this way, the formatting means can improve the degree of matching by enabling parameter data for collating keyframe information to be registered as spatial information in association with the keyframe information.
• That is, the parameter data also includes the surface information of the object in the 3D data. If it is difficult to collect this parameter data in an accurate form, the parameter data may be estimated values inferred from conditions such as the shape of the object and the topography. A sketch of such a 3D keyframe record follows.
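A possible record layout, sketched under invented field names, for a 3D keyframe with surface information stored as parameter data and a flag marking whether each parameter is measured or merely estimated from the object's shape and the topography.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceParam:
    reflectivity: float          # affects LiDAR ranging performance
    estimated: bool = False      # True if inferred, not measured

@dataclass
class Keyframe3D:
    unique_id: str
    points: list                 # shape of a characteristic object only,
                                 # not all points at the location
    surface: SurfaceParam = field(
        default_factory=lambda: SurfaceParam(0.5, estimated=True))

kf = Keyframe3D(
    unique_id="uid_35658_139702",
    points=[(0.0, 0.0, 0.0), (0.0, 0.0, 2.5)],   # e.g. a pole
    surface=SurfaceParam(reflectivity=0.8, estimated=False),
)
# A matcher could lower its confidence when kf.surface.estimated is True,
# since the matching difficulty cannot be accurately predicted in advance.
print(kf)
```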
• The difficulty of collating keyframe information largely depends on the parameter data; however, if the parameter data consists of estimated values, it is difficult to accurately estimate the collation difficulty of the keyframe information in advance.
• The moving body of this embodiment is not limited to an autonomous mobile body such as an AGV (Automated Guided Vehicle) or an AMR (Autonomous Mobile Robot).
• It can be any moving apparatus, such as an automobile, train, ship, airplane, robot, or drone.
• Part of the control system of the present embodiment may or may not be mounted on those moving bodies.
• This embodiment can also be applied to remote control of a moving body.
• Furthermore, the present invention may be realized by supplying a storage medium recording the software program code (control program) that realizes the functions of the above-described embodiments to a system or apparatus, whose computer (or CPU or MPU) reads and executes the computer-readable program code stored in the storage medium. In that case, the program code itself read from the storage medium implements the functions of the above-described embodiments, and the storage medium storing the program code constitutes the present invention.

Abstract

Provided is a control system capable of stably detecting its own position, wherein: a unique identifier is imparted to a three-dimensional space defined by a prescribed coordinate system such as latitude/longitude/height, for example; included is a formatting means that associates, with the unique identifier, space information pertaining to the time and the state of an object present in the space, and that formats and saves the same; and the formatting means makes it possible to register key frame information as the space information.

Description

Control system, control method, and storage medium

The present invention relates to a control system, a control method, a storage medium, and the like relating to a three-dimensional space.
In recent years, along with technological innovations such as autonomous driving mobility and spatial recognition systems, the development of overall frameworks that connect data and systems between different organizations and members of society (hereinafter, digital architectures) has been progressing.

In Patent Document 1, a single processor divides a spatio-temporal region in time and space according to spatio-temporal management data provided by a user, generating a plurality of spatio-temporal divided regions. In addition, in consideration of the temporal and spatial proximity of the spatio-temporal divided regions, an identifier expressed as a one-dimensional integer value is assigned to uniquely identify each of the plurality of spatio-temporal divided regions.

Patent Document 1 thus discloses a spatio-temporal data management system that determines the arrangement of time-series data so that the data of spatio-temporal divided regions with similar identifiers are arranged close to each other on the storage device.

Patent Document 1: JP 2014-002519 A
However, in Patent Document 1, the data regarding a generated region can be grasped by its identifier only within the processor that generated it. Users of different systems therefore cannot utilize the information of the spatially divided regions. In addition, for a system user to use the information of the spatio-temporal divided regions, a means of accurately detecting the user's own position was required.

Therefore, the present invention provides a control system that can stably detect its own position.
A control system according to the present invention comprises formatting means for assigning a unique identifier to a three-dimensional space defined by a predetermined coordinate system and for formatting and storing spatial information relating to the state and time of an object existing in the space in association with the unique identifier, wherein the formatting means registers keyframe information as the spatial information.

According to the present invention, it is possible to provide a control system that can stably detect its own position.
Brief Description of the Drawings

FIG. 1 is a diagram showing an overall configuration example of the autonomous mobile body control system according to Embodiment 1.
FIG. 2(A) is a diagram showing an example of an input screen when the user inputs position information, and FIG. 2(B) is a diagram showing an example of a selection screen for selecting the autonomous mobile body to be used.
FIG. 3(A) is a diagram showing an example of a screen for confirming the current position of the autonomous mobile body, and FIG. 3(B) is a diagram showing an example of a map display screen when confirming the current position of the autonomous mobile body.
FIG. 4 is a functional block diagram showing an internal configuration example of 10 to 15 in FIG. 1.
FIG. 5(A) is a diagram showing the spatial positional relationship between the autonomous mobile body 12 in the real world and a pillar 99 existing as surrounding feature information, and FIG. 5(B) is a diagram showing the autonomous mobile body 12 and the pillar 99 mapped into an arbitrary XYZ coordinate system space with P0 as the origin.
FIG. 6 is a perspective view showing a mechanical configuration example of the autonomous mobile body 12 according to Embodiment 1.
FIG. 7 is a block diagram showing specific hardware configuration examples of the control units 10-2, 11-2, 12-2, 13-2, 14-3, and 15-2.
FIG. 8 is a sequence diagram explaining the processing executed by the autonomous mobile body control system according to Embodiment 1.
FIG. 9 is a sequence diagram continued from FIG. 8.
FIG. 10 is a sequence diagram continued from FIG. 9.
FIG. 11(A) is a diagram showing latitude/longitude information of the earth, and FIG. 11(B) is a perspective view showing the predetermined space 100 of FIG. 11(A).
FIG. 12 is a diagram schematically showing the spatial information in the space 100.
FIG. 13(A) is a diagram displaying route information as map information, FIG. 13(B) is a diagram displaying route information using position point cloud data as map information, and FIG. 13(C) is a diagram displaying route information using unique identifiers as map information.
FIG. 14 is a functional block diagram related to the keyframe information storage processing according to Embodiment 2.
FIG. 15 is a sequence diagram explaining the keyframe information storage processing according to Embodiment 2.
FIG. 16 is an image diagram of the format route information 900 of the autonomous mobile body 12 according to Embodiment 2.
FIG. 17 is a sequence diagram explaining the operation of the autonomous mobile body 12 that moves using the format route information 900 of FIG. 16 and of the system control device 10 that controls the autonomous mobile body 12.
Embodiments of the present invention are described below with reference to the drawings. However, the present invention is not limited to the following embodiments. In each figure, the same members or elements are denoted by the same reference numerals, and duplicate descriptions are omitted or simplified.

Although the embodiments describe an example applied to the control of an autonomous mobile body, the moving body may be one in which the user can operate at least part of its movement. That is, for example, various displays related to the movement route and the like may be presented to the user, and the user may perform part of the driving operation of the moving body with reference to those displays.
<Embodiment 1>

FIG. 1 is a diagram showing an overall configuration example of an autonomous mobile body control system according to Embodiment 1 of the present invention. As shown in FIG. 1, the autonomous mobile body control system (sometimes abbreviated as the control system) of the present embodiment includes a system control device 10, a user interface 11, an autonomous mobile body 12, a route determination device 13, a conversion information holding device 14, a sensor node 15, and the like. Here, the user interface 11 means a user terminal device.

In this embodiment, each device shown in FIG. 1 is connected via the Internet 16 by the respective network connection units described later. However, other network systems, such as a LAN (Local Area Network), may also be used.

Some of the system control device 10, the user interface 11, the route determination device 13, the conversion information holding device 14, and so on may be configured as a single device.

The system control device 10, the user interface 11, the autonomous mobile body 12, the route determination device 13, the conversion information holding device 14, and the sensor node 15 each include an information processing device comprising a CPU as a computer and storage media such as a ROM, a RAM, and an HDD. The functions and internal configurations of each device are described in detail later.
Next, the service application software (hereinafter abbreviated as the application) provided by the autonomous mobile body control system is explained. First, the screen images displayed on the user interface 11 when the user inputs position information are explained with reference to FIGS. 2(A) and 2(B).

Next, the screen images displayed on the user interface 11 when the user views the current position of the autonomous mobile body 12 are explained with reference to FIGS. 3(A) and 3(B), along with an example of how the user operates the application in the autonomous mobile body control system.

In this description, the map display is described as a two-dimensional plane for convenience; however, in the present embodiment the user can specify a three-dimensional position including "height" and can also input "height" information. That is, in this embodiment the application can display a three-dimensional map.
FIG. 2(A) is a diagram showing an example of an input screen when the user inputs position information, and FIG. 2(B) is a diagram showing an example of a selection screen for selecting the autonomous mobile body to be used. When the user operates the display screen of the user interface 11 to access the Internet 16 and selects, for example, the route setting application of the autonomous mobile body control system, the WEB page of the system control device 10 is displayed.

What is first displayed on the WEB page is an input screen 40 for setting the departure point, waypoints, and arrival point when moving the autonomous mobile body 12. The input screen 40 has a list display button 48 for displaying a list of the autonomous mobile bodies that can be used (for example, mobilities capable of automated driving). When the user presses the list display button 48, a mobility list display screen 47 is displayed as shown in FIG. 2(B).

The user first selects the autonomous mobile body to be used (for example, a mobility capable of automated driving) on the list display screen 47. On the list display screen 47, for example, mobilities M1 to M3 are displayed in a selectable manner, but the number is not limited to this.
When the user selects one of the mobilities M1 to M3 by a touch operation, click operation, or the like, the screen automatically returns to the input screen 40 of FIG. 2(A), and the selected mobility name is displayed on the list display button 48. The user then inputs the location to be set as the departure point in the "departure point" input field 41.

The user also inputs the location to be set as a waypoint in the "waypoint 1" input field 42. Waypoints can be added: when the add waypoint button 44 is pressed once, one additional "waypoint 2" input field 46 is displayed, and the waypoint to be added can be input.

Each time the add waypoint button 44 is pressed, additional input fields 46 such as "waypoint 3" and "waypoint 4" are displayed, so a plurality of additional waypoints can be input. The user also inputs the place to be set as the arrival point in the "arrival point" input field 43. Although not shown in the figure, when the input fields 41 to 43, 46, etc. are clicked, a keyboard or the like for inputting characters is temporarily displayed.

Then, the user can set the movement route of the autonomous mobile body 12 by pressing the decision button 45. In the example of FIG. 2, "AAA" is set as the departure point, "BBB" as waypoint 1, and "CCC" as the arrival point.

The text entered in the input fields may be, for example, an address, or position information for indicating a specific location, such as latitude information and longitude information (hereinafter also referred to as latitude/longitude information), a store name, or a telephone number.
FIG. 3(A) is a diagram showing an example of a screen for confirming the current position of the autonomous mobile body, and FIG. 3(B) is a diagram showing an example of a map display screen when confirming the current position of the autonomous mobile body. Reference numeral 50 in FIG. 3(A) denotes a confirmation screen, which is displayed when the user operates an operation button (not shown) after setting the movement route of the autonomous mobile body 12 on the screen shown in FIG. 2(A).

On the confirmation screen 50, the current position of the autonomous mobile body 12 is displayed on the WEB page of the user interface 11, for example as the current location 56, so the user can easily grasp the current position.

By pressing the update button 57, the user can refresh the screen display information to show the latest state. By pressing the waypoint/arrival point change button 54, the user can change the departure point, waypoint, or arrival point. That is, the user can make changes by inputting the desired locations in the "departure point" input field 51, the "waypoint 1" input field 52, and the "arrival point" input field 53.

FIG. 3(B) shows an example of the map display screen 60 that replaces the confirmation screen 50 when the map display button 55 of FIG. 3(A) is pressed. The map display screen 60 displays the position of the current location 62 on a map, making the current location of the autonomous mobile body 12 easier to confirm. When the user presses the back button 61, the display returns to the confirmation screen 50 of FIG. 3(A).

As described above, by operating the user interface 11, the user can easily set a movement route for moving the autonomous mobile body 12 from one predetermined location to another. Such a route setting application can also be applied to, for example, a taxi dispatch service or a drone home delivery service.
Next, configuration and function examples of 10 to 15 in FIG. 1 are explained in detail with reference to FIG. 4. FIG. 4 is a functional block diagram showing an internal configuration example of 10 to 15 in FIG. 1. Some of the functional blocks shown in FIG. 4 are realized by causing a computer (not shown) included in each device to execute a computer program stored in a memory (not shown) as a storage medium.

However, some or all of them may be realized by hardware. As the hardware, a dedicated circuit (ASIC), a processor (reconfigurable processor, DSP), or the like can be used.

The functional blocks shown in FIG. 4 do not have to be built into the same housing, and may be configured as separate devices connected to each other via signal paths.
In FIG. 4, the user interface 11 includes an operation unit 11-1, a control unit 11-2, a display unit 11-3, an information storage unit (memory/HDD) 11-4, and a network connection unit 11-5. The user interface 11 is, for example, an information processing device such as a smartphone, tablet terminal, or smartwatch.

The operation unit 11-1 is composed of a touch panel, key buttons, and the like, and is used for data input. The display unit 11-3 is, for example, a liquid crystal screen, and is used to display route information and other data.

The display screens of the user interface 11 shown in FIGS. 2 and 3 are displayed on the display unit 11-3. Using the menu displayed on the display unit 11-3, the user can select a movement route, input information, confirm information, and so on.

In other words, the operation unit 11-1 and the display unit 11-3 provide an operation interface for the user. Instead of providing the operation unit 11-1 and the display unit 11-3 separately, a touch panel may serve as both.

The control unit 11-2 incorporates a CPU as a computer, manages the various applications on the user interface 11, manages modes such as information input and information confirmation, and controls communication processing. It also controls the processing in each unit of the device.

The information storage unit (memory/HDD) 11-4 is a recording medium for holding necessary information, such as the computer programs to be executed by the CPU. The network connection unit 11-5 controls communication performed via the Internet, a LAN, a wireless LAN, or the like.

Thus, the user interface 11 of the present embodiment displays the input screen 40 for the departure point, waypoints, and arrival point on the browser screen provided by the system control device 10, and can accept input of position information such as the departure point, waypoints, and arrival point from the user. By displaying the confirmation screen 50 and the map display screen 60 on the browser screen, the user interface 11 can display the current position of the autonomous mobile body 12.
The route determination device 13 in FIG. 4 includes a map information management unit 13-1, a control unit 13-2, a position/route information management unit 13-3, an information storage unit (memory/HDD) 13-4, and a network connection unit 13-5. The map information management unit 13-1 holds wide-area map information, searches for route information indicating a route on the map based on designated predetermined position information, and transmits the route information resulting from the search to the position/route information management unit 13-3.

In this embodiment, the map information is map information of a three-dimensional space, including information such as topography and latitude/longitude/altitude. The map information also includes regulatory information related to road traffic law, such as roadways, sidewalks, directions of travel, and traffic regulations.

The map information further includes traffic regulation information that changes with time, such as roads that become one-way or pedestrian-only during certain time periods, together with the corresponding time information. The control unit 13-2 incorporates a CPU as a computer and controls the processing in each unit of the route determination device 13.

The position/route information management unit 13-3 manages the position information of the autonomous mobile body acquired via the network connection unit 13-5, transmits the position information to the map information management unit 13-1, and manages the route information obtained from the map information management unit 13-1 as the search result. The control unit 13-2 converts the route information managed by the position/route information management unit 13-3 into a predetermined data format according to a request from an external system and transmits it to the external system.

As described above, in the present embodiment, the route determination device 13 is configured to search for a route that complies with the Road Traffic Act and the like based on designated position information, and to output the route information in a predetermined data format.
The conversion information holding device 14 in FIG. 4 includes a position/route information management unit 14-1, a unique identifier management unit 14-2, a control unit 14-3, a format database 14-4, an information storage unit (memory/HDD) 14-5, and a network connection unit 14-6.

The conversion information holding device 14 can function as formatting means that assigns a unique identifier to a three-dimensional space defined by latitude/longitude/height and that formats and stores spatial information about the state and time of objects existing in that space in association with the unique identifier.

The position/route information management unit 14-1 manages predetermined position information acquired through the network connection unit 14-6 and transmits the position information to the control unit 14-3 in accordance with requests from the control unit 14-3. The control unit 14-3 incorporates a CPU as a computer and controls the processing in each unit of the conversion information holding device 14.

Based on the position information acquired from the position/route information management unit 14-1 and the format information managed by the format database 14-4, the control unit 14-3 converts the position information into unique identifiers defined by the format and transmits them to the unique identifier management unit 14-2.

The format is explained in detail later; briefly, an identifier (hereinafter, a unique identifier) is assigned to each space defined with a predetermined position as its origin, and the space is managed by that unique identifier. In this embodiment, the corresponding unique identifier and the information in the space can be acquired based on predetermined position information. A sketch of such a conversion is given below.
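A minimal sketch, assuming a simple fixed-grid quantization of latitude/longitude/height, of how a route given as position information could be converted into identifier-based route information; the grid sizes and function names are invented for illustration and are not specified by this disclosure.

```python
def to_unique_id(lat, lon, height, cell_deg=0.001, cell_h_m=10.0):
    # Hypothetical scheme: quantize the 3D coordinate onto a fixed grid.
    return (round(lat / cell_deg), round(lon / cell_deg), round(height / cell_h_m))

def route_to_identifiers(route_positions):
    """Convert a list of (lat, lon, height) points into unique identifiers."""
    ids = []
    for lat, lon, height in route_positions:
        uid = to_unique_id(lat, lon, height)
        if not ids or ids[-1] != uid:   # drop consecutive duplicates
            ids.append(uid)
    return ids

route = [(35.6581, 139.7017, 40.0), (35.65812, 139.7017, 40.0),
         (35.6592, 139.7030, 40.0)]
print(route_to_identifiers(route))  # two identifiers: first two points share one
```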
 固有識別子管理部14-2は、制御部14-3にて変換した固有識別子を管理するとともにネットワーク接続部14-6を通じて送信する。フォーマットデータベース14-4は、フォーマットの情報を管理するとともに、制御部14-3の要求に従って、フォーマットの情報を制御部14-3に送信する。 The unique identifier management unit 14-2 manages the unique identifier converted by the control unit 14-3 and transmits it through the network connection unit 14-6. The format database 14-4 manages format information and transmits the format information to the control unit 14-3 in accordance with a request from the control unit 14-3.
 又、ネットワーク接続部14-6を通じて取得した空間内の情報をフォーマットを用いて管理する。変換情報保持装置14は、外部の機器、装置、ネットワークにより取得された空間に関する情報を、固有識別子と紐づけて管理する。又、外部の機器、装置、ネットワークに対して固有識別子及びそれに紐づく空間に関する情報を提供する。 Also, the information in the space acquired through the network connection unit 14-6 is managed using a format. The conversion information holding device 14 manages information related to space acquired by external devices, devices, and networks in association with unique identifiers. In addition, it provides information on unique identifiers and associated spaces to external devices, devices, and networks.
 以上のように、変換情報保持装置14は、所定の位置情報を基に、固有識別子と空間内の情報を取得し、その情報を自身に接続された外部の機器、装置、ネットワークが共有できる状態に管理、提供する。又、変換情報保持装置14は、システム制御装置10に指定された位置情報を、固有識別子に変換し、システム制御装置10に提供する。 As described above, the conversion information holding device 14 acquires the unique identifier and the information in the space based on the predetermined position information, and can share the information with external devices, devices, and networks connected to itself. managed and provided to Further, the conversion information holding device 14 converts the position information specified by the system control device 10 into a unique identifier and provides the system control device 10 with the unique identifier.
 In FIG. 4, the system control device 10 includes a unique identifier management unit 10-1, a control unit 10-2, a position/route information management unit 10-3, an information storage unit (memory/HDD) 10-4, and a network connection unit 10-5. The position/route information management unit 10-3 holds simple map information that associates terrain information with latitude/longitude information, and manages predetermined position information and route information acquired through the network connection unit 10-5.
 The position/route information management unit 10-3 can also divide route information at predetermined intervals and generate position information, such as latitude/longitude, for each division point. The unique identifier management unit 10-1 manages the information obtained by converting position information and route information into unique identifiers.
 The control unit 10-2 has a CPU as a computer, supervises the communication functions of the system control device 10 for position information, route information, and unique identifiers, and controls the processing in each component of the system control device 10.
 The control unit 10-2 also provides a web page to the user interface 11 and transmits predetermined position information acquired from the web page to the route determination device 13. It further acquires predetermined route information from the route determination device 13 and transmits each piece of position information in the route information to the conversion information holding device 14. It then transmits the route information converted into unique identifiers, acquired from the conversion information holding device 14, to the autonomous mobile body 12.
 As described above, the system control device 10 is configured to acquire predetermined position information designated by the user, to transmit and receive position information and route information, to generate position information, and to transmit and receive route information expressed with unique identifiers.
 Based on the position information input to the user interface 11, the system control device 10 also collects the route information necessary for the autonomous mobile body 12 to move autonomously and provides the autonomous mobile body 12 with route information expressed with unique identifiers. In the present embodiment, the system control device 10, the route determination device 13, and the conversion information holding device 14 function, for example, as servers.
 In FIG. 4, the autonomous mobile body 12 includes a detection unit 12-1, a control unit 12-2, a direction control unit 12-3, an information storage unit (memory/HDD) 12-4, a network connection unit 12-5, and a drive unit 12-6. The detection unit 12-1 has, for example, a plurality of image sensors and a function of measuring distance based on the phase differences between the imaging signals obtained from them.
 It also has a self-position estimation function that acquires detection information such as the surrounding terrain and obstacles like building walls (hereinafter, detection information) and estimates its own position based on the detection information and map information.
 The detection unit 12-1 further has a self-position detection function such as GPS (Global Positioning System) and a direction detection function such as a geomagnetic sensor. Based on the acquired detection information, self-position estimation information, and direction detection information, the control unit 12-2 can generate a three-dimensional map of cyberspace.
 Here, a three-dimensional map of cyberspace is one that can express, as digital data, spatial information equivalent to the positions of features in the real world. Within this three-dimensional map of cyberspace, the autonomous mobile body 12 existing in the real world and the feature information around it are held as spatially equivalent digital data. Using this digital data therefore enables efficient movement.
 The three-dimensional map of cyberspace used in this embodiment is described below using FIG. 5 as an example. FIG. 5(A) shows the spatial positional relationship in the real world between the autonomous mobile body 12 and a pillar 99 that exists as surrounding feature information, and FIG. 5(B) shows the autonomous mobile body 12 and the pillar 99 mapped into an arbitrary XYZ coordinate space whose origin is the position P0.
 In FIGS. 5(A) and 5(B), the position of the autonomous mobile body 12 is identified as the position α0 within the autonomous mobile body 12 from latitude/longitude position information acquired by GPS or the like (not shown) mounted on the autonomous mobile body 12. The orientation of the autonomous mobile body 12 is identified from the difference between the azimuth αY acquired by an electronic compass (not shown) or the like and the movement direction 12Y of the autonomous mobile body 12.
 The position of the pillar 99 is identified as the position of its vertex 99-1 from position information measured in advance. The ranging function of the autonomous mobile body 12 makes it possible to obtain the distance from α0 of the autonomous mobile body 12 to the vertex 99-1. In FIG. 5(A), when the movement direction 12Y is taken as an axis of the XYZ coordinate system and α0 as the origin, the vertex 99-1 is expressed by the coordinates (Wx, Wy, Wz).
 In the three-dimensional map of cyberspace, the information acquired in this way is managed as digital data and can be reconstructed as spatial information, as shown in FIG. 5(B), by the system control device 10, the route determination device 13, and so on. FIG. 5(B) shows the autonomous mobile body 12 and the pillar 99 mapped into an arbitrary XYZ coordinate space with P0 as the origin.
 By setting P0 to a predetermined latitude and longitude in the real world and taking real-world north as the Y-axis direction, the autonomous mobile body 12 can be expressed as P1 and the pillar 99 as P2 in this arbitrary XYZ coordinate space.
 Specifically, the position P1 of α0 in this space can be calculated from the latitude and longitude of α0 and the latitude and longitude of P0; the pillar 99 can likewise be calculated as P2. In this example only the autonomous mobile body 12 and the pillar 99 are represented in the three-dimensional map of cyberspace, but a larger number of objects can of course be handled in the same way. As described above, a three-dimensional map is a mapping of the real-world self-position and objects into a three-dimensional space.
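 As a concrete illustration of this mapping, the following is a minimal sketch in Python (the patent does not specify an implementation; the function name, the equirectangular approximation, and the example coordinates are all assumptions made for illustration) of projecting latitude/longitude onto a local XY plane whose origin is P0 and whose +Y axis points north.

    import math

    EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius, an approximation

    def latlon_to_local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
        # Equirectangular approximation: valid over small areas, with the
        # origin P0 at (origin_lat_deg, origin_lon_deg) and north as +Y.
        lat0 = math.radians(origin_lat_deg)
        x = math.radians(lon_deg - origin_lon_deg) * math.cos(lat0) * EARTH_RADIUS_M
        y = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
        return x, y

    # Hypothetical coordinates: place alpha0 (the mobile body) and the pillar
    # relative to P0 to obtain P1 and P2 in the local coordinate space.
    p1 = latlon_to_local_xy(35.6290, 139.7945, 35.6285, 139.7940)
    p2 = latlon_to_local_xy(35.6292, 139.7948, 35.6285, 139.7940)
    print(p1, p2)

 Height can be handled directly as a third axis, since altitude is already metric.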
 Returning to FIG. 4, the autonomous mobile body 12 stores learning result data for machine-learned object detection in, for example, the information storage unit (memory/HDD) 12-4, and can detect objects in captured images using machine learning. Detection information can also be acquired from external systems via the network connection unit 12-5 and reflected in the three-dimensional map.
 The control unit 12-2 has a CPU as a computer, supervises the movement, direction change, and autonomous travel functions of the autonomous mobile body 12, and controls the processing in each component of the autonomous mobile body 12.
 The direction control unit 12-3 changes the movement direction of the autonomous mobile body 12 by changing the drive direction of the drive unit 12-6. The drive unit 12-6 consists of a drive device such as a motor and generates the propulsive force of the autonomous mobile body 12. The autonomous mobile body 12 reflects its own position, detection information, and object detection information in the three-dimensional map, generates a route that keeps a certain distance from the surrounding terrain, buildings, obstacles, and objects, and travels autonomously.
 The route determination device 13 generates routes taking into account, for example, regulatory information related to the Road Traffic Act. The autonomous mobile body 12, for its part, detects the positions of surrounding obstacles more precisely along the route determined by the route determination device 13 and, based on its own size, generates a route for moving without contacting them.
 The information storage unit (memory/HDD) 12-4 of the autonomous mobile body 12 can also store the mobility type of the autonomous mobile body itself. The mobility type is the category of the mobile body, for example automobile, bicycle, or drone. Format route information, described later, can be generated based on this mobility type.
 An example of the body configuration of the autonomous mobile body 12 in this embodiment is now described with reference to FIG. 6, a perspective view showing an example of the mechanical configuration of the autonomous mobile body 12 according to the first embodiment. In this embodiment the autonomous mobile body 12 is described as a traveling body having wheels, but it is not limited to this and may be a flying body such as a drone.
 In FIG. 6, the autonomous mobile body 12 carries the detection unit 12-1, the control unit 12-2, the direction control unit 12-3, the information storage unit (memory/HDD) 12-4, the network connection unit 12-5, and the drive unit 12-6, and these components are electrically connected to one another. At least two drive units 12-6 and at least two direction control units 12-3 are provided on the autonomous mobile body 12.
 The direction control unit 12-3 changes the movement direction of the autonomous mobile body 12 by changing the direction of the drive unit 12-6 through rotation of a shaft, and the drive unit 12-6 moves the autonomous mobile body 12 forward and backward through rotation of a shaft. The configuration described with reference to FIG. 6 is one example and is not limiting; the movement direction may be changed using, for example, omni wheels.
 The autonomous mobile body 12 is, for example, a mobile body using SLAM (Simultaneous Localization and Mapping) technology. It is configured to move autonomously along a designated route based on the detection information obtained by the detection unit 12-1 and the like, as well as detection information from external systems acquired via the Internet 16.
 The autonomous mobile body 12 can perform trace movement, tracing finely specified points, and can also pass through roughly set points while generating its own route information for the spaces between them. As described above, the autonomous mobile body 12 of this embodiment can move autonomously based on the route information using unique identifiers provided by the system control device 10.
 Returning to FIG. 4, the sensor node 15 is an external system, for example a video surveillance system such as a roadside camera unit, and includes a detection unit 15-1, a control unit 15-2, an information storage unit (memory/HDD) 15-3, and a network connection unit 15-4. The detection unit 15-1 is an imaging unit composed of, for example, a camera; it acquires detection information for the area it can observe and has object detection and ranging functions.
 The control unit 15-2 incorporates a CPU as a computer, supervises the detection, data storage, and data transmission functions of the sensor node 15, and controls the processing in each unit of the sensor node 15. It stores the detection information acquired by the detection unit 15-1 in the information storage unit (memory/HDD) 15-3 and transmits it to the conversion information holding device 14 through the network connection unit 15-4.
 As described above, the sensor node 15 is configured to store in the information storage unit 15-3, and to communicate, detection information such as image information detected by the detection unit 15-1, feature point information of detected objects, and position information. The sensor node 15 provides the conversion information holding device 14 with the detection information for the area it can observe.
 Next, the specific hardware configuration of each control unit in FIG. 4 is described. FIG. 7 is a block diagram showing an example of the specific hardware configuration of the control units 10-2, 11-2, 12-2, 13-2, 14-3, and 15-2. The hardware configuration is not limited to that shown in FIG. 7, and it is not necessary to include all of the blocks shown there.
 In FIG. 7, reference numeral 21 denotes a CPU, a computer that supervises the computation and control of the information processing device. The RAM 22 is a recording medium that functions as the main memory of the CPU 21 and as an area for execution programs, for their execution, and for data. The ROM 23 is a recording medium in which the operating procedures (programs) of the CPU 21 are recorded.
 The ROM 23 comprises a program ROM recording the basic software (OS), the system program that performs equipment control of the information processing device, and a data ROM recording the information necessary to operate the system. The HDD 29, described later, may be used instead of the ROM 23.
 The network I/F 24 is a network interface (NETIF) that controls data transfer between information processing devices via the Internet 16 and diagnoses the connection status. Reference numeral 25 denotes a video RAM (VRAM), which renders the images to be displayed on the screen of the LCD 26 and controls their display. The LCD 26 is a display device such as a monitor (hereinafter, LCD).
 The controller 27 is a controller (hereinafter, KBC) for controlling input signals from the external input device 28. The external input device 28 (hereinafter, KB) accepts operations performed by the user; for example, a keyboard or a pointing device such as a mouse is used.
 The HDD 29 is a hard disk drive (hereinafter, HDD) used for storing application programs and various data. The application programs in this embodiment are software programs that execute the various processing functions of this embodiment.
 The CDD 30 is an external input/output device (hereinafter, CDD) for inputting and outputting data to and from removable media 31 serving as removable data recording media, such as a CD-ROM drive, a DVD drive, or a Blu-ray (registered trademark) disc drive.
 The CDD 30 is used, for example, when reading the above-mentioned application programs from removable media. Reference numeral 31 denotes the removable media read by the CDD 30, for example a CD-ROM disc, a DVD, or a Blu-ray disc.
 The removable media may also be a magneto-optical recording medium (for example, an MO) or a semiconductor recording medium (for example, a memory card). The application programs and data stored in the HDD 29 may also be stored on the removable media 31 and used from there. Reference numeral 20 denotes the transmission buses (address bus, data bus, input/output bus, and control bus) connecting the units described above.
 Next, the control operations in the autonomous mobile body control system that realize the application described in FIGS. 2 and 3 are described in detail with reference to FIGS. 8 to 10. FIG. 8 is a sequence diagram illustrating the processing executed by the autonomous mobile body control system according to the first embodiment, FIG. 9 is a continuation of FIG. 8, and FIG. 10 is a continuation of FIG. 9.
 FIGS. 8 to 10 show the processing executed by each device from the time the user inputs position information to the user interface 11 until the current position information of the autonomous mobile body 12 is received. The operations of each step in the sequences of FIGS. 8 to 10 are performed by the computers within the control units of devices 10 to 15 executing computer programs stored in memory.
 First, in step S201, the user uses the user interface 11 to access the web page provided by the system control device 10. In step S202, the system control device 10 displays on the web page a position input screen as described in FIG. 2. In step S203, as described in FIG. 2, the user selects an autonomous mobile body (mobility) and inputs position information indicating a departure point, waypoints, and an arrival point (hereinafter, position information).
 The position information may be a word designating a specific place, such as a building name, station name, or address (hereinafter, a position word), or it may be specified by pointing at a specific position on a map displayed on the web page (hereinafter, a point).
 In step S204, the system control device 10 stores the type information of the selected autonomous mobile body 12 and the input information such as the entered position information. If the position information is a position word, the system control device 10 stores the position word. If the position information is a point, the system control device 10 searches for the latitude and longitude corresponding to the point based on the simple map information held in the position/route information management unit 10-3 and stores that latitude and longitude.
 Next, in step S205, the system control device 10 designates, from the mobility type (category of mobile body) of the autonomous mobile body 12 specified by the user, the types of route on which it can travel (hereinafter, route types). In step S206, it transmits these to the route determination device 13 together with the position information.
 As described above, the mobility type is, for example, a legally distinguished category of mobile body, such as automobile, bicycle, or drone. Route types are, for example, general roads, expressways, motorways, designated sidewalks, roadside strips of general roads, and bicycle lanes.
 For example, if the mobility type is automobile, route types such as general roads, expressways, and motorways are designated; if the mobility type is bicycle, designated sidewalks, roadside strips of general roads, bicycle lanes, and the like are designated.
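 The correspondence between mobility type and route types can be pictured as a simple lookup; the sketch below (Python; the category names and table contents are illustrative assumptions drawn only from the examples in this paragraph, not a defined API) shows one hypothetical way step S205 might derive the route types.

    # Hypothetical mapping from mobility type to permitted route types (step S205).
    ROUTE_TYPES_BY_MOBILITY = {
        "automobile": ["general_road", "expressway", "motorway"],
        "bicycle": ["designated_sidewalk", "roadside_strip", "bicycle_lane"],
    }

    def route_types_for(mobility_type):
        # Return the route types a mobile body of this type may travel;
        # raises KeyError for a mobility type without a defined mapping.
        return ROUTE_TYPES_BY_MOBILITY[mobility_type]

    print(route_types_for("bicycle"))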
 In step S207, the route determination device 13 enters the received position information into the map information it holds as the departure point, waypoints, and arrival point. If the position information is a position word, it searches the map information with the position word (a preliminary search) and enters the corresponding latitude/longitude information. If the position information is latitude/longitude information, it is entered into the map information as-is. The route determination device 13 may also search for the route in advance at this stage.
 Subsequently, in step S208, the route determination device 13 searches for a route from the departure point through the waypoints to the arrival point. The route searched for here conforms to the route types. If the route was searched for in advance in step S207, the previously found route is modified as appropriate based on the route types.
 Then, in step S209, as the result of the search, the route determination device 13 outputs the route from the departure point through the waypoints to the arrival point (hereinafter, route information) in GPX format (GPS eXchange Format) and transmits it to the system control device 10.
 A GPX file mainly consists of three kinds of elements: waypoints (point information with no ordering relation), routes (ordered point information with time information added), and tracks (collections of multiple points: trajectories).
 Latitude/longitude are recorded as attribute values of each point, and elevation, geoid height, GPS reception status and accuracy, and so on are recorded as child elements. The minimum element required in a GPX file is the latitude/longitude information of a single point; describing any other information is optional. What is output as route information is a route: a collection of ordered points consisting of latitude/longitude. The route information may be in another format as long as it satisfies the above.
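 For reference, a GPX route is an XML document whose rte element lists ordered rtept points carrying lat/lon attributes. The sketch below (Python standard library only; the file content is an invented example, not output of the route determination device 13, and it omits the usual GPX namespace for brevity) parses such a minimal route into ordered (latitude, longitude) pairs.

    import xml.etree.ElementTree as ET

    # Minimal illustrative GPX route: ordered points, latitude/longitude only.
    GPX = """<?xml version="1.0"?>
    <gpx version="1.1" creator="example">
      <rte>
        <rtept lat="35.6285" lon="139.7940"/>
        <rtept lat="35.6290" lon="139.7945"/>
        <rtept lat="35.6295" lon="139.7950"/>
      </rte>
    </gpx>"""

    root = ET.fromstring(GPX)
    # Plain tag names work here because the example omits the GPX namespace.
    route = [(float(p.get("lat")), float(p.get("lon"))) for p in root.iter("rtept")]
    print(route)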
 Here, a configuration example of the format managed in the format database 14-4 of the conversion information holding device 14 is described in detail with reference to FIGS. 11(A), 11(B), and 12.
 FIG. 11(A) is a diagram showing the latitude/longitude of the earth, and FIG. 11(B) is a perspective view showing the predetermined space 100 of FIG. 11(A). In FIG. 11(B), the center of the predetermined space 100 is taken as the center 101. FIG. 12 is a diagram schematically showing the spatial information within the space 100.
 In FIGS. 11(A) and 11(B), the format divides the three-dimensional space of the earth into spaces of a predetermined unit volume, each determined by a range originating at a latitude/longitude/height, and makes each space manageable by attaching a unique identifier to it. Here, the space 100 is shown as one such predetermined three-dimensional space.
 The space 100 is a divided space whose center 101 is defined by 20 degrees north latitude, 140 degrees east longitude, and height (altitude, elevation) H, and whose width is D in the latitudinal direction, W in the longitudinal direction, and T in the height direction. It is one of the spaces into which the space of the earth is divided by ranges originating at latitude/longitude/height.
 For convenience, only the space 100 is shown in FIG. 11(A), but under the definition of the format, spaces defined in the same way as the space 100 are arranged side by side in the latitude/longitude/height directions as described above. Each arranged divided space has its horizontal position defined by latitude/longitude, is also stacked in the height direction, and has its vertical position defined by height.
 In FIG. 11(B) the center 101 of the divided space is set as the latitude/longitude/height origin, but this is not limiting; for example, a corner of the space or the center of its bottom face may be used as the origin. The shape need only be approximately a rectangular parallelepiped; when tiling the surface of a sphere such as the earth, setting the top face slightly wider than the bottom face allows the spaces to be laid out with fewer gaps.
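 One way to picture this division is to snap a coordinate to the grid of origins and derive the identifier from the grid indices. The sketch below (Python) is a minimal illustration under assumed grid widths; the actual widths D, W, T and the identifier encoding registered in the format database 14-4 are not specified here, so every constant is hypothetical.

    # Hypothetical grid widths: D and W in degrees, T in meters.
    LAT_STEP = 0.0001
    LON_STEP = 0.0001
    ALT_STEP = 5.0

    def unique_identifier(lat, lon, alt):
        # Snap a latitude/longitude/height to its divided space and build
        # an identifier string from the integer grid indices.
        i = int(lat // LAT_STEP)
        j = int(lon // LON_STEP)
        k = int(alt // ALT_STEP)
        return f"{i}_{j}_{k}"

    print(unique_identifier(20.00003, 140.00017, 12.0))  # e.g. "200000_1400001_2"

 Because the identifier is a pure function of position, any device that shares the grid definition can compute the same identifier for the same space, which is what allows the spatial information to be shared.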
 Taking the space 100 in FIG. 12 as an example, the format database 14-4 stores, in formatted form, information about the types of objects that exist in or can enter the range of the space 100 and about time restrictions (spatial information), each associated with (linked to) a unique identifier. The formatted spatial information is stored in time series, from past to future. In this embodiment, "associate" and "link" are used with the same meaning.
 That is, the conversion information holding device 14 formats spatial information about the types of objects that can exist in or enter a three-dimensional space defined by latitude/longitude/height and about time restrictions, associates it with a unique identifier, and stores it in the format database 14-4.
 The spatial information is updated at predetermined update intervals based on information supplied by information supply means such as external systems (for example, the sensor node 15) communicably connected to the conversion information holding device 14, and is shared with other external systems communicably connected to the conversion information holding device 14.
 In applications that do not require time-related information, spatial information that does not include time-related information may be used. A non-unique identifier may also be used instead of a unique identifier.
 Information about the operator or individual owning an external system, information on how to access the detection information acquired by the external system, and specification information of the detection information, such as its metadata and communication format, can also be managed as spatial information in association with a unique identifier.
 As described above, in the first embodiment, information about the types of objects that can exist in or enter a three-dimensional space defined by latitude/longitude/height and about time restrictions (hereinafter, spatial information) is formatted in association with unique identifiers and stored in a database, and the formatted spatial information makes the space-time manageable.
 In this embodiment, latitude/longitude/height are used as the coordinate system defining the positions of the spaces (voxels). However, the coordinate system is not limited to this; various coordinate systems can be used, for example an XYZ coordinate system with arbitrary axes, or MGRS (Military Grid Reference System) for the horizontal coordinates.
 A pixel coordinate system, which uses the pixel positions of an image as coordinates, or a tile coordinate system, which divides a predetermined area into units called tiles arranged in the X/Y directions, may also be used.
 The conversion information holding device 14 of the first embodiment also executes a formatting step that formats and stores information about the update interval of the spatial information in association with the unique identifier. The update-interval information formatted in association with the unique identifier may be an update frequency; information about the update interval includes the update frequency.
 Returning to FIG. 8, the continuation of the processing executed by the autonomous mobile body control system is described. In step S210, the system control device 10 checks the interval between the points in the received route information and creates position point cloud data in which the interval of the point information is matched to the interval between the origin positions of the divided spaces defined by the format.
 If the interval of the point information is smaller than the interval between the origin positions of the divided spaces, the system control device 10 thins out the point information in the route information to match the origin-position interval of the divided spaces and uses the result as the position point cloud data. If the interval of the point information is larger than the interval between the origin positions of the divided spaces, the system control device 10 interpolates the point information, within a range that does not deviate from the route information, to produce the position point cloud data.
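 A minimal sketch of this thinning/interpolation might look as follows (Python; straight-line interpolation between consecutive points is an assumption made for illustration, since the embodiment only requires that the result not deviate from the route).

    import math

    def resample(points, step):
        # Resample an ordered list of (x, y) route points so consecutive
        # points are roughly `step` apart: interpolate where the spacing
        # is larger, and skip points (thinning) where it is smaller.
        out = [points[0]]
        for p in points[1:]:
            while True:
                last = out[-1]
                d = math.dist(last, p)
                if d < step:
                    break  # too close: this point is thinned out
                t = step / d
                out.append((last[0] + (p[0] - last[0]) * t,
                            last[1] + (p[1] - last[1]) * t))
        if out[-1] != points[-1]:
            out.append(points[-1])  # always keep the arrival point
        return out

    print(resample([(0.0, 0.0), (0.0, 3.5)], 1.0))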
 Next, as shown in step S211 of FIG. 9, the system control device 10 transmits the latitude/longitude information of each point in the position point cloud data to the conversion information holding device 14 in route order. In step S212, the conversion information holding device 14 searches the format database 14-4 for the unique identifier corresponding to the received latitude/longitude information and, in step S213, transmits it to the system control device 10.
 In step S214, the system control device 10 arranges the received unique identifiers in the same order as the original position point cloud data and stores them as route information using unique identifiers (hereinafter, format route information). In step S214, then, the system control device 10, acting as route generation means, acquires spatial information from the database of the conversion information holding device 14 and generates route information about the movement route of the mobile body based on the acquired spatial information and the type information of the mobile body.
 Here, the process of generating position point cloud data from the route information and converting it into route information using unique identifiers is described in detail with reference to FIGS. 13(A), 13(B), and 13(C). FIG. 13(A) is a conceptual diagram of the route information displayed on map information, FIG. 13(B) is a conceptual diagram of route information using the position point cloud data displayed on map information, and FIG. 13(C) is a conceptual diagram of route information using unique identifiers displayed on map information.
 In FIG. 13(A), 120 denotes the route information, 121 denotes an impassable area through which the autonomous mobile body 12 cannot pass, and 122 denotes a movable area in which the autonomous mobile body 12 can move. The route information 120, generated by the route determination device 13 from the position information of the departure point, waypoints, and arrival point designated by the user, is generated as a route that passes through the departure point, waypoints, and arrival point and runs over the movable area 122 on the map information.
 In FIG. 13(B), 123 denotes a plurality of position information points on the route information. Having acquired the route information 120, the system control device 10 generates the position information 123 arranged at predetermined intervals on the route information 120.
 Each piece of position information 123 can be expressed as latitude/longitude/height; in the first embodiment this position information 123 is called the position point cloud data. The system control device 10 transmits the position information 123 (the latitude/longitude/height of each point) one point at a time to the conversion information holding device 14 and converts it into unique identifiers.
 In FIG. 13(C), 124 denotes position space information in which the position information 123 has been converted into unique identifiers one point at a time, with the spatial range defined by each unique identifier represented by a rectangular frame. Converting the position information into unique identifiers yields the position space information 124. In this way the route represented by the route information 120 is converted into, and represented by, the continuous position space information 124.
 Each piece of position space information 124 is linked to information about the types of objects that can exist in or enter the range of the space and about time restrictions. In the first embodiment this continuous position space information 124 is called the format route information.
 Returning to FIG. 9, the continuation of the processing executed by the autonomous mobile body control system is described. After step S214, in step S215, the system control device 10 downloads from the conversion information holding device 14 the spatial information linked to each unique identifier of the format route information.
 Then, in step S216, the system control device 10 converts the spatial information into a form that can be reflected in the three-dimensional cyberspace map of the autonomous mobile body 12 and creates information indicating the positions of multiple objects (obstacles) within the given space (hereinafter, a cost map). The cost map may be created at the outset for the spaces of the entire route in the format route information, or it may be created in segments of a fixed area and updated sequentially.
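 One way to realize such a cost map is as an occupancy table keyed by the same unique identifiers; the sketch below (Python; the data layout and the object_present field are hypothetical, since the embodiment does not prescribe a structure) marks the cells of the format route information whose downloaded spatial information reports an obstacle.

    # Hypothetical abstracted spatial information downloaded in step S215:
    # unique identifier -> whether an object (obstacle) is present.
    spatial_info = {
        "200000_1400001_2": {"object_present": True},
        "200000_1400002_2": {"object_present": False},
    }

    def build_cost_map(route_ids, spatial_info):
        # 1.0 for cells reported as occupied, 0.0 for free or unknown cells.
        return {uid: 1.0 if spatial_info.get(uid, {}).get("object_present") else 0.0
                for uid in route_ids}

    print(build_cost_map(["200000_1400001_2", "200000_1400002_2"], spatial_info))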
 Next, in step S217, the system control device 10 stores the format route information and the cost map linked to the unique identification number (unique identifier) assigned to the autonomous mobile body 12.
 The autonomous mobile body 12 monitors (hereinafter, polls) its own unique identification number via the network at predetermined time intervals and, in step S218, downloads the linked cost map. In step S219, the autonomous mobile body 12 reflects the latitude/longitude information of each unique identifier in the format route information as route information in the three-dimensional cyberspace map it has created.
 Next, in step S220, the autonomous mobile body 12 reflects the cost map in the three-dimensional cyberspace map as obstacle information on the route. If the cost map has been created in fixed segments, then after traversing the area for which the cost map was created, the autonomous mobile body downloads the cost map of the next area and updates the cost map.
 In step S221, the autonomous mobile body 12 moves along the route information while avoiding the objects (obstacles) entered in the cost map; that is, it performs movement control based on the cost map.
 At this time, in step S222, the autonomous mobile body 12 moves while performing object detection; if there is a difference from the cost map, it updates the cost map with the object detection information as it moves. In step S223, the autonomous mobile body 12 transmits the difference information with respect to the cost map to the system control device 10 together with the corresponding unique identifier.
 Having acquired the unique identifier and the difference information with respect to the cost map, the system control device 10 transmits spatial information to the conversion information holding device 14 in step S224 of FIG. 10, and in step S225 the conversion information holding device 14 updates the spatial information of the corresponding unique identifier.
 The content of the spatial information updated here does not reflect the difference information from the cost map as-is; it is abstracted by the system control device 10 before being transmitted to the conversion information holding device 14. The details of the abstraction are described later.
 In step S226, the autonomous mobile body 12 moving based on the format route information transmits to the system control device 10, every time it passes through a divided space linked to a unique identifier, the unique identifier linked to the space it is currently passing through.
 Alternatively, it may link this information to its own unique identification number at polling time. Based on the unique space identifier information received from the autonomous mobile body 12, the system control device 10 grasps the current position of the autonomous mobile body 12 on the format route information.
 By repeating step S226, the system control device 10 can grasp where the autonomous mobile body 12 currently is within the format route information. The system control device 10 may stop holding the unique identifiers of the spaces the autonomous mobile body 12 has already passed through, which also reduces the data volume held for the format route information.
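 Step S226 and the data reduction noted here can be sketched as follows (Python; treating the format route information as a queue of identifiers is an illustrative assumption).

    from collections import deque

    # Format route information held as an ordered queue of unique identifiers.
    format_route = deque(["id_a", "id_b", "id_c"])

    def on_identifier_received(uid, route):
        # Called when the autonomous mobile body reports the identifier of
        # the space it is passing through (step S226); identifiers of spaces
        # already passed are discarded to reduce the held data volume.
        while route and route[0] != uid:
            route.popleft()
        return uid  # current position on the format route information

    print(on_identifier_received("id_b", format_route), list(format_route))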
 In step S227, based on the grasped current position information of the autonomous mobile body 12, the system control device 10 creates the confirmation screen 50 and the map display screen 60 described in FIGS. 2 and 3 and displays them on the web page. Each time the autonomous mobile body 12 transmits the unique identifier indicating its current position to the system control device 10, the system control device 10 updates the confirmation screen 50 and the map display screen 60.
 Meanwhile, in step S228 of FIG. 8, the sensor node 15 stores the detection information for its detection range, abstracts the detection information in step S229, and transmits it to the conversion information holding device 14 as spatial information in step S230. The abstracted information indicates, for example, whether an object exists and whether the state of its existence has changed; it is not detailed information about the object.
 Detailed information about the object is stored in the memory within the sensor node. Then, in step S231, the conversion information holding device 14 stores the spatial information, i.e. the abstracted detection information, linked to the unique identifier of the position corresponding to that spatial information. As a result, the spatial information is stored under one unique identifier in the format database.
 When an external system other than the sensor node 15 makes use of the spatial information, the external system acquires and uses the detection information held within the sensor node 15, via the conversion information holding device 14, based on the spatial information in the conversion information holding device 14. In this case the conversion information holding device 14 also has the function of bridging the communication standards of the external system and the sensor node 15.
 By storing spatial information in this way not only for the sensor node 15 but across multiple devices, the conversion information holding device 14 serves to connect the data of multiple devices with a comparatively small amount of data. If the system control device 10 needs detailed object information when creating the cost map in steps S215 and S216 of FIG. 9, it can download the detailed information from the external system that stores the detailed detection information for that spatial information and use it.
 Suppose now that the sensor node 15 updates spatial information on the route of the format route information of the autonomous mobile body 12. In this case, the sensor node 15 acquires detection information in step S232 of FIG. 10, generates abstracted spatial information in step S233, and transmits it to the conversion information holding device 14 in step S234. The conversion information holding device 14 stores the spatial information in the format database 14-4 in step S235.
 The system control device 10 checks for changes in the spatial information of the format route information it manages at predetermined time intervals and, if there is a change, downloads the spatial information in step S236. Then, in step S237, it updates the cost map linked to the unique identification number assigned to the autonomous mobile body 12.
 In step S238, the autonomous mobile body 12 recognizes the update of the cost map by polling and reflects it in the three-dimensional cyberspace map it has created.
 As described above, by utilizing spatial information shared among multiple devices, the autonomous mobile body 12 can recognize in advance changes on the route that it cannot perceive by itself and respond to those changes.
 When the above series of system operations has been carried out and the autonomous mobile body 12 arrives at the arrival point in step S239, it transmits the unique identifier in step S240.
 The system control device 10, having thereby recognized the unique identifier, displays an arrival indication on the user interface 11 in step S241 and terminates the application.
 According to the first embodiment, a digital architecture format and an autonomous mobile body control system using it can be provided as described above.
 As described with reference to FIGS. 11(A), 11(B), and 12, the format database 14-4 stores information about the types of objects that can exist in or enter the range of the space 100 and about time restrictions (spatial information) in time series, from past to future. The spatial information is updated based on information input from external sensors and the like communicably connected to the conversion information holding device 14, and is shared with other external systems that can connect to the conversion information holding device 14.
 One kind of this spatial information is the type information of objects within the space. The type information of objects within the space here is information obtainable from map information, for example whether a stretch of road is a carriageway, a sidewalk, or a bicycle lane. Information such as the direction of travel of mobility on a carriageway and traffic regulations can likewise be defined as type information. Furthermore, as described later, type information can also be defined for the space itself.
 The coordinated operation of the conversion information holding device 14 and the system control device 10 that controls the autonomous mobile body 12 has been described above with reference to FIG. 4. However, besides the system control device 10, the conversion information holding device 14 can also be connected to a system control device that manages road information and to a system control device that manages information on sections other than roads.
 That is, as described above, the system control device 10 can transmit to the conversion information holding device 14 the position point cloud data that collectively denotes the position information 123 of FIG. 13(B). Similarly, a system control device managing road information and a system control device managing information on sections other than roads can transmit the corresponding data to the conversion information holding device 14.
 The corresponding data is the position point cloud data managed by the system control device that manages road information and by the system control device that manages information on sections other than roads. Each point of the position point cloud data is hereinafter called a position point.
 After transmission, the data is stored linked to the unique identifiers in the format database 14-4, and by updating the information as appropriate, the current state of the real world is accurately reflected in the conversion information holding device 14 so that the movement of the autonomous mobile body 12 is not impeded.
In Embodiment 1, the update interval of the spatial information differs according to the type of object existing in the space. That is, when the type of object existing in the space is a moving body, the interval is made shorter than when it is not a moving body. Likewise, when the type of object existing in the space is a road, the interval is made shorter than when it is a section.
When a plurality of objects exist in a space, the update interval of the spatial information for each object is set individually according to its type (for example, moving body, road, or section). Spatial information on the state and time of each of the plurality of objects existing in the space is then formatted and stored in association with the unique identifier. The load of updating the spatial information can therefore be reduced.
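To illustrate how such type-dependent update intervals might be scheduled, the following sketch shows one possible arrangement in Python. The interval values, class names, and the select_records_to_refresh helper are illustrative assumptions, not part of the disclosed system; only the ordering (moving body < road < section) follows the description above.

    from dataclasses import dataclass, field
    import time

    # Assumed update intervals in seconds; only their ordering
    # (moving body < road < section) follows the description above.
    UPDATE_INTERVALS = {
        "moving_body": 1.0,    # vehicles, pedestrians, robots
        "road": 60.0,          # roadway geometry and traffic regulations
        "section": 600.0,      # sections other than roads
    }

    @dataclass
    class SpatialRecord:
        unique_identifier: str   # identifier of the divided space
        object_type: str         # "moving_body", "road", or "section"
        state: dict              # state/time information of the object
        last_updated: float = field(default_factory=time.time)

        def needs_update(self, now: float) -> bool:
            # A record is stale once its type-specific interval has elapsed.
            return now - self.last_updated >= UPDATE_INTERVALS[self.object_type]

    def select_records_to_refresh(records):
        """Return only records whose interval has elapsed, so static
        objects do not add update load on every cycle."""
        now = time.time()
        return [r for r in records if r.needs_update(now)]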
In the description of the present embodiment, when the format route information described later is created from the position point cloud data, the position point cloud data is thinned out or interpolated so that the spaces (voxels) specified by the unique identifiers constituting the format route information are chained together without gaps.
However, the present invention is not limited to this. A movement route can be set so long as the spacing of the point information constituting the position point cloud data is at least the spacing between the origin (reference point) positions of the divided spaces, so that the divided spaces do not overlap one another.
The closer the spacing of the position point cloud data, the more precisely the movement route can be specified; on the other hand, the data volume of the entire movement route increases. Conversely, if the spacing of the position point cloud data is large, the movement route cannot be specified in detail, but the data volume of the entire movement route can be kept down.
In other words, the spacing of the position point cloud data can be adjusted appropriately to suit conditions such as the granularity with which the movement route is to be indicated to the autonomous mobile body 12 and the amount of data that can be handled. It is also possible to change the spacing only in parts, yielding a more suitable route setting.
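The thinning and interpolation described above can be pictured as resampling the position points at a chosen pitch along the path. The sketch below is a minimal illustration, assuming 2D points and a uniform pitch; the function name and the uniform-pitch policy are assumptions, and a real system could vary the pitch locally as just noted.

    import math

    def resample_route(points, pitch):
        """Resample a polyline of position points so that consecutive
        samples are spaced 'pitch' apart (for example, the spacing of the
        reference points of the divided spaces).
        points: list of (x, y); pitch: desired spacing in metres."""
        if not points:
            return []
        resampled = [points[0]]
        carry = 0.0  # distance already travelled toward the next sample
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            if seg == 0.0:
                continue
            d = pitch - carry  # distance along this segment to next sample
            while d <= seg:
                t = d / seg
                resampled.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
                d += pitch
            carry = seg - (d - pitch)
        return resampled

    # A coarse pitch yields fewer points (less data); a fine pitch yields a
    # more detailed route, mirroring the trade-off described above.
    route = resample_route([(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)], pitch=1.0)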
<Embodiment 2>
In Embodiment 2, in addition to the configuration of Embodiment 1, when the autonomous mobile body cannot detect its own position accurately with its self-position detection function, it detects its own position based on the information stored in the format database and moves accordingly.
In step S226 of FIG. 10, the operation in which the system control device 10 grasps the current position of the autonomous mobile body 12 on the format route information, based on the unique identifier information received from the autonomous mobile body 12, was described. Whether the autonomous mobile body 12 has passed through the divided space tied to each unique identifier is determined there by a self-position detection function, such as GPS, in the detection unit 12-1.
Consequently, the accuracy may drop at locations where GPS radio waves are difficult to receive, for example when moving underground or through a tunnel. Moreover, an autonomous mobile body that has no self-position detection function such as GPS cannot tell the system control device where it currently is within the format route information, so there were cases in which the autonomous mobile body control system of Embodiment 1 could not be used.
Therefore, in the present embodiment, the conversion information holding device 14 serving as the formatting means registers (saves or records) information characterizing a space (hereinafter referred to as keyframe information) in association with (tied to) the unique identifier of the position to which that information corresponds. As will be detailed later, keyframe information is, for example, information indicating an object for which no identical object exists around the associated space.
By using keyframe information, the autonomous mobile body 12 can estimate its own position. As described with reference to FIG. 4 of Embodiment 1, the autonomous mobile body 12 can detect objects in captured images using machine learning or the like. Accordingly, by detecting an object included in keyframe information with this object detection function, the detection point can be estimated (identified) as the position corresponding to the unique identifier to which that keyframe information is tied.
In the present embodiment, the operation of thus detecting an object included in keyframe information from a captured image or the like and identifying the position corresponding to the unique identifier tied to that keyframe information is referred to as "collating the keyframe information".
By collating keyframe information, the autonomous mobile body 12 can identify its own position even at a location where GPS radio waves are hard to receive, or even if it has no self-position detection function at all. The self-position is the position, identified by the autonomous mobile body 12, that corresponds to the unique identifier to which the keyframe information is tied. The autonomous mobile body 12 can therefore report that position information to the system control device that controls it.
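A minimal sketch of this collation flow follows, assuming a simple in-memory keyframe table shared from the format database. The data structures, labels, and coordinates are hypothetical illustrations, not the disclosed implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Keyframe:
        unique_identifier: str   # identifier of the divided space
        label: str               # e.g. "guide sign: XX intersection"
        position: tuple          # (latitude, longitude, height) of the space

    # Keyframe information shared from the format database, keyed by label
    # (illustrative entries only).
    keyframe_table = {
        "guide sign: XX intersection":
            Keyframe("id-0001", "guide sign: XX intersection", (35.0, 139.0, 10.0)),
        "signboard: YY post office":
            Keyframe("id-0002", "signboard: YY post office", (35.1, 139.0, 10.0)),
    }

    def collate_keyframe(detected_label: str) -> Optional[Keyframe]:
        """If the object detected in the camera image matches a registered
        keyframe, the detection point can be taken to correspond to the
        unique identifier tied to that keyframe."""
        return keyframe_table.get(detected_label)

    kf = collate_keyframe("guide sign: XX intersection")
    if kf is not None:
        print("self-position near space", kf.unique_identifier, "at", kf.position)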
In the present embodiment, keyframe information is one kind of spatial information. For example, as described in Embodiment 1, the format database 14-4 stores information (spatial information) on the state and time of objects existing within the range of the space 100 in chronological order from past to future.
The spatial information is also updated with information input by external systems and the like communicably connected to the conversion information holding device 14, and is shared with other external systems communicably connected to the conversion information holding device 14. As one kind of this spatial information, the conversion information holding device 14 records keyframe information of objects in the space.
Next, the procedure for storing keyframe information in the format database 14-4 will be described using the block diagram of FIG. 14 and the sequence diagram of FIG. 15. FIG. 14 is a functional block diagram related to the keyframe information storage processing according to Embodiment 2, showing the internal configurations of the map information extraction system 901, the sensor node 15, the autonomous mobile body 612, and the conversion information holding device 14 as functional blocks.
In FIG. 14, the autonomous mobile body 612 comprises a detection unit 612-1, a control unit 612-2, a direction control unit 612-3, an information storage unit (memory/HDD) 612-4, a network connection unit 612-5, and a drive unit 612-6.
Units 612-1 to 612-6 have the same configurations as units 12-1 to 12-6 of the autonomous mobile body 12 described in FIG. 4, so a detailed description is omitted. Like the autonomous mobile body 12 described in FIG. 5, the autonomous mobile body 612 has an object detection function and a distance measurement function.
Also in FIG. 14, the map information extraction system 901 comprises a map information management unit 901-1, a control unit 901-2, an information extraction unit 901-3, an information storage unit (memory/HDD) 901-4, and a network connection unit 901-5.
The map information management unit 901-1 holds map information of the three-dimensional space on the earth. The information extraction unit 901-3 has a function of extracting information from the map information held by the map information management unit 901-1, based on set conditions such as position information, place names, and facility names.
The control unit 901-2 governs map information extraction in the map information extraction system 901 and sets the conditions under which the information extraction unit 901-3 performs extraction. It also controls the function of storing information extracted under conditions that satisfy, for example, the requirements for keyframe information in the information storage unit (memory/HDD) 901-4, and of transmitting it to the conversion information holding device 14 through the network connection unit 901-5. The conversion information holding device 14 and the sensor node 15 in FIG. 14 have the same configurations as those described in FIG. 4, so their description is omitted.
With the configuration of FIG. 14, in Embodiment 2 the map information extraction system 901 extracts from the map information the information of objects satisfying the requirements for keyframe information, and provides (outputs) it to the conversion information holding device 14.
FIG. 15 is a sequence diagram explaining the keyframe information storage processing according to Embodiment 2, showing the processing executed in connection with the storage of keyframe information by the map information extraction system 901, the sensor node 15, the autonomous mobile body 612, and the conversion information holding device 14. The operation of each step of the sequence in FIG. 15 is performed by the computers in the control units of 14, 15, 612, and 901 executing computer programs stored in memory.
First, as an example of an external system communicably connected to the conversion information holding device 14, the method by which the map information extraction system 901 extracts keyframe information from map information and stores it in the format database 14-4 will be described with reference to FIG. 15.
In step S401, the map information extraction system 901 extracts the information of objects satisfying the requirements for keyframe information from a predetermined range of the map information it owns. The object information satisfying the keyframe requirements extracted here is information indicating objects for which no identical object exists in the surroundings.
Although the details of keyframe information are described later, examples include signs such as guide signs (road signs) bearing proper nouns such as "XX intersection" or "YY post office", as well as billboards.
Thereafter, in step S402, the map information extraction system 901 stores (saves) the extracted keyframe information in the information storage unit (memory/HDD) 901-4 in association with (tied to) the position information of the object.
Further, in step S403, the map information extraction system 901 transmits the linked keyframe information and position information to the conversion information holding device 14.
In step S404, the conversion information holding device 14 determines the unique identifier corresponding to the transmitted position information and stores the keyframe information corresponding to that unique identifier in the format database 14-4.
In this way, keyframe information corresponding to a unique identifier can be registered (stored) in the format database 14-4 from map information. Here, step S404 functions as a formatting step that formats and registers keyframe information, as spatial information, in association with the unique identifier.
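The registration of steps S403-S404 can be sketched as below. The quantisation scheme used to derive an identifier from a position and the dictionary standing in for the format database 14-4 are assumptions made only to illustrate the formatting step.

    # One possible identifier scheme: quantise a position to the origin of
    # its divided space and derive a reproducible key from that origin.
    VOXEL_PITCH = 1.0  # assumed edge length of a divided space, in metres

    def position_to_identifier(x: float, y: float, z: float) -> str:
        ix, iy, iz = (int(v // VOXEL_PITCH) for v in (x, y, z))
        return f"{ix}_{iy}_{iz}"

    format_database = {}  # stands in for format database 14-4

    def register_keyframe(position, keyframe_info) -> str:
        """Step S404: determine the unique identifier for the reported
        position and store the keyframe information under it. Multiple
        keyframes may accumulate on one divided space, as noted later."""
        uid = position_to_identifier(*position)
        entry = format_database.setdefault(uid, {"keyframes": []})
        entry["keyframes"].append(keyframe_info)
        return uid

    uid = register_keyframe((12.3, 45.6, 0.0),
                            {"label": "signboard: YY post office"})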
Next, as another means of storing keyframe information, a method of storing keyframe information based on information acquired by the sensor node 15 will be described. In FIG. 14, the sensor node 15 is an external system such as a video surveillance system, for example a roadside camera unit, and the information storage unit (memory/HDD) 15-3 holds the position information of the place where the sensor node itself is installed.
First, in step S411, the sensor node 15 uses the object detection function and distance measurement function of the detection unit 15-1 to detect information such as the image information, feature point information, and position information of objects existing in the area it can observe. Then, in step S412, the sensor node 15 saves the detected information in the information storage unit (memory/HDD) 15-3.
In step S413, the control unit 15-2 of the sensor node 15 extracts, from the image information and feature point information of these objects, the information of objects satisfying the requirements for keyframe information, as keyframe information. Then, in step S414, the control unit 15-2 of the sensor node 15 links the keyframe information with the position information of the objects satisfying those requirements and records them in the information storage unit (memory/HDD) 15-3.
An object satisfying the requirements for the keyframe information extracted here is, for example, an object for which no identical object exists in the surroundings, just as when extracting information from map information. Other objects satisfying the keyframe requirements are described later.
In step S415, the sensor node 15 transmits the mutually associated (linked) keyframe information and position information to the conversion information holding device 14. In step S416, the conversion information holding device 14 determines the unique identifier corresponding to the position information transmitted in step S415 and records the keyframe information corresponding to that unique identifier in the format database 14-4.
In this way, keyframe information corresponding to a unique identifier can be stored in the format database 14-4 from the information detected by the sensor node 15.
Next, as yet another means of storing keyframe information, a method of storing keyframe information based on information acquired by the autonomous mobile body will be described. First, in step S421, the autonomous mobile body 612 uses its object detection function and distance measurement function to detect information such as the image information, feature point information, and position information of objects existing in the area it can observe.
Then, in step S422, the autonomous mobile body 612 saves the information detected in step S421 in the information storage unit (memory/HDD) 612-4. Further, in step S423, the control unit 612-2 of the autonomous mobile body 612 extracts, from the image information and feature point information of these objects, the information of objects satisfying the requirements for keyframe information, as keyframe information.
Then, in step S424, the keyframe information and the position information of the objects satisfying the keyframe requirements are associated (linked) and stored in the control unit 612-2.
An object satisfying the requirements for the keyframe information extracted here is, for example, an object for which no identical object exists in the surroundings, just as when extracting information from map information. Another example of an object satisfying the keyframe requirements is "image data of the scenery at that point" photographed by the autonomous mobile body 612.
In step S425, the autonomous mobile body 612 transmits the mutually associated (linked) keyframe information and position information to the conversion information holding device 14. In step S426, the conversion information holding device 14 determines the unique identifier corresponding to the transmitted position information and stores the keyframe information corresponding to that unique identifier in the format database 14-4. In this way, keyframe information corresponding to a unique identifier can be stored in the format database 14-4 from the information detected by the autonomous mobile body 612.
In this way, keyframe information is stored in association with (tied to) a predetermined unique identifier in the format database 14-4. In the present embodiment, this keyframe information is updated at appropriate timings.
For example, keyframe information stored from map information is desirably updated when the map information is updated. Likewise, the keyframe information recorded by the sensor node 15 in step S414 is desirably updated when the information within the range detectable by the sensor node 15 changes and that change is judged to affect the keyframe information. That is, keyframe information is desirably updated when, for example, an object stored as keyframe information has moved.
Keyframe information stored by the autonomous mobile body 612 is desirably updated when, for example, new, different keyframe information has been tied to the same point and has replaced it. However, a plurality of pieces of keyframe information may be tied to one divided space. In that case, information indicating a new object satisfying the keyframe requirements is added to the keyframe information.
Next, the process by which the autonomous mobile body 612 moves using keyframe information on a route with low GPS reception strength will be described with reference to FIGS. 16 and 17. On a route with low GPS reception strength, the accuracy of the GPS-based self-position detection function the autonomous mobile body 612 inherently has is assumed to degrade, producing an error of, for example, about ±5 m. It is also assumed that even on such a route the direction detection function using the geomagnetic sensor is unaffected, and the autonomous mobile body 612 correctly detects the direction of the geomagnetic field.
FIG. 16 is a conceptual diagram of the format route information 900 of the autonomous mobile body 612 according to Embodiment 2. The format route information 900 of FIG. 16 is set so that the route passes through the divided spaces 902-1 and 902-2 in which keyframe information is stored.
Assume that the keyframe information stored in the divided space 902-1 is information indicating a guide sign reading "XX intersection", and the keyframe information stored in the divided space 902-2 is information indicating the signboard of "YY post office". Assume also that the autonomous mobile body 612 and the system control device 10 that controls it know the position of the autonomous mobile body 612 at the departure point.
FIG. 17 is a sequence diagram explaining the operation of the autonomous mobile body 612 moving using the format route information 900 of FIG. 16 and of the system control device 10 that controls the autonomous mobile body 612. The operation of each step of the sequence in FIG. 17 is performed by the computers in the control units of the system control device 10 and the autonomous mobile body 612 executing computer programs stored in memory.
First, in step S431, the system control device 10 instructs the autonomous mobile body 612 at the departure point shown in FIG. 16 to start moving to the right as drawn in FIG. 16.
Upon receiving the instruction, in step S432 the autonomous mobile body 612 starts moving to the right as drawn in FIG. 16, based on the information obtained from the direction detection function using the geomagnetic sensor.
Thereafter, in step S433, when the autonomous mobile body 612 reaches a position within about ±5 m of the divided space 902-1, the imaging means of the autonomous mobile body 612 detects the guide sign reading "XX intersection". Based on the detected guide sign, it collates the keyframe information stored in the divided space 902-1.
Further, in step S434, the autonomous mobile body 612 uses its distance measurement function to calculate the distance to the detected guide sign reading "XX intersection". Subsequently, in step S435, the autonomous mobile body 612 calculates its position relative to the divided space 902-1 based on the distance calculation result. Then, in step S436, based on this position information, it reflects its own position on the format route information 900 in the three-dimensional map of cyberspace that it has created.
Further, in step S437, the autonomous mobile body 612 also transmits this position information to the system control device 10. In step S438, the system control device 10 grasps the current position of the autonomous mobile body 612 on the format route information 900, based on the position information received from the autonomous mobile body 612. The processing of steps S433 to S438 is performed continuously until the autonomous mobile body 612 reaches the divided space 902-1.
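Steps S434-S436 amount to placing oneself relative to a landmark whose position is known from the keyframe information. A simplified 2D sketch follows, assuming the mobile body can measure both the range and the bearing to the sign; the function and the flat-plane geometry are illustrative assumptions, not the disclosed algorithm.

    import math

    def estimate_self_position(landmark_xy, distance, bearing_rad):
        """Given the known position of a keyframe landmark (e.g. the centre
        of divided space 902-1), the measured distance to it, and the
        bearing from the mobile body to the landmark (obtainable via the
        geomagnetic direction function), return the body's own position."""
        lx, ly = landmark_xy
        # The body sits 'distance' behind the landmark along the bearing.
        x = lx - distance * math.cos(bearing_rad)
        y = ly - distance * math.sin(bearing_rad)
        return (x, y)

    # Example: sign at (100, 50), measured 5 m away, due east of the body.
    pos = estimate_self_position((100.0, 50.0), 5.0, bearing_rad=0.0)
    # -> (95.0, 50.0): the body is 5 m west of the sign.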
When the autonomous mobile body 612 reaches the divided space 902-1, in step S439 the autonomous mobile body 612 turns left (moves upward as drawn in FIG. 16) based on the three-dimensional map of cyberspace created in step S436 and the direction detection function.
Next, in step S440, when the autonomous mobile body 612 reaches a position within about ±5 m of the divided space 902-2, the imaging means of the autonomous mobile body 612 detects the signboard of "YY post office". The autonomous mobile body 612 then collates the keyframe information stored in the divided space 902-2 based on the detected signboard.
Further, in step S441, the autonomous mobile body 612 uses its distance measurement function to calculate the distance to the detected "YY post office" signboard. Subsequently, in step S442, the autonomous mobile body 612 calculates its position relative to the divided space 902-2 from the distance calculation result. Then, in step S443, it reflects this in the three-dimensional map of cyberspace it has created, and in step S444 also transmits it to the system control device 10.
In step S445, the system control device 10 grasps the current position of the autonomous mobile body 612 on the format route information 900, based on the position of the autonomous mobile body 612 relative to the divided space 902-2 received from the autonomous mobile body 612.
The processing of steps S440 to S445 is likewise performed continuously until the autonomous mobile body 612 arrives at the point of the keyframe information.
When the autonomous mobile body 612 arrives at the point of the keyframe information, the autonomous mobile body 612 and the system control device 10 can ascertain that the autonomous mobile body 612 has reached the divided space 902-2 on the format route information 900, that is, has reached the destination. The system control device 10 then instructs the autonomous mobile body 612 to end its movement, and in step S447 the autonomous mobile body 612 ends its movement.
Next, what kinds of information can be used as keyframe information will be described. So far, "information indicating an object for which no identical object exists in the surroundings" has been given as an example of keyframe information. An object for which no identical object exists in the surroundings is, for example, an object involving a proper noun. Guide signs and billboards bearing proper nouns such as the "XX intersection" and "YY post office" of the examples can be assumed to exist only at those points within the area in question.
Therefore, if a proper noun can be detected, even an autonomous mobile body that has no self-position detection function such as GPS can accurately identify its own position at that point.
Note that objects such as statues (including sculptures), for example the statue of the faithful dog Hachiko in front of Shibuya Station, and distinctive structures such as Tokyo Tower, also satisfy the keyframe requirements. Even objects of similar shape scattered throughout three-dimensional space, such as plain "postboxes" or "traffic lights", can satisfy the keyframe requirements.
For example, in Japan a postbox is installed at a point where no other postbox exists within a radius of 400 m of it. Therefore, even if the GPS-based self-position detection function of the autonomous mobile body has an error of about ±100 m, the postbox will not be confused with another one.
Moreover, even an autonomous mobile body that has no self-position detection function at all may be able to avoid confusion with another postbox by making a composite judgment with other keyframe information detected before and after.
For example, suppose an autonomous mobile body moving at 4 km/h detects the keyframe information "postbox" three minutes after collating keyframe information that completely identifies a point (call it point A). The distance this mobile body can travel in three minutes is 200 m, so if there is only one postbox within 200 m of point A, the point has been completely identified.
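This composite judgment can be expressed numerically. The sketch below reproduces the 200 m worked example; the helper functions are hypothetical, with distances in metres and speed in km/h.

    import math

    def reachable_radius_m(speed_kmh, minutes):
        """Maximum distance travelled since the last fully identifying
        keyframe was collated; 4 km/h for 3 minutes gives 200 m."""
        return speed_kmh * 1000.0 / 60.0 * minutes

    def disambiguate_postbox(last_fix_xy, speed_kmh, minutes, postboxes):
        """Return the single postbox reachable from the last fix, or None
        if the observation remains ambiguous."""
        r = reachable_radius_m(speed_kmh, minutes)
        candidates = [p for p in postboxes
                      if math.hypot(p[0] - last_fix_xy[0],
                                    p[1] - last_fix_xy[1]) <= r]
        return candidates[0] if len(candidates) == 1 else None

    # With exactly one postbox within 200 m of point A, the point is identified.
    fix = disambiguate_postbox((0.0, 0.0), 4.0, 3.0,
                               [(150.0, 0.0), (600.0, 0.0)])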
Whether "existence information of an object that cannot completely identify its point" is usable depends on the self-position detection function of the user and on how it is used. The autonomous mobile body therefore needs to judge whether that keyframe information is usable by itself. For this reason, it is desirable that "existence information of an object that cannot completely identify its point" and "existence information of an object that can completely identify its point" be classified, even if they are the same keyframe information.
For example, class information is attached such that "existence information of an object that can completely identify its point" is class A keyframe information and "existence information of an object with no identical object within a range of ±10 km" is class B keyframe information.
Likewise, class information is attached such that "existence information of an object with no identical object within a range of ±200 m" is class C keyframe information. This makes it possible to reduce the burden of judging whether the information is usable.
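One way to attach and use such class information is sketched below. The radii follow the class A/B/C examples just given; the enum encoding and the usability rule comparing the body's own position error against the class radius are illustrative assumptions.

    import enum
    from typing import Optional

    class KeyframeClass(enum.Enum):
        A = None       # completely identifies the point
        B = 10_000.0   # no identical object within +/-10 km
        C = 200.0      # no identical object within +/-200 m

    def is_usable(kf_class: KeyframeClass,
                  own_position_error_m: Optional[float]) -> bool:
        """Assumed rule: a keyframe is safe to use when the body's own
        position uncertainty is smaller than the class ambiguity radius.
        own_position_error_m is None for bodies with no self-localisation,
        which must rely on composite judgment instead."""
        if kf_class is KeyframeClass.A:
            return True
        if own_position_error_m is None:
            return False
        return own_position_error_m < kf_class.value

    # A body with +/-100 m GPS error can use a class C postbox (radius 200 m):
    assert is_usable(KeyframeClass.C, 100.0)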
A further example of object information satisfying the keyframe requirements is "image data of the scenery at that point" captured by the autonomous mobile body 612 or the like. That is, keyframe information includes image data captured at the point corresponding to the space.
If this image data can be compared with the actual scenery in front of the body using the imaging means of the autonomous mobile body or the like, and it can be analyzed whether the matching ratio of feature points is above a certain level, then that keyframe information can be collated, that is, the point can be identified.
However, even for image data of the same point, if conditions such as the shooting direction, shooting time, and shooting equipment differ greatly, the matching ratio of feature points drops and collation becomes difficult. It is therefore desirable to store parameter data on the shooting conditions under which the image was captured in the format database 14-4, tied to the "image data of the scenery at that point". That is, the parameter data includes the shooting conditions under which the image data was captured in the target space.
The parameter data also includes the position information of the point from which the image was captured. If it is known from which point the image data was captured, the autonomous mobile body can, for example, move around toward that direction or rotate its camera, thereby improving the matching ratio of feature points and lowering the difficulty of collation.
Information on the lighting conditions under which the image data was captured can also serve as parameter data. That is, the shooting conditions include information on the brightness at the time of shooting. This is because the lighting conditions at the time of shooting greatly affect the image quality and color of the whole image, as well as the edges and shadows of objects in the image.
Here, the lighting conditions include at least one of, for example: the shooting date and time; how strong the daylight was at the time of shooting and from which direction it was shining; whether there was a light source such as a streetlight nearby; if there was a streetlight, from which direction it shone; and what its color (color temperature) and illuminance were.
By storing these conditions as parameter data, processing such as image correction can be applied, for example to an image captured by the autonomous mobile body, to bring it closer to the conditions of the parameter data, and the degree of matching can thereby be raised.
It is also desirable to include information on the shooting equipment as parameter data. For example, if the equipment ID of the autonomous mobile body 612 that took the picture, data on its optical characteristics, and setting information such as white balance can be prepared in advance, correction processing can be applied to the images captured by the autonomous mobile body.
The degree of matching can thereby be raised. To use these parameter data effectively, it is also desirable to prepare in advance a data table relating the name of the shooting equipment, its optical characteristics, and the setting information on image quality, gain, and white balance at the time of shooting.
If it is difficult to collect the above parameter data in an accurate form, the parameter data may be stored as estimated values. An estimated value is, for example, an estimate of the brightness. From among the brightness data collected as parameter data for keyframe information at other points, data whose conditions, such as shooting time and terrain, are close may be stored and used as simplified parameter data.
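The use of such brightness parameters can be pictured as normalising the live image toward the stored conditions before feature matching. The sketch below is a rough illustration only: a real system would use a proper feature detector and matcher, and the gain formula, threshold, and set-based feature comparison are all assumptions.

    import numpy as np

    def normalize_brightness(live, stored_mean):
        """Scale the live image so its mean brightness approaches the mean
        recorded in the keyframe's parameter data (possibly an estimate
        borrowed from a nearby point, as described above)."""
        gain = stored_mean / max(float(live.mean()), 1e-6)
        return np.clip(live.astype(np.float64) * gain, 0, 255).astype(np.uint8)

    def match_ratio(stored_features, live_features):
        """Fraction of stored feature points found again in the live image
        (features abstracted here as hashable descriptors in sets)."""
        if not stored_features:
            return 0.0
        return len(stored_features & live_features) / len(stored_features)

    def collate_image_keyframe(stored_features, live_features, threshold=0.6):
        # Collation succeeds when the matching ratio is above a set level.
        return match_ratio(stored_features, live_features) >= threshold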
A further example of object information satisfying the keyframe requirements is 3D data of the scenery at, or seen from, that point. If the autonomous mobile body has a function capable of determining the shape of objects, such as LiDAR (Light Detection And Ranging), it becomes possible to collate 3D data stored as keyframe information. That is, keyframe information includes 3D data at the point corresponding to the target space.
For example, by comparing the 3D data stored as keyframe information with the results of its own LiDAR ranging, and analyzing whether the matching ratio of the feature points is above a certain level, the body can collate that keyframe information and identify the point. The 3D data stored as keyframe information need not cover the entire point; it may be only the shapes of distinctive objects that are easy to analyze.
Note that LiDAR has the characteristic that its ranging performance depends on the surface condition of the target. Therefore, the degree of matching can be raised by storing, as parameter data tied to the 3D data, the surface information of the distinctively shaped objects within the 3D data serving as keyframe information.
That is, the formatting means can improve the degree of matching by allowing parameter data for collating keyframe information to be registered, as spatial information, tied to the keyframe information. The parameter data here includes the surface information of objects in the 3D data. If this parameter data is also difficult to collect in an accurate form, it may be an estimated value inferred from conditions such as the shape of the object and the terrain.
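Collating a 3D keyframe against a live LiDAR scan can likewise be reduced to a shape-agreement test. The brute-force nearest-neighbour check below is an illustrative stand-in for the registration methods (such as ICP) a real system would use; the tolerance and threshold values are assumptions.

    import math

    def inlier_ratio(stored_pts, scan_pts, tol=0.2):
        """Fraction of stored 3D keyframe points that find a scan point
        within 'tol' metres; a high ratio means the shapes agree."""
        def has_neighbour(p):
            return any(math.dist(p, q) <= tol for q in scan_pts)
        if not stored_pts:
            return 0.0
        return sum(1 for p in stored_pts if has_neighbour(p)) / len(stored_pts)

    def collate_3d_keyframe(stored_pts, scan_pts, threshold=0.7):
        # The stored data may be only the distinctive object's shape,
        # not the whole scene, which keeps this check cheap.
        return inlier_ratio(stored_pts, scan_pts) >= threshold

    ok = collate_3d_keyframe([(0, 0, 0), (1, 0, 0)],
                             [(0.05, 0, 0), (1.1, 0.05, 0)])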
The difficulty of collating keyframe information depends largely on the parameter data, but when the parameter data is an estimated value, it becomes difficult to accurately estimate in advance how difficult that keyframe information will be to collate.
A situation must be avoided in which an autonomous mobile body mistakenly judges that it can collate keyframe information that its own detection and analysis capabilities cannot actually handle, and then loses track of its current position after reaching that point. For this reason, the collation difficulty, that is, whether the body can collate the keyframe information by itself, should be estimated on the high side at the route-setting stage.
Image data captured under dark conditions is generally harder to recognize than image data captured under bright conditions. Therefore, when image data is used as keyframe information and the parameter data is stored as an estimate, and there are multiple candidates for that parameter data, it is desirable to adopt the parameter data of the brighter conditions as the estimated value.
When an accurate value can newly be collected for the parameter data, it is desirable to update the estimated value to this newly collected value.
Also, when 3D data is used as keyframe information and the parameter data is estimated and stored, if the estimation is difficult, the value of a low-reflectance object considered hard to range with LiDAR or the like is used, for example 10%, the reflectance of black paper. This allows the collation difficulty on the autonomous mobile body side to be estimated on the high side.
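Drawing the last few paragraphs together, a route planner might score collation difficulty conservatively whenever the parameter data is only an estimate. In the sketch below, the scoring formula and safety margin are invented for illustration; only the 10% black-paper default reflectance comes from the description above.

    DEFAULT_REFLECTANCE = 0.10  # black-paper reflectance: hard for LiDAR

    def collation_difficulty(reflectance=None, is_estimate=True):
        """Return a difficulty score in [0, 1]; higher means harder to
        collate. Unknown surfaces fall back to the pessimistic default so
        that difficulty is over- rather than under-estimated at the
        route-setting stage."""
        if reflectance is None:
            reflectance = DEFAULT_REFLECTANCE
        difficulty = 1.0 - reflectance               # low reflectance -> hard
        if is_estimate:
            difficulty = min(1.0, difficulty + 0.1)  # assumed safety margin
        return difficulty

    # A keyframe with unknown, estimated surface data is planned around
    # as if it were nearly uncollatable:
    assert collation_difficulty() >= 0.9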
According to the above embodiments, the safety of an autonomous mobile body control system using spatial information of a three-dimensional space defined by a predetermined coordinate system, for example latitude/longitude/height, can be raised efficiently.
In the above embodiments, an example in which the control system is applied to an autonomous mobile body has been described. However, the mobile body of the present embodiment is not limited to autonomous mobile bodies such as AGVs (Automated Guided Vehicles) and AMRs (Autonomous Mobile Robots).
For example, it may be any moving device, such as an automobile, train, ship, airplane, robot, or drone. Part of the control system of the present embodiment may or may not be mounted on such mobile bodies. The present embodiment can also be applied to the case where a mobile body is controlled remotely.
Although the present invention has been described in detail above based on its preferred embodiments, the present invention is not limited to the above embodiments; various modifications are possible based on the gist of the present invention, and these are not excluded from the scope of the present invention. The present invention also includes combinations of the plurality of embodiments described above.
The present invention may also be realized by supplying a system or apparatus with a storage medium on which the program code (control program) of software realizing the functions of the embodiments described above is recorded. It is likewise achieved by the computer (or CPU or MPU) of that system or apparatus reading out and executing the computer-readable program code stored in the storage medium. In that case, the program code itself read out from the storage medium realizes the functions of the embodiments described above, and the storage medium storing that program code constitutes the present invention.
(Cross-reference to related applications)
This application claims the benefit of Japanese Patent Application No. 2022-014166 filed on February 1, 2022, Japanese Patent Application No. 2022-110828 filed on July 8, 2022, and Japanese Patent Application No. 2023-002447 filed on January 11, 2023. The contents of the above Japanese patent applications are incorporated herein by reference in their entirety.



Claims (17)

1. A control system comprising formatting means for assigning a unique identifier to a three-dimensional space defined by a predetermined coordinate system, and formatting and storing spatial information on the state and time of an object existing in the space in association with the unique identifier,
wherein the formatting means registers keyframe information as the spatial information.
2. The control system according to claim 1, wherein the keyframe information includes existence information of an object for which no identical object exists around a point corresponding to the space.
3. The control system according to claim 1, wherein the keyframe information includes image data captured at a point corresponding to the space.
4. The control system according to claim 1, wherein the keyframe information includes 3D data at a point corresponding to the space.
5. The control system according to claim 1, wherein the formatting means is capable of registering, as the spatial information, parameter data for collating the keyframe information, linked to the keyframe information.
6. The control system according to claim 5, wherein the keyframe information includes image data captured at a point corresponding to the space, and the parameter data includes shooting conditions under which the image data was captured.
7. The control system according to claim 6, wherein the shooting conditions include position information of the point from which the shooting was performed.
8. The control system according to claim 6, wherein the shooting conditions include information on the brightness at the time of the shooting.
9. The control system according to claim 6, wherein the shooting conditions include information on the shooting equipment used for the shooting.
10. The control system according to claim 5, wherein the keyframe information includes 3D data of a point corresponding to the space, and the parameter data includes surface information of an object in the 3D data.
11. The control system according to claim 5, wherein the parameter data includes an estimated value.
12. The control system according to claim 1, further comprising route generation means for generating route information on a movement route of a mobile body, based on the spatial information acquired from the formatting means and type information of the mobile body.
13. The control system according to claim 1, wherein the formatting means formats information on an update interval of the spatial information in association with the unique identifier.
14. The control system according to claim 13, wherein the information on the update interval differs according to the type of object existing in the space.
15. The control system according to claim 14, wherein the update interval is shorter when the type of object existing in the space is a moving body than when the type of object existing in the space is not a moving body.
16. A control method comprising a formatting step of assigning a unique identifier to a three-dimensional space defined by a predetermined coordinate system, and formatting and storing spatial information on the state and time of an object existing in the space in association with the unique identifier,
wherein the formatting step is capable of registering keyframe information as the spatial information.
17. A storage medium storing a computer program for causing a computer to execute the steps of a control method, the control method comprising a formatting step of assigning a unique identifier to a three-dimensional space defined by a predetermined coordinate system, and formatting and storing spatial information on the state and time of an object existing in the space in association with the unique identifier,
wherein the formatting step is capable of registering keyframe information as the spatial information.
PCT/JP2023/002676 2022-02-01 2023-01-27 Control system, control method, and storage medium WO2023149376A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2022-014166 2022-02-01
JP2022014166 2022-02-01
JP2022-110828 2022-07-08
JP2022110828 2022-07-08
JP2023002447A JP2023112670A (en) 2022-02-01 2023-01-11 Control system, control method, and computer program
JP2023-002447 2023-01-11

Publications (1)

Publication Number Publication Date
WO2023149376A1 true WO2023149376A1 (en) 2023-08-10

Family

ID=87552335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/002676 WO2023149376A1 (en) 2022-02-01 2023-01-27 Control system, control method, and storage medium

Country Status (1)

Country Link
WO (1) WO2023149376A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013207418A (en) * 2012-03-27 2013-10-07 Zenrin Datacom Co Ltd Imaging system and imaging management server
JP2017207696A (en) * 2016-05-20 2017-11-24 アルパイン株式会社 Electronic system
WO2018083999A1 (en) * 2016-11-01 2018-05-11 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Display method and display device
JP2020178311A (en) * 2019-04-22 2020-10-29 キヤノン株式会社 Communication device, control method, and program
JP2021103091A (en) * 2019-12-24 2021-07-15 トヨタ自動車株式会社 Route retrieval system


Similar Documents

Publication Publication Date Title
US20230417558A1 (en) Using high definition maps for generating synthetic sensor data for autonomous vehicles
US11867515B2 (en) Using measure of constrainedness in high definition maps for localization of vehicles
US20200401823A1 (en) Lidar-based detection of traffic signs for navigation of autonomous vehicles
US11768863B2 (en) Map uncertainty and observation modeling
US11501104B2 (en) Method, apparatus, and system for providing image labeling for cross view alignment
US20210004021A1 (en) Generating training data for deep learning models for building high definition maps
CN111656135A (en) Positioning optimization based on high-definition map
EP2726819B1 (en) Methods and systems for obtaining navigation instructions
US20210001891A1 (en) Training data generation for dynamic objects using high definition map data
US11699246B2 (en) Systems and methods for validating drive pose refinement
EP3644013A1 (en) Method, apparatus, and system for location correction based on feature point correspondence
GB2413021A (en) Navigation aid
EP4202835A1 (en) Method, apparatus, and system for pole extraction from optical imagery
WO2023149376A1 (en) Control system, control method, and storage medium
JP2023112670A (en) Control system, control method, and computer program
KR20030084855A (en) Method for implementing Video GIS system of car navigation system having GPS receiver
WO2023149370A1 (en) Control system, control method, and storage medium
WO2023149349A1 (en) Control system, control method, and storage medium
WO2023149358A1 (en) Control system, control method, and storage medium
WO2023149264A1 (en) Control system, control method, and storage medium
JP2023112666A (en) Control system, control method, and computer program
WO2023149288A1 (en) Information processing device, information processing method, and storage medium
WO2023149353A1 (en) Control system, control method, and storage medium
WO2023149308A1 (en) Control system, control method, and storage medium
WO2023149319A1 (en) Autonomous mobile body control system, and control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23749692

Country of ref document: EP

Kind code of ref document: A1