WO2022071315A1 - Autonomous moving body control device, autonomous moving body control method, and program - Google Patents


Publication number
WO2022071315A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
area
boundary
distance
target
Prior art date
Application number
PCT/JP2021/035631
Other languages
French (fr)
Japanese (ja)
Inventor
Ryusuke Miyamoto
Miho Adachi
Original Assignee
Meiji University (学校法人明治大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meiji University (学校法人明治大学)
Priority to JP2022554008A (national phase publication JPWO2022071315A1)
Publication of WO2022071315A1

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions

Definitions

  • the present invention relates to an autonomous mobile control device, an autonomous mobile control method, and a program.
  • an object of the present invention is to provide a technique for improving the accuracy of movement of an autonomous moving body.
  • One aspect of the present invention is an autonomous moving body control device including an area boundary distance information acquisition unit that acquires area boundary distance information, which is information indicating the distance from an autonomous moving body to be controlled to each position on the boundary of a target area, the target area being the region where the autonomous moving body is located, and a target information acquisition unit that acquires target information, which is information indicating the relationship between the autonomous moving body and the target area, based on reference area information indicating candidates for the position, orientation, and shape of the target area and on the acquired area boundary distance information.
  • One aspect of the present invention is the above autonomous moving body control device, wherein the target information acquisition unit executes a process of determining a condition that minimizes an error, which is the difference between a map graph expressing the reference area information and a map graph expressing the area boundary distance information, and acquires the target information based on the condition obtained as the execution result.
  • One aspect of the present invention is the autonomous movement control device, wherein the reference area information changes based on at least a parameter representing the state of the target area as seen from the autonomous moving body.
  • One aspect of the present invention is the above autonomous moving body control device, wherein the reference area information contains information indicating the position of boundaries of the target area that are not photographed by a photographing device that moves together with the autonomous moving body and faces the same direction as the autonomous moving body.
  • One aspect of the present invention is the above autonomous moving body control device, wherein the target information acquisition unit acquires the target information by executing an error minimization process that determines a condition minimizing an error, which is the difference between the reference area information and the area boundary distance information, and the error minimization process uses reference area information expressed by one or more reference area expression functions, each of which is a function that represents the position, orientation, and shape of the target area and has one or a plurality of parameters.
  • One aspect of the present invention is the above-mentioned autonomous movement control device, in which the area boundary distance information acquisition unit acquires the area boundary distance information after deleting the information of the dynamic obstacle.
  • the target information acquisition unit acquires the target information by executing an error minimization process for determining a condition for minimizing an error, which is a difference between the reference area information and the area boundary distance information.
  • One aspect of the present invention is an autonomous moving body control method having a target information acquisition step of acquiring target information, which is information indicating the relationship between the autonomous moving body and the target area, based on reference area information indicating candidates for the position, orientation, and shape of the target area, which is the area where the autonomous moving body to be controlled is located, and on area boundary distance information, which is information indicating the distance from the autonomous moving body to each position on the boundary of the target area.
  • One aspect of the present invention is a program for operating a computer as the above-mentioned autonomous mobile control device.
  • An explanatory diagram illustrating an outline of the autonomous moving body control device 1 of the embodiment.
  • A figure showing an example of the processing target image in the embodiment.
  • FIG. 6 is an example of a result showing a bird's-eye view of the result of segmentation at an intersection and a result showing a visual field boundary distance in the embodiment.
  • A flowchart showing an example of the flow of the process by which the area boundary distance information acquisition unit 102 acquires the area boundary distance information in the embodiment.
  • A second figure showing an example of the auxiliary points used for formulating the second distance equation in the embodiment.
  • A figure showing an example of the shape of the T-shaped road in the embodiment.
  • A figure showing an example of the auxiliary points used for formulating the third distance equation in the embodiment.
  • A first figure showing an example of the bird's-eye view image in the embodiment and the distance from the center of the virtual LiDAR to the boundary in the VLS plane.
  • A first figure showing an example of the results estimated by the equations of the straight line, the right curve, and the left curve in the embodiment.
  • A second figure showing an example of the bird's-eye view image in the embodiment and the distance from the center of the virtual LiDAR to the boundary in the VLS plane.
  • A second figure showing an example of the results estimated by the equations of the straight line, the right curve, and the left curve in the embodiment.
  • A figure showing an example of the bird's-eye view image in the embodiment and the distance from the center of the virtual LiDAR to the boundary in the VLS plane.
  • A figure showing an example of the result of the error minimization process that ignores a dynamic obstacle, executed by the autonomous moving body control device 1 when a part of a subject in the modification is a dynamic obstacle.
  • A flowchart showing an example of the flow of the process executed by the autonomous moving body control device 1 when a part of a subject in the modification is a dynamic obstacle.
  • FIG. 3 shows the result of the first experiment conducted outdoors in the modified example.
  • A figure showing the experimental environment of the second experiment performed in the first room in the modification.
  • A first figure showing the result of the second experiment performed in the first room in the modification.
  • A figure showing the experimental environment of the second experiment performed in the second room in the modification.
  • A first figure showing the result of the second experiment performed in the second room in the modification.
  • A third figure showing the result of the second experiment performed in the second room in the modification.
  • A first figure showing the result of the second experiment performed in the second room in the modification.
  • A figure showing the experimental environment of the third experiment in the modification.
  • A first figure showing the result of the third experiment in the modification.
  • A second figure showing the result of the third experiment in the modification.
  • A first figure showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first to third experiments.
  • FIG. 1 is an explanatory diagram illustrating an outline of the autonomous mobile control device 1 of the embodiment.
  • the autonomous mobile body control device 1 controls the movement of the autonomous mobile body 9 to be controlled.
  • the autonomous moving body 9 is a moving body that moves autonomously, such as a robot or an automobile that moves autonomously.
  • the autonomous moving body control device 1 acquires area boundary distance information for each position of the autonomous moving body 9 and, based on the reference area information and the acquired area boundary distance information, acquires information indicating the relationship between the autonomous moving body 9 and the target area (hereinafter referred to as "target information").
  • the target area means an area in the space where the autonomous mobile body 9 is located.
  • A region means a region in space; the region is, for example, a road on which the autonomous moving body 9 can travel.
  • the road 900 is an example of the target area.
  • the arrow 901 indicates the direction in which the autonomous mobile body 9 advances.
  • the photographing device 902 is a photographing device that runs in parallel with the autonomous moving body 9 and faces the same direction as the autonomous moving body 9.
  • the photographing apparatus 902 is, for example, a 3D LiDAR (3-dimensional Light Detection and Ranging) sensor.
  • the photographing device 902 may be a monocular camera.
  • the photographing device 902 may be provided in the autonomous moving body 9, or may be provided in another moving body, such as a drone, that moves together with the autonomous moving body 9.
  • FIG. 1 shows, as an example, a case where the autonomous moving body 9 includes the photographing device 902. Since the photographing device 902 moves together with the autonomous moving body 9 and faces the direction of the autonomous moving body 9, a direction seen from the autonomous moving body 9 is a direction seen from the photographing device 902. Further, since the photographing device 902 moves together with the autonomous moving body 9 and is at the same position as, or at a fixed distance from, the autonomous moving body 9, the distance from the autonomous moving body 9 to an object can be treated as the distance from the photographing device 902 to that object.
  • the target information indicates, for example, where the autonomous mobile body 9 is located within the area width of the target area.
  • the target information indicates, for example, the relationship between the direction of the target area and the direction of the autonomous mobile body 9.
  • the target information indicates whether or not the target area is an intersection at the position of the autonomous mobile body 9, for example.
  • the target information indicates the direction of each region that intersects at the intersection, for example, when the target region is an intersection at the position of the autonomous mobile body 9.
  • the area boundary distance information is information indicating the distance from the autonomous moving body 9 to each position on the boundary of the target area (hereinafter referred to as "region boundary distance").
  • the area boundary distance information is, for example, information indicating, as the area boundary distance, the distance from the autonomous moving body 9 to the subject located at the boundary of the road in each direction seen from the autonomous moving body 9 (hereinafter referred to as the "line-of-sight direction"), centered on the autonomous moving body 9.
  • the area boundary distance information is information displayed as a graph showing the area boundary distance for each line-of-sight direction, for example.
  • the subject is, for example, a shield.
  • the area boundary distance information is, for example, the measurement result obtained by a 3D LiDAR when the photographing device 902 is a 3D LiDAR (3-dimensional Light Detection and Ranging) sensor. That is, when the photographing device 902 is a 3D LiDAR, the area boundary distance is the distance measured by the LiDAR signal, and the line-of-sight direction is the direction seen from the 3D LiDAR.
  • the area boundary distance information may be acquired by calculation using a distance image obtained in advance and a machine learning result learned in advance, based on an image taken by a monocular camera provided in the autonomous moving body 9, for example.
  • the distance image is, for example, the result of photographing a horizontal plane.
  • the distance image may be a result calculated based on the internal parameters of the camera as well as the actually observed data.
  • In this case, the area boundary distance information is the result obtained by the autonomous moving body control device 1 through calculation using the image captured by the photographing device 902, a distance image obtained in advance, and the result of machine learning trained in advance.
  • the result of machine learning trained in advance is used, specifically, in a process such as semantic segmentation that determines the pixels indicating the region in which the autonomous moving body 9 can move from the image captured by the monocular camera.
  • When the photographing device 902 is a monocular camera, the line-of-sight direction is the direction seen from the monocular camera.
  • Reference area information is information that represents candidates for the position, orientation, and shape of the target area.
  • the reference area information is specifically a function representing the position, orientation, and shape of the target area and having one or a plurality of parameters (hereinafter referred to as "reference area expression function"). That is, the reference area information is specifically a mathematical model that represents a candidate for the position, orientation, and shape of the target area.
  • the parameter defines the shape of the function representing the shape of the target area, and is a parameter related to the shape of the target area such as the width of the target area.
  • the reference area representation function is, for example, a function showing the correspondence between the area boundary distance and the line-of-sight angle, and is a function including one or a plurality of parameters.
  • the line-of-sight angle is an angle indicating each line-of-sight direction in a predetermined plane.
  • the parameters include at least a parameter representing the state of the target area as seen from the autonomous mobile body 9.
  • the autonomous moving body control device 1 changes the reference area expression function based at least on a parameter representing the state of the target area as seen from the autonomous moving body 9. That is, the reference area information changes based at least on the parameter representing the state of the target area as seen from the autonomous moving body 9.
  • the state of the target area seen from the autonomous moving body 9 is, for example, the inclination, width, and length of the target area seen from the autonomous moving body 9.
  • the reference area expression function does not have to represent only one target area without a branch, and may represent an area with a branch.
  • the reference area representation function may represent one road or a branched road including a branch.
  • the autonomous mobile control device 1 will be described by taking the case where the reference area representation function represents one path as an example.
  • the image 903 is a figure represented on a bird's-eye view as an example of the shape represented by the reference area information. In the figure shown in the image 903, there is a notch corresponding to the field of view of the photographing apparatus 902 in the vicinity of the lower apex of the parallelogram.
  • the process by which the autonomous moving body control device 1 acquires the target information based on the reference area information and the area boundary distance information (hereinafter referred to as the "extraction process") desirably includes an error minimization process and a target information acquisition process.
  • the target information acquisition process is executed after the error minimization process is executed.
  • the error minimization process is an optimization process that determines the values of the parameters that minimize the difference (hereinafter referred to as the "error") between the map graph expressing the reference area information and the map graph expressing the acquired area boundary distance information. That is, the error minimization process is a process of determining the condition for minimizing the error.
  • the value determined as the value of the parameter that gives the minimum value of the error by the error minimization process is referred to as a determined value.
  • the reference area expression function specified by the determined value is referred to as a determined function.
  • For the map graph, the general definition of the graph of a map may be used.
  • the target information acquisition process is a process of acquiring the target information based on the determined function.
  • the target information acquisition process is, for example, a process of acquiring the two peak positions indicated by the determined function as the road edges of the target area.
  • image 904 in FIG. 1 is a diagram showing an example of the result of error minimization processing. The details of the image 904 will be described with reference to FIG. 6 after explaining the virtual lidar processing (hereinafter referred to as “VLS processing”) and one of the specific examples of the error minimizing processing.
  • VLS processing (virtual LiDAR processing)
  • the details of the VLS processing will be described by exemplifying the case where the photographing apparatus 902 is a monocular camera provided in the autonomous mobile body 9.
  • the VLS processing is an example of a technique for obtaining the area boundary distance information by calculation using a distance image obtained in advance and the result of machine learning trained in advance, based on the image taken by the photographing device 902 (hereinafter referred to as the "processing target image").
  • the VLS processing is a technique used by the autonomous mobile control device 1.
  • the VLS process includes an area division process, a distance mapping process, a boundary pixel information acquisition process, and a distance measurement process in virtual space.
  • the area division process, the distance mapping process, and the boundary pixel information acquisition process are executed before the virtual space distance measurement process, and the area division process is executed before the boundary pixel information acquisition process.
  • the boundary pixel information acquisition process is executed after the area division process and the distance mapping process, and the virtual space distance measurement process is executed after that.
  • the area division process and the distance mapping process may be executed in either order or at the same time; likewise, the distance mapping process and the boundary pixel information acquisition process may be executed in either order or at the same time.
  • By executing the area division process, the autonomous moving body control device 1 obtains information for distinguishing each area reflected in the processing target image from the other areas (hereinafter referred to as "distinction information"). For example, by executing the area division process, information for distinguishing the target area reflected in the processing target image from the other areas is obtained.
  • FIG. 2 is a diagram showing an example of a processing target image in the embodiment.
  • the image of FIG. 2 shows a road from the lower right to the upper left of the image as a target area.
  • one of the road ends of the road, which is the target area, is the boundary with the lawn.
  • FIG. 3 is a diagram showing an example of the result of the area division processing in the embodiment. More specifically, FIG. 3 is a diagram showing an example of the result of segmentation for the image to be processed. In FIG. 3, the target area reflected in the processing target image is represented separately from other areas reflected in the processing target image.
  • the distance mapping process is a process in which the autonomous moving body control device 1 acquires the distance indicated by each pixel of the distance image, which is associated in advance with a pixel of the processing target image, as information indicating an attribute of the corresponding pixel of the processing target image.
  • information indicating the attributes of each pixel of the processing target image to which each pixel of the distance image corresponds is referred to as plane pixel distance information.
  • the plane pixel distance information is acquired under the assumption that all the images reflected in the image to be processed are on the horizontal plane.
  • the distance indicated by each pixel of the distance image is information indicating the distance from the photographing device 902 to the image captured by each pixel of the distance image. That is, since the distance image is an image of the scenery seen by the photographing device 902, the distance indicated by each pixel of the distance image is information indicating the distance from the autonomous moving body 9 to the image captured by each pixel of the distance image.
  • FIG. 4 is a diagram showing an example of a distance image in the embodiment.
  • the distance image of FIG. 4 is a distance image obtained by photographing the horizontal plane by the photographing apparatus 902 whose line of sight is parallel to the horizontal plane.
  • the lighter the color the shorter the distance to the photographing apparatus 902.
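  • As an illustration of the flat-ground assumption described above, the following sketch computes such a distance image from assumed camera parameters rather than from observed data. The pinhole intrinsics (fx, fy, cx, cy), the camera height h, and the tilt angle measured from the vertical downward direction are illustrative names rather than the patent's notation, and the routine is a minimal sketch rather than the implementation of the embodiment.

```python
import numpy as np

def flat_ground_distance_image(height_px, width_px, fx, fy, cx, cy, h, tilt_deg):
    """Distance image under the flat-ground assumption: for each pixel, the
    horizontal distance from the camera to the ground point it images.
    Pixels whose ray does not reach the ground plane are set to np.inf."""
    tilt = np.deg2rad(tilt_deg)                        # optical-axis angle from vertical-down
    u, v = np.meshgrid(np.arange(width_px), np.arange(height_px))
    # Per-pixel ray direction in the camera frame (x right, y down, z optical axis).
    d_cam = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones(u.shape)], axis=-1)
    # Camera axes expressed in a world frame (X forward, Y left, Z up); camera at height h.
    R = np.column_stack([
        [0.0, -1.0, 0.0],                              # camera x-axis in world coordinates
        [-np.cos(tilt), 0.0, -np.sin(tilt)],           # camera y-axis in world coordinates
        [np.sin(tilt), 0.0, -np.cos(tilt)],            # camera z-axis (optical axis)
    ])
    d_world = d_cam @ R.T                              # rotate rays into the world frame
    dz = d_world[..., 2]
    hits = dz < 0                                      # rays pointing below the horizon
    t = np.full(dz.shape, np.inf)
    t[hits] = h / -dz[hits]                            # scale factor to reach the plane Z = 0
    ground = t[..., None] * d_world
    dist = np.hypot(ground[..., 0], ground[..., 1])    # horizontal distance on the ground
    return np.where(hits, dist, np.inf)
```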
  • the boundary pixel information acquisition process is a process in which the autonomous moving body control device 1 acquires, based on the distinction information, information indicating the pixels that represent a boundary between regions among the pixels of the processing target image (hereinafter referred to as "boundary pixel information").
  • the virtual space distance measurement process is a process in which the autonomous moving body control device 1 acquires the distance from the photographing device 902 to the image captured by each pixel indicated by the boundary pixel information, based on the boundary pixel information and the plane pixel distance information. Therefore, the virtual space distance measurement process is a process in which the autonomous moving body control device 1 acquires the area boundary distance information based on the boundary pixel information and the plane pixel distance information.
  • the distance measurement process in the virtual space is a process in which the autonomous moving body control device 1 acquires the distance from the origin to the boundary of the area reflected in the image to be processed on the VLS plane by calculation, for example.
  • the VLS plane is a virtual space having a two-dimensional coordinate system centered on the position of the autonomous moving body 9 (that is, the position of the photographing device 902), in which the image reflected in each pixel indicated by the boundary pixel information is placed at a position away from the autonomous moving body 9 by the distance indicated by the plane pixel distance information.
  • the origin in the VLS plane is the position in the virtual space where the autonomous mobile body 9 is located.
  • the measurement in the VLS plane is a process, executed by the autonomous moving body control device 1 by calculation, of emitting a virtual LiDAR signal from the origin of the VLS plane, calculating the time until the scattered or reflected LiDAR signal returns to the origin, and converting the calculated time into a distance. Therefore, the result of the measurement in the VLS plane is an example of the area boundary distance information.
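  • A minimal sketch of this virtual-space distance measurement, assuming the boundary pixels have already been projected onto the VLS plane as 2-D points around the origin (the position of the autonomous moving body 9). Instead of simulating signal propagation times, it bins the points by line-of-sight angle and keeps the nearest point per beam; the function and argument names are illustrative.

```python
import numpy as np

def virtual_lidar_scan(boundary_xy, num_beams=360):
    """Emulate a 2-D LiDAR at the origin of the VLS plane.
    boundary_xy : (N, 2) array of boundary points relative to the origin.
    Returns (beam_angles, distances); beams with no boundary point stay np.inf."""
    angles = np.arctan2(boundary_xy[:, 1], boundary_xy[:, 0])     # direction of each point
    dists = np.hypot(boundary_xy[:, 0], boundary_xy[:, 1])        # distance of each point
    beam = ((angles + np.pi) / (2.0 * np.pi) * num_beams).astype(int) % num_beams
    scan = np.full(num_beams, np.inf)
    np.minimum.at(scan, beam, dists)                              # keep the nearest return per beam
    beam_angles = -np.pi + (np.arange(num_beams) + 0.5) * 2.0 * np.pi / num_beams
    return beam_angles, scan
```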
  • FIG. 5 is a diagram showing an example of the result of projecting the result of the segmentation shown in FIG. 3 on the VLS plane in the embodiment.
  • the horizontal and vertical axes of FIG. 5 indicate the axes of a Cartesian coordinate system.
  • the position where the value on the horizontal axis of FIG. 5 is 0 and the value on the vertical axis is 0 is the position of the photographing apparatus 902 (that is, the position of the autonomous moving body 9).
  • the boundary of the trapezoidal region A1 in FIG. 5 represents the boundary of the viewing angle of the photographing apparatus 902.
  • VLSroad is the distance from the origin in the VLS plane to the boundary of the reference area information.
  • VLSparallelogram is the distance from the origin in the VLS plane to the boundary of the approximate shape obtained when the shape of a target area such as a road is approximated by a predetermined shape such as a parallelogram (hereinafter referred to as the "approximate shape") without considering the boundary of the field of view of the photographing device 902.
  • More specifically, the approximate shape is a figure that approximates, with a predetermined shape and without considering the boundary of the field of view of the photographing device 902, the shape of the target area as expressed on a bird's-eye view.
  • the approximate shape is, for example, a parallelogram.
  • VLSmap represents the distance from the origin in the VLS plane to the boundary of the field of view of the photographing apparatus 902.
  • VLSmap is calculated based on the external parameters of the monocular camera, ae.param, the internal parameters of the monocular camera, ai.param, and the measurement range of the map.
  • the external parameters ae.param are, specifically, for example, the position and posture of the monocular camera.
  • VLSroad depends on the parameters of the target area: the slope (angle), the length (length), the left-side width (l.width), and the right-side width (r.width).
  • the internal parameters of the monocular camera ae. Param is expressed as ⁇
  • the external parameter of the monocular camera is expressed as A
  • the line-of-sight angle is expressed as ⁇ .
  • VLSroad is formulated by the following equations (2) to (4).
  • VLSroad formulated by the equations (2) to (4) is an example of reference area information.
  • In the error minimization process, the values of the parameters included in VLSroad are estimated.
  • the parameter values are estimated by optimization that minimizes the error using the method of least squares.
  • the result of the optimization by the least squares method is evaluated using the coefficient of determination.
  • x_i represents the line-of-sight angle in the VLS plane, and y represents the distance from the origin in the VLS plane at the line-of-sight angle x_i.
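  • A minimal sketch of this least-squares error minimization, assuming the reference area expression function is available as a Python callable ref_area_fn(angles, params) that returns the model distance for each line-of-sight angle. The callable, the initial guess p0 (e.g. slope, length, left width, right width), and the use of scipy.optimize.least_squares are illustrative choices, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_reference_area(ref_area_fn, angles, observed_dist, p0):
    """Estimate the reference-area parameters by least squares.
    ref_area_fn(angles, params) -> model distances per line-of-sight angle.
    p0 is the initial guess, e.g. (slope, length, left_width, right_width)."""
    valid = np.isfinite(observed_dist)                 # ignore beams with no return
    def residual(params):
        return ref_area_fn(angles[valid], params) - observed_dist[valid]
    result = least_squares(residual, p0)
    sse = float(np.sum(result.fun ** 2))               # sum of squared errors at the optimum
    sst = float(np.sum((observed_dist[valid] - observed_dist[valid].mean()) ** 2))
    r2 = 1.0 - sse / sst                               # coefficient of determination
    return result.x, r2, sse
```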
  • FIG. 6 shows an example of the result of error minimization processing using the equations (1) to (6).
  • FIG. 6 is a diagram showing an example of the result of the error minimization processing in the embodiment.
  • the horizontal axis of FIG. 6 indicates the line-of-sight angle.
  • the vertical axis of FIG. 6 represents a distance.
  • the unit is the unit of distance.
  • FIG. 6 shows an example of the area boundary distance information and an example of the result of the error minimization processing.
  • FIG. 6 shows that the result of the error minimization process matches the graph shown by the region boundary distance information with high accuracy.
  • the results in FIG. 6 show that there are peaks at a line-of-sight angle of 140 ° and a line-of-sight angle of 170 °.
  • the line-of-sight angles of 140° and 170° indicate the edges of regions such as the road shoulder, respectively. Therefore, the line-of-sight angle midway between the two peaks is the angle indicating the center of the target area.
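  • The target information acquisition step sketched below takes the two most prominent peaks of the fitted function as the road edges and their midpoint as the direction of the center of the target area; scipy.signal.find_peaks is used here only as a stand-in for whatever peak detection is actually employed.

```python
import numpy as np
from scipy.signal import find_peaks

def road_edges_and_center(beam_angles_deg, fitted_dist):
    """Take the two most prominent peaks of the fitted distance curve as the
    road edges and their midpoint as the direction of the road center."""
    peaks, _ = find_peaks(fitted_dist)
    if len(peaks) < 2:
        return None                                    # no usable pair of edges found
    top2 = peaks[np.argsort(fitted_dist[peaks])[-2:]]  # two highest peaks
    edge_angles = np.sort(beam_angles_deg[top2])
    center_angle = edge_angles.mean()
    return edge_angles, center_angle
```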
  • FIG. 7 is a diagram showing an example of the target information in the embodiment.
  • the vertical axis of FIG. 7 represents the distance from the autonomous mobile body 9.
  • the unit is the unit of distance.
  • the horizontal axis of FIG. 7 indicates each line-of-sight direction in the horizontal plane. More specifically, the horizontal axis of FIG. 7 indicates an angle indicating each line-of-sight direction in the horizontal plane (that is, a line-of-sight angle in the horizontal plane). Therefore, when the photographing apparatus 902 is a 3D LiDAR, the horizontal axis in FIG. 7 indicates the measurement angle. In FIG. 7, the traveling direction of the autonomous moving body 9 is 180 °.
  • FIG. 7 shows the boundary of the road under ideal conditions and the boundary of the field of view of the photographing device 902. More specifically, in FIG. 7, the "distance to the boundary of the field of view" is information indicating the boundary of the field of view, and shows the distance from the photographing device 902 to the boundary of the field of view of the photographing device 902 (hereinafter referred to as the "field-of-view boundary distance").
  • the "ideal conditions" refer to the information on each boundary shown in FIG. 8 described later (specifically, the "distance to the boundary of the field of view" in FIG. 8).
  • FIG. 7 is a diagram showing an example of the target information in the embodiment, and corresponds to the plan view shown in FIG.
  • In FIG. 7, the curve that does not consider the boundary of the field of view is an example of the result of displaying the boundary of an approximate shape such as a parallelogram as a graph with the line-of-sight angle on the horizontal axis and the distance on the vertical axis.
  • an example of target information is an example of the result of the target information acquisition process, and is an example of information indicating the direction of the target area.
  • FIG. 7 is also a diagram showing the center of the target area in the range of the line-of-sight angle of 120 ° to 260 °.
  • the curve that considers the boundary of the field of view is a function indicating the boundary of a shape in which a notch corresponding to the field of view of the photographing device 902 exists near the lower vertex of the approximate shape such as a parallelogram.
  • It shows the result of the error minimization process that uses this function as the reference area expression function.
  • the notch is an example of an out-of-field boundary.
  • the shape in which the notch corresponding to the field of view of the photographing apparatus 902 exists near the lower apex of the parallelogram is, for example, the shape shown in the image 903.
  • FIG. 8 is a diagram showing an example of an out-of-field boundary in the embodiment.
  • the position where the value on the horizontal axis is 0 and the value on the vertical axis is 0 is the position of the photographing apparatus 902 on the VLS plane.
  • FIG. 8 shows an example of the boundary of the field of view of the photographing apparatus 902 in the VLS plane.
  • FIG. 8 shows an example of the shape represented by the reference region representation function in the VLS plane.
  • FIG. 8 shows a parallelogram having a notch as an example of the shape represented by the reference region representation function.
  • the boundary represented by the alternate long and short dash line in FIG. 8 is an example of an out-of-field boundary. Note that FIG. 8 shows the VLS plane.
  • the horizontal axis of FIG. 8 represents the distance from the position of the photographing device 902 on the VLS plane.
  • the vertical axis of FIG. 8 also represents the distance from the position of the photographing device 902 on the VLS plane, in the direction orthogonal to the horizontal axis of FIG. 8.
  • the approximate shape is not limited to the parallelogram. That is, the shape represented by the reference area representation function is not limited to a parallelogram or a parallelogram having a notch. Other specific examples of the shape represented by the reference area representation function will be described later in consideration of the ease of explanation.
  • the process of acquiring the number of branches by the extraction process is, for example, the following process.
  • an error minimization process using one function expressed by using K reference area expression functions (K is an integer of 1 or more) is executed for each K.
  • the value of K having the smallest error is acquired as the number of reference regions (that is, the number of branches).
  • M in the equation (7) indicates the number of parameters used for estimation, and M is the product of the number of reference regions (K) and the number of parameters of the reference region expression function.
  • the process of acquiring the value of K is an example of the target information acquisition process.
  • When the acquired number of branches indicates a branching road, the position of the autonomous moving body 9 is the position of an intersection. In this way, the extraction process acquires information indicating whether or not the position of the autonomous moving body 9 is an intersection.
  • a specific example of the error in the process of acquiring the number of branches by the extraction process is, for example, the Bayesian Information Criterion (BIC) represented by the following equation (7).
  • L represents the likelihood and N represents the number of samples for which the region boundary distance was observed. Therefore, N is, for example, the number of points indicating the area boundary distance information in FIG.
  • the likelihood is evaluated using, for example, the SSE, which is the sum of the squared errors between the area boundary distance and the result of the optimization.
  • the reference area expression functions included in the compound function do not necessarily have to be the same; at least one of them may differ from the other reference area expression functions.
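  • Since equation (7) is not reproduced here, the sketch below uses one common Gaussian-error form of the BIC, N·ln(SSE/N) + M·ln(N), to choose the number of reference regions K. The helper fit_k_branches, which is assumed to return the SSE of the best fit with K reference area expression functions, and the exact BIC form are assumptions for illustration only.

```python
import numpy as np

def bic_gaussian(sse, n_samples, n_params):
    """BIC under a Gaussian error model: N * ln(SSE / N) + M * ln(N)."""
    return n_samples * np.log(sse / n_samples) + n_params * np.log(n_samples)

def select_branch_count(fit_k_branches, angles, observed_dist, k_max=3, params_per_region=4):
    """fit_k_branches(k, angles, observed_dist) -> SSE of the best fit obtained
    with k reference area expression functions (placeholder for the patent's fit)."""
    n = int(np.count_nonzero(np.isfinite(observed_dist)))
    scores = {}
    for k in range(1, k_max + 1):
        sse = fit_k_branches(k, angles, observed_dist)
        scores[k] = bic_gaussian(sse, n, k * params_per_region)
    return min(scores, key=scores.get)                 # smallest BIC = chosen number of regions
```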
  • FIG. 9 is a diagram showing an example of the result of the error minimization process when there is an intersection in the embodiment.
  • the horizontal axis of FIG. 9 indicates the line-of-sight angle.
  • the vertical axis of FIG. 9 represents the distance.
  • the unit is the unit of distance.
  • FIG. 9 shows the result of the error minimization process executed under the condition that K is 1, and the result of the error minimization process executed under the condition that K is 2. Further, FIG. 9 shows the result of actually surveying the end of the road as a data point of the survey result.
  • FIG. 10 is a diagram showing an example of the result of intersection segmentation in the embodiment.
  • FIG. 10 shows that it branches into two roads.
  • FIG. 11 is an example of a result showing a bird's-eye view of the result of segmentation at an intersection and a result showing a visual field boundary distance in the embodiment.
  • the upper figure of FIG. 11 shows an example of the result of expressing the segmentation result shown in FIG. 10 on a bird's-eye view.
  • the lower figure of FIG. 11 shows the area boundary distance information obtained from the upper figure of FIG. 11.
  • FIG. 12 is a diagram showing an example of the functional configuration of the autonomous mobile control device 1 of the embodiment.
  • the autonomous moving body control device 1 includes a control unit 10 having a processor 91, such as a CPU (Central Processing Unit), and a memory 92 connected by a bus, and executes a program.
  • the autonomous mobile control device 1 functions as a device including a control unit 10, an input unit 11, a communication unit 12, a storage unit 13, and an output unit 14 by executing a program. More specifically, the processor 91 reads out the program stored in the storage unit 13, and stores the read program in the memory 92. By executing the program stored in the memory 92 by the processor 91, the autonomous mobile control device 1 functions as a device including a control unit 10, an input unit 11, a communication unit 12, a storage unit 13, and an output unit 14. ..
  • the control unit 10 executes, for example, an extraction process.
  • the control unit 10 controls, for example, the operation of various functional units included in the autonomous mobile body control device 1 and the operation of the autonomous mobile body 9.
  • the control unit 10 controls the operation of the communication unit 12, for example, and acquires the image to be processed via the communication unit 12.
  • the control unit 10 acquires area boundary distance information based on, for example, the acquired image to be processed.
  • the control unit 10 may acquire the area boundary distance information instead of the image to be processed.
  • the control unit 10 controls the operation of the autonomous mobile body 9 via, for example, the communication unit 12.
  • the control unit 10 may acquire information indicating the position and orientation of the autonomous mobile body 9 (hereinafter referred to as “progress state information”) via, for example, the communication unit 12.
  • the control unit 10 may estimate the position and orientation of the autonomous mobile body 9 based on, for example, the history of control of the operation of the autonomous mobile body 9.
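  • Where the position and orientation are estimated from the control history, one possible dead-reckoning sketch for a differential-drive (unicycle) model is shown below; the command log format (linear velocity, angular velocity, duration) is an assumption and not the patent's definition of the progress state information.

```python
import math

def dead_reckon(commands, x=0.0, y=0.0, theta=0.0):
    """Integrate a log of (linear velocity v, angular velocity omega, duration dt)
    commands to estimate the current pose (x, y, heading theta)."""
    for v, omega, dt in commands:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta = (theta + omega * dt + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta
```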
  • the input unit 11 includes an input device such as a mouse, a keyboard, and a touch panel.
  • the input unit 11 may be configured as an interface for connecting these input devices to its own device.
  • the input unit 11 receives input of various information to its own device.
  • the communication unit 12 includes a communication interface for connecting the own device to an external device.
  • the communication unit 12 communicates with the autonomous mobile body 9 via wire or wireless.
  • the communication unit 12 receives, for example, the progress state information of the autonomous mobile body 9 by communicating with the autonomous mobile body 9.
  • the communication unit 12 transmits a control signal for controlling the autonomous mobile body 9 to the autonomous mobile body 9 by communicating with the autonomous mobile body 9.
  • the communication unit 12 communicates with the source of the image to be processed via wired or wireless.
  • the communication unit 12 acquires the image to be processed by communicating with the source of the image to be processed.
  • the source of the image to be processed may be the autonomous moving body 9 itself, or may be another device such as a drone that moves together with the autonomous moving body 9.
  • the storage unit 13 is configured by using a non-transitory computer-readable storage medium device such as a magnetic hard disk device or a semiconductor storage device.
  • the storage unit 13 stores various information about the autonomous mobile control device 1.
  • the storage unit 13 stores, for example, the history of control of the autonomous mobile body 9 by the control unit 10.
  • the storage unit 13 stores, for example, a history of the progress state information.
  • the storage unit 13 stores the reference area information in advance.
  • the storage unit 13 stores a distance image in advance.
  • the output unit 14 outputs various information.
  • the output unit 14 includes display devices such as a CRT (Cathode Ray Tube) display, a liquid crystal display, and an organic EL (Electro-Luminescence) display.
  • the output unit 14 may be configured as an interface for connecting these display devices to its own device.
  • the output unit 14 outputs, for example, the information input to the input unit 11 or the communication unit 12.
  • the output unit 14 outputs, for example, the execution result of the extraction process by the control unit 10.
  • FIG. 13 is a diagram showing an example of the functional configuration of the control unit 10 in the embodiment.
  • the control unit 10 includes a progress state information acquisition unit 101, a region boundary distance information acquisition unit 102, a reference area information acquisition unit 103, a target information acquisition unit 104, and a control signal generation unit 105.
  • the progress state information acquisition unit 101 acquires the progress state information of the autonomous mobile body 9.
  • the progress state information acquisition unit 101 may acquire the progress state information by calculation from the history of control of the operation of the autonomous mobile body 9, or may acquire it from the autonomous mobile body 9 via the communication unit 12.
  • the area boundary distance information acquisition unit 102 acquires the area boundary distance information.
  • When the photographing device 902 is a device capable of acquiring the area boundary distance information, such as a 3D LiDAR, the area boundary distance information acquisition unit 102 acquires the area boundary distance information from the source of the area boundary distance information, such as the photographing device 902, via the communication unit 12.
  • When the photographing device 902 is a device that acquires a processing target image, such as a monocular camera, the area boundary distance information acquisition unit 102 acquires the processing target image via the communication unit 12 and acquires the area boundary distance information by executing the VLS processing on the acquired processing target image.
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13.
  • the target information acquisition unit 104 executes the extraction process and acquires the target information.
  • the control signal generation unit 105 generates a control signal that controls the operation of the autonomous mobile body 9 based on the target information.
  • the control signal generation unit 105 transmits the generated control signal to the autonomous mobile body 9 via the communication unit 12.
  • FIG. 14 is a diagram showing an example of a flow of processing executed by the autonomous mobile control device 1 of the embodiment. The process of FIG. 14 is repeatedly executed at a predetermined timing.
  • Progress status information acquisition unit 101 acquires progress status information (step S101).
  • the area boundary distance information acquisition unit 102 acquires the area boundary distance information (step S102).
  • the area boundary distance information acquisition unit 102 may acquire the area boundary distance information from the source of the area boundary distance information via the communication unit 12, or acquires and acquires the processing target image via the communication unit 12. Area boundary distance information may be acquired by executing VLS processing on the image to be processed.
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13 (step S103). Next, the target information acquisition unit 104 executes an extraction process using the reference area information and the area boundary distance information to acquire the target information (step S104). Next, the control signal generation unit 105 generates a control signal for controlling the operation of the autonomous mobile body 9 based on the target information, and controls the operation of the autonomous mobile body 9 by the generated control signal (step S105).
  • step S101 may be executed at any timing before the process of step S105 is executed. Further, steps S102 and S103 do not necessarily have to be executed in this order, and may be executed in any order as long as they are executed before the execution of step S104.
  • the autonomous mobile control device 1 executes the step of acquiring the area boundary distance information. Further, the autonomous mobile body control device 1 executes a step of acquiring target information which is information indicating the relationship between the autonomous mobile body 9 and the target area based on the reference area information and the area boundary distance information.
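  • A high-level sketch of one pass through steps S101 to S105; every callable passed in is a placeholder standing in for the corresponding functional unit of the control unit 10, so this is only an illustrative skeleton of the loop in FIG. 14.

```python
def control_step(get_progress_state, get_scan, load_reference_models,
                 extract_target_info, plan_command, send_command):
    """One pass of steps S101 to S105; every callable is injected, so this is a
    generic skeleton rather than the device's actual code."""
    progress = get_progress_state()                           # S101: progress state information
    angles, scan = get_scan()                                 # S102: area boundary distance info
    reference_models = load_reference_models()                # S103: reference area information
    target_info = extract_target_info(reference_models, angles, scan)   # S104: extraction process
    send_command(plan_command(target_info, progress))         # S105: control the moving body
```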
  • FIG. 15 is a flowchart showing an example of the flow of the process in which the area boundary distance information acquisition unit 102 in the embodiment acquires the area boundary distance information. More specifically, FIG. 15 is a flowchart showing an example of a flow of processing in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information when the photographing device 902 is a monocular camera.
  • the area boundary distance information acquisition unit 102 acquires the image to be processed via the communication unit 12 (step S201). Next, the area boundary distance information acquisition unit 102 executes the area division process (step S202). Next, the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S203). Next, the area boundary distance information acquisition unit 102 executes the distance mapping process (step S204). Next, the area boundary distance information acquisition unit 102 executes the distance measurement process in the virtual space (step S205). If the processes of step S203 and step S204 are executed after the execution of step S202 and before the execution of step S205, they may be executed at any timing. Therefore, for example, step S204 may be executed after step S202, and then step S203 may be executed. Further, for example, step S203 and step S204 may be executed at the same timing.
  • When the area boundary distance information is input from an external device via the communication unit 12, the process by which the area boundary distance information acquisition unit 102 acquires the area boundary distance information is a process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information input to the communication unit 12.
  • FIG. 16 is an explanatory diagram illustrating an example of the relationship between the moving body main body 905 of the autonomous moving body 9, the photographing device 902, and the horizontal plane in the embodiment.
  • the mobile body 905 includes wheels for moving the autonomous mobile body 9, a movable portion, and a control unit for controlling the movement.
  • the autonomous mobile body 9 includes a mobile body main body 905 and a photographing device 902.
  • the photographing apparatus 902 is located above the moving body main body 905, and is located at a height h from the horizontal plane on which the moving body main body 905 is located.
  • the tilt angle means the angle (angle with the horizontal plane) forming the optical axis of the camera with the vertical downward direction.
  • the dashed line means the edge of the field of view.
  • parameters in the VLS plane generated based on the image to be processed are shown. These parameters are examples of parameters.
  • the parameters appearing in the description of the specific example of the reference area representation function are examples of parameters.
  • FIG. 17 is a diagram showing an example of parameters used in the formula representing the distance in the embodiment.
  • the y-axis represents the front direction of the camera.
  • the x-axis is the direction perpendicular to the y-axis in the VLS plane.
  • the shape of the boundary of the field of view of the camera on the VLS plane is a trapezoidal shape having the position of the camera as the base and the distance m (map_height) reflected on the VLS plane from the image to be processed as the height.
  • the tilt of the left and right sides is set by the internal parameters of the camera.
  • the center of the virtual LiDAR is located in the VLS plane at the position whose y-coordinate is at distance y0 from the origin and whose x-coordinate is at distance x0 from the origin.
  • the virtual LiDAR is the source of the virtual LiDAR signal in the VLS plane.
  • the first specific example is a specific example of an equation (hereinafter referred to as "first distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the shape of the road is a straight line.
  • the "first distance equation" is an equation expressing the distance from the center of the virtual LiDAR to the boundary in the VLS plane when the shape of the road is a straight line.
  • the first distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • the shape of the road is formulated using the following parameters: the slope of the road (angle), the length of the road (length), the width on the left side of the road (l.width), and the width on the right side of the road (r.width). That is, the shape of the road is expressed by the above-mentioned equation (3).
  • FIG. 18 is an explanatory diagram illustrating parameters used for formulating the shape of a road whose shape is straight in the embodiment.
  • the length of the road is the distance, in the y-axis direction, from the position of the camera to the end of the road.
  • separate parameters are set for the road width on the right side and on the left side of the virtual LiDAR, so that the position of the autonomous moving body 9 on the road can be estimated.
  • the formula expressing the shape of a straight road is, for example, the following formula (8).
  • the first distance equation is formulated using equation (8).
  • the idea behind the derivation is as follows: assume a scene in which a signal is transmitted from the center of the virtual LiDAR toward the end of the road, follow the process until the transmitted signal reaches the end of the road, and formulate the distance from the center of the virtual LiDAR to the intersection of the signal and the end of the road.
  • FIG. 19 is a diagram showing an example of the propagation of a signal transmitted from the center of the virtual lidar when the shape of the road in the embodiment is a straight line.
  • FIG. 19 shows an example of the order in which the measurement by the virtual LiDAR signal is performed. Specifically, the measurement in FIG. 19 is performed clockwise over 360 degrees at equal intervals, with the −y axis direction (that is, the negative direction of the y-axis) as 0 degrees. The interval is arbitrary. Points P1, P2, P3, and P4 in FIG. 19 indicate the vertices of the approximate shape.
  • angles th1, th2, th3, and th4 are the angles formed by the −y axis and the lines connecting the center of the virtual LiDAR with the points P1, P2, P3, and P4 in FIG. 19, respectively.
  • the unit of the angles th and of angle is the radian.
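  • Because equations (8) and following are not reproduced here, the sketch below obtains the same quantity purely geometrically: it builds the parallelogram from the parameters described above (slope angle, length, left width l_width, right width r_width, with the origin assumed to lie on the near edge) and intersects each virtual-LiDAR ray with the polygon edges. The beam angle convention (counter-clockwise from the +x axis) also differs from the clockwise-from-−y convention of FIG. 19, so treat this as an illustrative alternative formulation rather than the patent's equations.

```python
import numpy as np

def ray_segment_distance(direction, p, q):
    """Distance along a ray from the origin (unit `direction`) to segment p-q, or np.inf."""
    d = np.asarray(direction, float)
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    e = q - p
    denom = -d[0] * e[1] + d[1] * e[0]
    if abs(denom) < 1e-12:
        return np.inf                                    # ray parallel to the edge
    t = (-p[0] * e[1] + p[1] * e[0]) / denom             # distance along the ray
    s = (d[0] * p[1] - d[1] * p[0]) / denom              # position along the segment
    return t if (t >= 0.0 and 0.0 <= s <= 1.0) else np.inf

def straight_road_distances(angle, length, l_width, r_width, beam_angles):
    """Distance from the virtual-LiDAR center (origin) to the boundary of a straight
    road modelled as a parallelogram, for each beam angle in radians."""
    axis = np.array([np.sin(angle), np.cos(angle)])      # road direction, `angle` from the +y axis
    left = np.array([-np.cos(angle), np.sin(angle)])     # unit normal pointing to the left
    p1, p2 = l_width * left, -r_width * left             # near corners (origin on the near edge)
    p3, p4 = p2 + length * axis, p1 + length * axis      # far corners
    edges = [(p1, p2), (p2, p3), (p3, p4), (p4, p1)]
    dists = []
    for th in beam_angles:
        d = np.array([np.cos(th), np.sin(th)])           # beam direction, CCW from the +x axis
        dists.append(min(ray_segment_distance(d, a, b) for a, b in edges))
    return np.array(dists)
```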
  • the second specific example is a specific example of an equation (hereinafter referred to as "second distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the shape of the road is a curve.
  • the second distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • the shape of the road is formulated using the following parameters: the slope of the straight section leading to the curve (angle), the length of the road to the entrance of the curve (D1), the distance to the left side of the road (l.width), the distance to the right side of the road (r.width), and the road width at the end of the curve (width2). The extraction process estimates the values of these parameters.
  • the curve is formulated as an elliptical shape.
  • the horizontal width of the ellipse is formulated using the road width in front (width), and the vertical width of the ellipse is formulated using the road width at the end of the curve (width2). That is, the shape of the road is expressed by the following equation (21).
  • FIG. 20 is an explanatory diagram illustrating parameters used for formulating the shape of a road whose shape is a curve in the embodiment.
  • a process of rotating the entire shape with the center of the virtual LiDAR as the origin may be executed.
  • Equation (22) represents a right curve
  • equation (23) represents a left curve
  • FIG. 21 is a first diagram showing an example of auxiliary points used for formulating the second distance equation in the embodiment.
  • FIG. 21 shows the shape of the left curve.
  • the points P1, P2, and P3 in FIG. 21 are auxiliary points used for formulating the second distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • FIG. 22 is a second diagram showing an example of auxiliary points used for formulating the second distance equation in the embodiment.
  • FIG. 22 shows the shape of the right curve.
  • the points P1, P2, and P3 in FIG. 22 are auxiliary points used for formulating the second distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • the model (the set of mathematical formulas representing the shape) is switched before and after the angles th1, th2, and th3, and the curve is expressed using three formulas representing the shapes of a straight line, the ellipse, and a straight line orthogonal to the first straight line.
  • the set of the following equations (24) to (40) is an example of the second distance equation.
  • the unit of angle th and angle is radian.
  • Equation (36) is an equation that holds when the road ahead of the curve runs straight with respect to the first road. Further, when the center of the ellipse is (xc, yc) and the intersection of the virtual LiDAR signal and the ellipse is defined as (x_e, y_e), x_e and y_e for each angle th can be obtained from the simultaneous equations of equation (37). As a result, the value on the left side of equation (38) is acquired. Equations (39) and (40) represent operations performed in the extraction process when angle ≠ 0; more specifically, they represent operations executed after the calculation of VLScurve.th and VLSsecond_road.th for all th.
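  • Equation (37) amounts to intersecting a virtual-LiDAR ray with the ellipse that models the curve; a minimal sketch of that single step, solving the resulting quadratic for the distance along the ray, is shown below with the ellipse center (xc, yc) and semi-axes (a, b) treated as given. This is an illustration of the geometric idea, not the patent's equations (21) to (40).

```python
import math

def ray_ellipse_distance(theta, xc, yc, a, b):
    """Distance from the origin along a ray at angle `theta` (radians, CCW from +x)
    to the ellipse ((x - xc)/a)**2 + ((y - yc)/b)**2 = 1, or math.inf if it is missed."""
    dx, dy = math.cos(theta), math.sin(theta)
    # Substitute x = t*dx, y = t*dy into the ellipse equation -> A*t**2 + B*t + C = 0.
    A = (dx / a) ** 2 + (dy / b) ** 2
    B = -2.0 * (dx * xc / a ** 2 + dy * yc / b ** 2)
    C = (xc / a) ** 2 + (yc / b) ** 2 - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return math.inf
    roots = [(-B - math.sqrt(disc)) / (2.0 * A), (-B + math.sqrt(disc)) / (2.0 * A)]
    hits = [t for t in roots if t >= 0.0]
    return min(hits) if hits else math.inf
```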
  • the third specific example is a specific example of an equation (hereinafter referred to as "third distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the road is an intersection.
  • the third distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • the shape of the T-shaped road is expressed by an equation whose parameter is the distance "D1" to the intersection, in addition to the parameters used to express the curve. Therefore, the shape of the T-shaped road is expressed by the following equation (41).
  • FIG. 23 is a diagram showing an example of the shape of the T-shaped road in the embodiment.
  • FIG. 23 shows a road that goes from the bottom of the screen to the top (that is, in the positive direction of the y-axis), has an intersection, and branches into a road that goes to the left and a road that goes up.
  • the following formula (42) represents the shape of the right T-shaped road.
  • the following formula (43) represents the shape of the left T-shaped road.
  • FIG. 24 is a diagram showing an example of auxiliary points used for formulating the third distance equation in the embodiment.
  • FIG. 24 shows the shape of a left T-shaped road.
  • the points P1, P2, and P3 in FIG. 24 are auxiliary points used for formulating the third distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • Formulas are switched at the angles th1, th2, th3, and th4, and the shape of the intersection is expressed using two formulas: a straight line and a straight line orthogonal to the first straight line.
  • the following formula (45) is an example of the third distance formula at an intersection.
  • the following equation (46) is an example of the third distance equation for a right T-shaped road.
  • the following formula (47) is an example of the third distance formula for a left T-shaped road.
  • the following formula (48) is an example of the third distance formula at a T-junction. An illustrative sketch of assembling such intersection boundaries is given below.
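Equations (41) to (48) are likewise not reproduced here; the sketch below shows how a left T-shaped boundary can be assembled from axis-aligned segments and converted into the same distance-per-line-of-sight-angle profile (a crossroad would simply add a matching opening on the right edge). All names, sampling extents, and the angular-binning conversion are illustrative assumptions rather than the patent's closed-form equations.

```python
import numpy as np

def left_T_boundary_points(D1, l_width, r_width, w_side,
                           ahead=15.0, side=10.0, n=600):
    """Boundary samples for a left T-shaped road: the main road runs along +y and a
    side road of width w_side branches to the left at distance D1 from the sensor,
    which sits at the origin.  ahead/side are just sampling extents of this sketch."""
    seg = lambda p0, p1: np.linspace(p0, p1, n)          # dense samples on one segment
    segments = [
        seg([ r_width, 0.0],                [ r_width, ahead]),            # right edge (no opening)
        seg([-l_width, 0.0],                [-l_width, D1]),               # left edge below the branch
        seg([-l_width, D1 + w_side],        [-l_width, ahead]),            # left edge above the branch
        seg([-l_width - side, D1],          [-l_width, D1]),               # branch, near edge
        seg([-l_width - side, D1 + w_side], [-l_width, D1 + w_side]),      # branch, far edge
    ]
    return np.vstack(segments)

def virtual_lidar_distances(points, n_beams=181, fov=np.pi):
    """Nearest boundary sample per line-of-sight angle (0 = straight ahead, +y)."""
    theta = np.arctan2(points[:, 0], points[:, 1])
    radius = np.hypot(points[:, 0], points[:, 1])
    edges = np.linspace(-fov / 2.0, fov / 2.0, n_beams + 1)
    dist = np.full(n_beams, np.inf)
    idx = np.digitize(theta, edges) - 1
    ok = (idx >= 0) & (idx < n_beams)
    np.minimum.at(dist, idx[ok], radius[ok])
    return dist

profile = virtual_lidar_distances(left_T_boundary_points(5.0, 2.0, 2.0, 3.0))
```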
  • the classification of a straight line and a curve is a process of classifying the road with respect to the observation result obtained by the Visual Lidar, specifically by using the equations of a straight line, a right curve, and a left curve.
  • the classification of a straight line and a curve is, more specifically, a process of determining whether the target road is a straight line or a curve.
  • FIGS. 25 and 27 show an example of the bird's-eye view image and of the distance from the center of the Virtual Lidar to the boundary in the VLS plane.
  • FIGS. 26 and 28 show the results estimated by the equations of the straight line, the right curve, and the left curve. A sketch of this model-selection procedure is given below.
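One way to realize this classification, sketched below under stated assumptions, is to fit every candidate reference region expression function to the observed distance profile and keep the hypothesis with the smallest fitting cost. The two model functions used here (an open straight corridor and a dead-end corridor) are deliberately simple stand-ins for the patent's straight/right-curve/left-curve equations; scipy's least_squares provides the error minimization.

```python
import numpy as np
from scipy.optimize import least_squares

def straight_road(theta, phi, w_l, w_r, d_max=30.0):
    """Toy stand-in for a straight-road distance equation: two parallel boundary
    lines, road axis tilted by phi, widths w_l / w_r, capped at the sensor range."""
    s = np.sin(theta - phi)
    d = np.where(s >= 0.0, w_r / np.clip(s, 1e-6, None),
                           w_l / np.clip(-s, 1e-6, None))
    return np.minimum(d, d_max)

def dead_end_road(theta, phi, w_l, w_r, depth, d_max=30.0):
    """Toy stand-in for a second hypothesis: the same corridor closed off by a
    wall at distance `depth` straight ahead."""
    wall = depth / np.clip(np.cos(theta - phi), 1e-6, None)
    return np.minimum(straight_road(theta, phi, w_l, w_r, d_max), wall)

def classify(theta, d_obs, candidates):
    """Fit every candidate model and keep the best one.
    candidates: name -> (model_function, initial_parameter_guess)."""
    results = {}
    for name, (model, x0) in candidates.items():
        fit = least_squares(lambda p: model(theta, *p) - d_obs, x0, loss="soft_l1")
        results[name] = (fit.cost, fit.x)
    best = min(results, key=lambda k: results[k][0])
    return best, results[best][1]

rng = np.random.default_rng(0)
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
d_obs = dead_end_road(theta, 0.1, 1.8, 2.2, 8.0) + rng.normal(0.0, 0.05, theta.size)
label, params = classify(theta, d_obs, {
    "straight": (straight_road, [0.0, 2.0, 2.0]),
    "dead_end": (dead_end_road, [0.0, 2.0, 2.0, 10.0]),
})
```

The same selection scheme applies unchanged when the candidate set is straight / right curve / left curve, or straight / right T-shape / left T-shape / T-junction; only the model functions change. In practice, candidates of comparable complexity (or a complexity penalty) avoid ties between nested models.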
  • FIG. 25 is a first explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 25 shows that the road is a straight road.
  • FIG. 26 is a second explanatory diagram illustrating an example of the result of classification in the embodiment. More specifically, FIG. 26 shows the results of classification for the road shown in FIG. 25.
  • In FIG. 26, "straight" indicates the result of estimation by the equation of the straight line, "Right curve" indicates the result of estimation by the equation of the right curve, and "Left curve" indicates the result of estimation by the equation of the left curve.
  • FIG. 26 shows that the linear equation is selected, and the degree of agreement between the estimation result and the observation result is high when the linear equation is used. Therefore, FIG. 26, together with the result of FIG. 25, shows that the shape of the road was estimated with high accuracy.
  • FIG. 27 is a third explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 27 shows that the road is a right curve.
  • FIG. 28 is a fourth explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 28 shows the results of classification for the road shown in FIG. 27.
  • In FIG. 28, "straight" indicates the result of estimation by the equation of the straight line, "Right curve" indicates the result of estimation by the equation of the right curve, and "Left curve" indicates the result of estimation by the equation of the left curve.
  • FIG. 28 shows that the equation of the right curve is selected, and the degree of agreement between the estimation result and the observation result is high when the equation of the right curve is used. Therefore, FIG. 28, together with the result of FIG. 27, shows that the shape of the road was estimated with high accuracy.
  • the classification of intersections is a process of classifying the road with respect to the observation result obtained by the Visual Lidar using the equations of a straight line, a right T-shape, a left T-shape, and a T-junction. More specifically, the classification of intersections is a process of determining whether the target road is a straight line, a right T-shape, a left T-shape, or a T-junction. FIGS. 29, 31, and 33 show an example of the bird's-eye view image and of the distance from the center of the Virtual Lidar to the boundary in the VLS plane. FIGS. 30, 32, and 34 show the results estimated by the equations of a straight line, a right T-shape, a left T-shape, and a T-junction.
  • FIG. 29 is a sixth explanatory diagram illustrating an example of the results of classification in the embodiment.
  • FIG. 29 shows that the shape of the road is a right T-shape.
  • FIG. 30 is a seventh explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 30 shows the results of classification for the road shown in FIG. 29.
  • "straight” indicates the result of estimation by the straight line formula
  • "Left insec” indicates the result of estimation by the left T-shaped formula
  • "T insec” indicates the result of estimation by the junction formula.
  • "Right insec” indicates the result of estimation by the right-to-character formula.
  • FIG. 30 shows that the right-to-character formula is selected, and the degree of agreement between the estimation result and the observation result is high when the right-to-character formula is used. Therefore, FIG. 30 together with the result of FIG. 29 shows that the shape of the road was estimated with high accuracy.
  • FIG. 31 is an eighth explanatory diagram illustrating an example of the results of classification in the embodiment.
  • FIG. 31 shows that the shape of the road is a left T-shape.
  • FIG. 32 is a ninth explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 32 shows the results of classification for the road shown in FIG. 31.
  • In FIG. 32, "straight" indicates the result of estimation by the straight-line equation, "Left insec" indicates the result of estimation by the left T-shape equation, "T insec" indicates the result of estimation by the T-junction equation, and "Right insec" indicates the result of estimation by the right T-shape equation.
  • FIG. 32 shows that the left T-shaped formula is selected, and the degree of agreement between the estimation result and the observation result is high when the left T-shaped formula is used. Therefore, FIG. 32, together with the result of FIG. 31, shows that the shape of the road was estimated with high accuracy.
  • FIG. 33 is a tenth explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 33 shows that the shape of the road is the shape of a junction.
  • FIG. 34 is an eleventh explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 34 shows the results of classification for the road shown in FIG. 33.
  • In FIG. 34, "straight" indicates the result of estimation by the straight-line equation, "Left insec" indicates the result of estimation by the left T-shape equation, "T insec" indicates the result of estimation by the T-junction equation, and "Right insec" indicates the result of estimation by the right T-shape equation.
  • FIG. 34 shows that the equation of the junction is selected, and the degree of agreement between the estimation result and the observation result is high when the equation of the junction is used. Therefore, FIG. 34, together with the result of FIG. 33, shows that the shape of the road was estimated with high accuracy.
  • the horizontal axis of graph G1 of FIG. 26, graph G3 of FIG. 28, graph G5 of FIG. 30, graph G7 of FIG. 32, and graph G9 of FIG. 34 represents the line-of-sight angle, and the vertical axis represents the distance.
  • the horizontal axis of graph G2 of FIG. 26, graph G4 of FIG. 28, graph G6 of FIG. 30, graph G8 of FIG. 32, and graph G10 of FIG. 34 represents the x-axis coordinate value of the VLS plane, and the vertical axis represents the y-axis coordinate value of the VLS plane.
  • the autonomous mobile body control device 1 of the embodiment configured in this way determines the condition for minimizing the error by using the reference area information and the area boundary distance information, and acquires the target information from the determined condition. Therefore, the autonomous mobile body control device 1 configured in this way can improve the accuracy of the movement of the autonomous mobile body 9.
  • when a part of the subject is a dynamic obstacle, the road area may not be correctly grasped.
  • in such a case, the region may be estimated by performing fitting (that is, error minimization processing) that ignores the dynamic obstacle.
  • FIG. 35 is a diagram showing an example of the execution result of fitting (specifically, error minimization processing) in which the dynamic obstacle is ignored by the autonomous mobile body control device 1 when a part of the subject in the modified example is a dynamic obstacle.
  • the horizontal axis of FIG. 35 represents the line-of-sight angle, and the vertical axis of FIG. 35 represents the distance.
  • “Deleted data” in FIG. 35 is an example of the measurement result of Visual Lidar for a dynamic obstacle.
  • "True data" in FIG. 35 is data of a subject that is not a dynamic obstacle. That is, unlike "Deleted data", this data is not ignored in the error minimization process.
  • “Statized curve” in FIG. 35 is an example of the result of fitting (that is, error minimization processing) using only the result of “true data” while ignoring the dynamic obstacle.
  • FIG. 35 shows that, even when a part of the subject is a dynamic obstacle, the region is appropriately estimated by the autonomous mobile body control device 1 by ignoring the dynamic obstacle. A sketch of such obstacle-masked fitting is given below.
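A minimal sketch of such obstacle-masked fitting, assuming the beams whose measurement hit a dynamic obstacle have already been flagged (for example by the segmentation step): those beams are simply excluded from the residual before least-squares fitting. The straight-road model, parameter names, and the synthetic data are illustrative assumptions, not the patent's equations.

```python
import numpy as np
from scipy.optimize import least_squares

def straight_road(theta, phi, w_l, w_r, d_max=30.0):
    """Toy reference-region distance function for an (infinite) straight road."""
    s = np.sin(theta - phi)
    d = np.where(s >= 0.0, w_r / np.clip(s, 1e-6, None),
                           w_l / np.clip(-s, 1e-6, None))
    return np.minimum(d, d_max)

def fit_ignoring_dynamic_obstacles(theta, d_obs, obstacle_mask, x0=(0.0, 2.0, 2.0)):
    """Fit the model using only the "true data": beams flagged as dynamic obstacles
    (obstacle_mask == True) are deleted from the observation before fitting."""
    keep = ~obstacle_mask
    residual = lambda p: straight_road(theta[keep], *p) - d_obs[keep]
    return least_squares(residual, x0, loss="soft_l1").x

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
d_obs = straight_road(theta, 0.1, 1.8, 2.2)
obstacle_mask = np.abs(theta - 0.3) < 0.1     # e.g. a pedestrian blocks a few beams
d_obs[obstacle_mask] = 1.0                    # those beams report the pedestrian, not the road
phi, w_l, w_r = fit_ignoring_dynamic_obstacles(theta, d_obs, obstacle_mask)
```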
  • in the modified example, reference area information using a plurality of reference area expression functions is used.
  • FIG. 36 is a flowchart showing an example of the flow of processing executed by the autonomous mobile control device 1 when a part of the subject in the modified example is a dynamic obstacle.
  • the progress status information acquisition unit acquires the progress status information (step S301).
  • the area boundary distance information acquisition unit 102 acquires the image to be processed (step S302).
  • Next, the area boundary distance information acquisition unit 102 reads from the storage unit 13 a segmentation model, which is a trained model recorded in advance in the storage unit 13 and is a trained model for determining which of the predetermined categories each pixel belongs to (step S303).
  • Predetermined categories include at least dynamic obstacles.
  • Next, the area boundary distance information acquisition unit 102 acquires the pixel values of the pixels centered on the target pixel, which is a pixel of the image to be processed selected according to a predetermined rule (step S304). Next, the area boundary distance information acquisition unit 102 determines the category to which the target pixel belongs by using the segmentation model (step S305). Next, the category to which the target pixel belongs is recorded in the storage unit 13 (step S306). When it is determined in step S305 that the category to which the target pixel belongs is a dynamic obstacle, the dynamic obstacle is recorded in the storage unit 13 as the category to which the target pixel belongs. When a category other than a dynamic obstacle (hereinafter referred to as "category A") is determined in step S305 as the category to which the target pixel belongs, category A is recorded in the storage unit 13 as the category to which the target pixel belongs.
  • After step S306, the area boundary distance information acquisition unit 102 determines whether or not the category has been determined for all the pixels (step S307). When there is a pixel for which the category has not yet been determined (step S307: NO), the area boundary distance information acquisition unit 102 selects the next target pixel according to a predetermined rule (step S308). The next target pixel is, for example, the pixel next to the current target pixel. After step S308, the process returns to step S304. A schematic code sketch of this per-pixel loop is given below.
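A schematic rendering of the per-pixel loop of steps S304 to S308, assuming a trained segmentation model that returns a category for a small patch centered on the target pixel. The category ids, the patch size, and the pixel-by-pixel traversal are illustrative; in practice a fully convolutional model would usually label the whole image in one forward pass, but the loop mirrors the flowchart.

```python
import numpy as np

DYNAMIC_OBSTACLE = 0        # illustrative category id
PATCH = 16                  # half-size of the window handed to the model (illustrative)

def categorize_pixels(image, segmentation_model):
    """image: (H, W, 3) array.  segmentation_model: callable that maps a
    (2*PATCH+1, 2*PATCH+1, 3) patch to an integer category (step S305).
    Walks over every pixel and records the category of each one (steps S304-S308)."""
    h, w = image.shape[:2]
    categories = np.empty((h, w), dtype=np.int32)
    padded = np.pad(image, ((PATCH, PATCH), (PATCH, PATCH), (0, 0)), mode="edge")
    for y in range(h):                       # "select the next target pixel" loop
        for x in range(w):
            patch = padded[y:y + 2 * PATCH + 1, x:x + 2 * PATCH + 1]
            categories[y, x] = segmentation_model(patch)   # step S305
    return categories                        # step S306: the recorded category map
```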
  • When the category has been determined for all the pixels in step S307 (step S307: YES), the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S309).
  • Next, the area boundary distance information acquisition unit 102 acquires the values of the pixels of the image to be processed other than the pixels whose category was determined to be a dynamic obstacle by the processing of step S305 (step S310).
  • Next, the area boundary distance information acquisition unit 102 executes the distance mapping process using the values acquired in step S310 (step S311). Therefore, in the process of step S311, the values of the pixels whose category was determined to be a dynamic obstacle by the processing of step S305 are not used.
  • Next, the area boundary distance information acquisition unit 102 executes a virtual space distance measurement process using the result of step S311 (step S312). By executing the process of step S312, the area boundary distance information is obtained.
  • In this way, the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information of the dynamic obstacle. Deleting the information of the dynamic obstacle means not using the values of the pixels determined to belong to a dynamic obstacle, and specifically means the processing of step S310; a sketch of steps S310 to S312 is given below.
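Steps S310 to S312 can be sketched as follows: keep only the pixels whose category is drivable road (so, in particular, dynamic-obstacle pixels are not used), look up their bird's-eye (VLS-plane) coordinates from a precomputed distance mapping, and sweep a virtual LiDAR over the result, taking the farthest remaining road pixel per angular bin as the area boundary distance. The lookup table xy_of_pixel, the category ids, and the binning are illustrative assumptions standing in for the distance mapping and virtual space distance measurement processes.

```python
import numpy as np

ROAD, DYNAMIC_OBSTACLE = 1, 0      # illustrative category ids

def area_boundary_distances(categories, xy_of_pixel, n_beams=181, fov=np.pi):
    """categories : (H, W) category map from the segmentation step.
    xy_of_pixel : (H, W, 2) precomputed bird's-eye coordinates of each pixel
    (standing in for the distance mapping of step S311), sensor at the origin, +y ahead.
    Returns one distance per line-of-sight angle (the virtual space distance
    measurement of step S312)."""
    keep = categories == ROAD                        # step S310: obstacle pixels unused
    xy = xy_of_pixel[keep]
    theta = np.arctan2(xy[:, 0], xy[:, 1])           # angle measured from straight ahead
    radius = np.hypot(xy[:, 0], xy[:, 1])
    edges = np.linspace(-fov / 2.0, fov / 2.0, n_beams + 1)
    dist = np.zeros(n_beams)
    idx = np.digitize(theta, edges) - 1
    ok = (idx >= 0) & (idx < n_beams)
    np.maximum.at(dist, idx[ok], radius[ok])         # farthest visible road pixel per beam
    return dist
```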
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13 (step S313). More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13. Next, the target information acquisition unit 104 executes an error minimization process using the reference area information and the area boundary distance information acquired in step S312 (step S314).
  • the error minimization process is a process of determining a condition for minimizing an error, which is a difference between the reference area information obtained in step S313 and the area boundary distance information obtained in step S312.
  • the target information acquisition unit 104 executes the target information acquisition process (step S315).
  • the control signal generation unit 105 generates a control signal for controlling the operation of the autonomous mobile body 9 based on the target information, and controls the operation of the autonomous mobile body 9 by the generated control signal (step S316).
  • the error minimization process executed in step S314 is the fitting that ignores dynamic obstacles, described with reference to FIG. 35.
  • the fitting that ignores the dynamic obstacle means the fitting that does not use the data indicating the distance to the dynamic obstacle in the area boundary distance information.
  • the target information acquisition unit 104 acquires the target information by executing an error minimization process for determining a condition for minimizing an error, which is the difference between the reference area information and the area boundary distance information.
  • Reference area information is used in the error minimization process.
  • the reference area information is information using one or more reference area expression functions which are functions representing the position, orientation, and shape of the target area and have one or a plurality of parameters.
  • the area boundary distance information is information indicating the distance from the autonomous mobile body 9 to each position on the boundary of the target area.
  • In this way, the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information of the dynamic obstacle, and the target information acquisition unit 104 acquires the target information by executing the error minimization process for determining the condition for minimizing the error, which is the difference between the reference area information and the area boundary distance information.
  • FIG. 37 is a flowchart showing an example of the flow of generating the segmentation model in the modified example. Before explaining the flowchart, an outline of generating a segmentation model will be given.
  • the segmentation model is obtained by updating, by a machine learning method, a mathematical model prepared in advance (hereinafter referred to as the "learning stage model") that estimates the category to which each pixel of an input image belongs. The trained mathematical model obtained as a result of this updating is the segmentation model.
  • a mathematical model is a set that includes one or more processes in which the conditions and order of execution (hereinafter referred to as "execution rules") are predetermined. For the sake of simplicity of the explanation below, updating a mathematical model by a machine learning method is called learning. Further, updating the mathematical model means appropriately adjusting the values of the parameters included in the mathematical model. Further, the execution of the mathematical model means that each process included in the mathematical model is executed according to the execution rule.
  • the learning stage model may be configured in any way as long as it is a mathematical model updated by a machine learning method.
  • the learning stage model is composed of, for example, a neural network.
  • the learning stage model may be composed of a neural network including, for example, a convolutional neural network.
  • the learning stage model may be composed of a neural network including, for example, an autoencoder.
  • the training sample used for learning the learning stage model is the paired data of the image and the annotation indicating the category to which each pixel of the image belongs.
  • the loss function used to update the learning stage model is a function whose value indicates the difference between the annotation and the category of each pixel estimated based on the input image.
  • Annotations are, for example, data expressed in tensors.
  • Updating the training stage model means updating the values of the parameters included in the training stage model according to a predetermined rule so as to reduce the value of the loss function.
  • the training sample is input to the learning stage model (step S401).
  • Next, the category is estimated for each pixel of the image included in the input training sample (step S402).
  • Next, the values of the parameters included in the learning stage model are updated so as to reduce the value of the loss function (step S403).
  • When the values of the parameters included in the learning stage model are updated, this means that the learning stage model is updated.
  • Next, it is determined whether or not a predetermined end condition (hereinafter referred to as the "learning end condition") is satisfied (step S404).
  • the learning end condition is, for example, a condition that a predetermined number of updates have been performed.
  • When the learning end condition is satisfied (step S404: YES), the learning stage model is recorded in the storage unit 13 as the segmentation model (step S405). On the other hand, if the learning end condition is not satisfied (step S404: NO), the process returns to step S402. Depending on the learning algorithm, the process may instead return to step S401, and a new training sample is input to the learning stage model. A compact code rendering of this training flow is given below.
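A compact PyTorch-style rendering of steps S401 to S405. The tiny fully convolutional network standing in for the learning stage model, the pixel-wise cross-entropy loss, the Adam optimizer, and the fixed update-count end condition are all illustrative choices, not the patent's.

```python
import torch
import torch.nn as nn

NUM_CATEGORIES = 4            # e.g. road, dynamic obstacle, building, other (illustrative)

# learning stage model: here a tiny fully convolutional network that outputs
# one score per category for every pixel of the input image
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CATEGORIES, 1),
)
loss_fn = nn.CrossEntropyLoss()                       # difference between annotation and estimate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(training_samples, max_updates=1000):
    """training_samples: a re-iterable (e.g. a DataLoader) yielding pairs of
    image (B, 3, H, W) float tensor and annotation (B, H, W) long tensor of
    per-pixel categories (the tensor-valued annotation)."""
    updates = 0
    while updates < max_updates:                      # learning end condition: update count
        for image, annotation in training_samples:    # step S401: input a training sample
            estimate = model(image)                   # step S402: per-pixel category scores
            loss = loss_fn(estimate, annotation)      # value of the loss function
            optimizer.zero_grad()
            loss.backward()                           # step S403: update the parameters so as
            optimizer.step()                          #            to reduce the loss value
            updates += 1
            if updates >= max_updates:
                break
    torch.save(model.state_dict(), "segmentation_model.pt")   # step S405: record the model
```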
  • the experiment was aimed at estimating parameters on a straight road. Specifically, there were three parameters: the slope of the road, the width of the road on the right, and the width of the road on the left.
  • In the experiments, three experiments, from the first experiment to the third experiment, were carried out under different conditions.
  • the first experiment was an outdoor experiment using a monocular camera as a photographing device 902.
  • the first experiment was conducted at two places: a first outdoor location and a second outdoor location.
  • the second experiment was an indoor experiment using a monocular camera as the photographing apparatus 902.
  • the second experiment was conducted at two places: a first indoor location and a second indoor location.
  • the third experiment was an indoor experiment using 2DLiDAR (2-dimensional Light Detection and Ranging) as the photographing device 902.
  • the inclinations were -30 degrees, -20 degrees, -10 degrees, 0 degrees, 10 degrees, 20 degrees, and 30 degrees.
  • the road widths were the left road width and the right road width; they were measured during the experiments.
  • FIG. 38 is a diagram showing the experimental environment of the first experiment conducted outdoors in the modified example.
  • FIG. 38 shows a photograph of the first outdoor location.
  • FIG. 39 is a first diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 39 shows the result of projecting the result of segmentation on the image of FIG. 38 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 40 is a second diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 40 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 41 is a third diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 41 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 42 is a diagram showing the experimental environment of the first experiment conducted at the second outdoor location in the modified example.
  • FIG. 42 shows a photograph of the second outdoor location.
  • FIG. 43 is a first diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 43 shows the result of projecting the result of segmentation on the image of FIG. 42 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 44 is a second diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 44 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 45 is a third diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 45 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 46 is a diagram showing the experimental environment of the second experiment conducted at the first indoor location in the modified example.
  • FIG. 46 shows a photograph of the first indoor location.
  • FIG. 47 is a first diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
  • FIG. 47 shows the result of projecting the result of segmentation on the image of FIG. 46 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 48 is a second diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
  • FIG. 48 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 49 is a third diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
  • FIG. 49 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 50 is a diagram showing the experimental environment of the second experiment conducted at the second indoor location in the modified example.
  • FIG. 50 shows a photograph of the second indoor location.
  • FIG. 51 is a first diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
  • FIG. 51 shows the result of projecting the result of segmentation on the image of FIG. 50 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 52 is a second diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
  • FIG. 52 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 53 is a third diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
  • FIG. 53 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 54 is a diagram showing the experimental environment of the third experiment in the modified example.
  • FIG. 54 shows a photograph of the place where the third experiment was performed.
  • 2DLiDAR was used as the photographing apparatus 902.
  • FIG. 55 is the first diagram showing the results of the third experiment in the modified example.
  • FIG. 55 shows that the shape of the road can be properly estimated using the observed values in the VLS plane. This is because there is a wall at the boundary of the road area.
  • FIG. 56 is a second diagram showing the results of the third experiment in the modified example.
  • FIG. 56 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values. This is because there is a wall at the boundary of the road area.
  • FIG. 57 is a first diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first experiment to the second experiment.
  • FIG. 57 shows that, in the outdoor experiments, there was an average error of 7.08 degrees for the slope, an average error of 0.670 meters for the left road width, and an average error of 0.634 meters for the right road width.
  • FIG. 57 shows that, in the indoor experiments, there was an average error of 6.41 degrees for the slope, an average error of 0.363 meters for the left road width, and an average error of 0.356 meters for the right road width.
  • FIG. 58 is a second diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first experiment to the second experiment.
  • FIG. 58 is a result of standardizing the result of FIG. 57.
  • the inclination range used for standardization was -60 to 60 degrees, and the standard road width was 4.0 meters in the outdoor experiments and 1.92 meters in the indoor experiments.
  • FIG. 58 shows that, in the outdoor experiments, the error rate was 5.9 percent for the slope, 17.6 percent for the left road width, and 16.8 percent for the right road width.
  • FIG. 58 shows that, in the indoor experiments, the error rate was 5.34 percent for the slope, 9.57 percent for the left road width, and 9.47 percent for the right road width.
  • FIG. 59 is a diagram showing the experimental environment of the control experiment.
  • 2DLiDAR was used as the imaging device 902.
  • FIG. 59 is an image of a photograph showing the experimental environment of the control experiment. As shown in FIG. 59, the control experiment was performed in an outdoor environment similar to the first experiment performed outdoors.
  • FIG. 60 is a diagram showing an example of the experimental results of the control experiment.
  • the horizontal axis of the figure indicates the viewing angle [°], and the vertical axis of the figure indicates the distance.
  • the graph in the figure is the result of measurement by 2DLiDAR in the control experiment. The measurement result in the area where no reflector was present was recorded as 0 meters.
  • the area boundary distance information does not necessarily have to be acquired from the photographing apparatus 902.
  • the area boundary distance information may be acquired from an information processing device communicably connected via a network, such as a management device, for example a server on the network.
  • the image to be processed does not necessarily have to be acquired from the photographing apparatus 902.
  • the image to be processed may be acquired from an information processing device communicably connected via a network, such as a management device, for example a server on the network.
  • the autonomous mobile body control device 1 may be implemented by using a plurality of information processing devices that are communicably connected via a network.
  • in that case, each functional unit included in the autonomous mobile body control device 1 may be distributed and implemented across the plurality of information processing devices.
  • the program may be recorded on a computer-readable recording medium.
  • the computer-readable recording medium is, for example, a flexible disk, a magneto-optical disk, a portable medium such as a ROM or a CD-ROM, or a storage device such as a hard disk built in a computer system.
  • the program may be transmitted over a telecommunication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An embodiment of the present invention is an autonomous moving body control device comprising: a region boundary distance information acquisition unit for acquiring region boundary distance information, which is information indicating the distance from an autonomous moving body subject to control to each position on the boundary of a subject region, which is a region in which the autonomous moving body is positioned; and a target information acquisition unit for acquiring target information, which is information indicating the relationship between the autonomous moving body and the subject region, on the basis of the region boundary distance information and reference region information indicating candidates for the shape, the orientation, and the position of the subject region.

Description

Autonomous mobile control device, autonomous mobile control method, and program
The present invention relates to an autonomous mobile control device, an autonomous mobile control method, and a program.
Research and development of technologies for improving the accuracy of movement of autonomous moving bodies, such as robots that move autonomously, is active. In such research and development, for example, attempts have been made to improve the accuracy of movement by using 3DLiDAR (3-dimensional Light Detection and Ranging) and by using a monocular camera. As one such attempt, an attempt has been made to set a trapezoidal region of interest and to detect the road edge by detecting the reflected light from the road surface within that region (see Patent Document 1).
Japanese Unexamined Patent Publication No. 2011-118889; Japanese Unexamined Patent Publication No. 2020-154751
However, with conventional technologies it is difficult to estimate the deviation between the direction of a road, such as at an intersection, and the traveling direction of the autonomous moving body, or the relative positional relationship between a landmark such as an intersection and the autonomous moving body, so the autonomous moving body sometimes cannot move appropriately. Moreover, such situations occur not only at intersections but also in scenes that cannot be predicted in advance, such as traveling in a situation where a large number of pedestrians are present.
In view of the above circumstances, an object of the present invention is to provide a technique for improving the accuracy of movement of an autonomous moving body.
One aspect of the present invention is an autonomous moving body control device including: an area boundary distance information acquisition unit that acquires area boundary distance information, which is information indicating the distance from an autonomous moving body to be controlled to each position on the boundary of a target area, which is the area where the autonomous moving body is located; and a target information acquisition unit that acquires target information, which is information indicating the relationship between the autonomous moving body and the target area, based on the area boundary distance information and reference area information indicating candidates for the position, orientation, and shape of the target area.
One aspect of the present invention is the above autonomous moving body control device, in which the target information acquisition unit executes a process of determining a condition that minimizes an error, which is the difference between the graph of a map expressing the reference area information and the graph of a map expressing the area boundary distance information, and acquires the target information based on the condition obtained as the execution result.
One aspect of the present invention is the above autonomous moving body control device, in which the reference area information changes based on at least a parameter representing the state of the target area as seen from the autonomous moving body.
One aspect of the present invention is the above autonomous moving body control device, in which the reference area information includes information indicating the position of a boundary, among the boundaries of the target area, that is not photographed by a photographing device that runs in parallel with the autonomous moving body and faces the direction of the autonomous moving body.
One aspect of the present invention is the above autonomous moving body control device, in which the target information acquisition unit acquires the target information by executing an error minimization process that determines a condition for minimizing an error, which is the difference between the reference area information and the area boundary distance information, and the error minimization process uses reference area information expressed by one or more reference area expression functions, which are functions representing the position, orientation, and shape of the target area and having one or a plurality of parameters.
One aspect of the present invention is the above autonomous moving body control device, in which the area boundary distance information acquisition unit acquires the area boundary distance information after deleting the information of the dynamic obstacle, and the target information acquisition unit acquires the target information by executing an error minimization process for determining a condition for minimizing an error, which is the difference between the reference area information and the area boundary distance information.
One aspect of the present invention is an autonomous moving body control method including a target information acquisition step of acquiring target information, which is information indicating the relationship between an autonomous moving body to be controlled and a target area, based on reference area information indicating candidates for the position, orientation, and shape of the target area, which is the area where the autonomous moving body is located, and area boundary distance information, which is information indicating the distance from the autonomous moving body to each position on the boundary of the target area.
One aspect of the present invention is a program for causing a computer to function as the above autonomous moving body control device.
According to the present invention, it is possible to improve the accuracy of movement of an autonomous moving body.
FIG. 1: An explanatory diagram illustrating an outline of the autonomous mobile body control device 1 of the embodiment.
FIG. 2: A diagram showing an example of the image to be processed in the embodiment.
FIG. 3: A diagram showing an example of the result of the area division processing in the embodiment.
FIG. 4: A diagram showing an example of the distance image in the embodiment.
FIG. 5: A diagram showing an example of the result of projecting the segmentation result shown in FIG. 3 onto the VLS plane in the embodiment.
FIG. 6: A diagram showing an example of the result of the error minimization processing in the embodiment.
FIG. 7: A diagram showing an example of the target information in the embodiment.
FIG. 8: A diagram showing an example of the out-of-field-of-view boundary in the embodiment.
FIG. 9: A diagram showing an example of the result of the error minimization processing when there is an intersection in the embodiment.
FIG. 10: A diagram showing an example of the result of the segmentation of an intersection in the embodiment.
FIG. 11: A diagram showing an example of the result of projecting the segmentation result at an intersection onto a bird's-eye view, and the visual field boundary distance, in the embodiment.
FIG. 12: A diagram showing an example of the functional configuration of the autonomous mobile body control device 1 of the embodiment.
FIG. 13: A diagram showing an example of the functional configuration of the control unit 10 in the embodiment.
FIG. 14: A diagram showing an example of the flow of processing executed by the autonomous mobile body control device 1 of the embodiment.
FIG. 15: A flowchart showing an example of the flow of processing in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information in the embodiment.
FIG. 16: An explanatory diagram illustrating an example of the relationship between the moving body main body 905 of the autonomous moving body 9, the photographing device 902, and the horizontal plane in the embodiment.
FIG. 17: A diagram showing an example of the parameters used in the equations expressing distance in the embodiment.
FIG. 18: An explanatory diagram illustrating the parameters used to formulate the shape of a road whose shape is straight in the embodiment.
FIG. 19: A diagram showing an example of the propagation of the signal transmitted from the center of the Virtual Lidar when the shape of the road is a straight line in the embodiment.
FIG. 20: An explanatory diagram illustrating the parameters used to formulate the shape of a road whose shape is a curve in the embodiment.
FIG. 21: A first diagram showing an example of the auxiliary points used for formulating the second distance equation in the embodiment.
FIG. 22: A second diagram showing an example of the auxiliary points used for formulating the second distance equation in the embodiment.
FIG. 23: A diagram showing an example of the shape of the T-shaped road in the embodiment.
FIG. 24: A diagram showing an example of the auxiliary points used for formulating the third distance equation in the embodiment.
FIG. 25: A first diagram showing an example of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane in the embodiment.
FIG. 26: A first diagram showing an example of the results estimated by the equations of a straight line, a right curve, and a left curve in the embodiment.
FIG. 27: A second diagram showing an example of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane in the embodiment.
FIG. 28: A second diagram showing an example of the results estimated by the equations of a straight line, a right curve, and a left curve in the embodiment.
FIG. 29: A diagram showing an example of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane in the embodiment.
FIG. 30: A diagram showing an example of the results estimated by the equations of a straight line, a right T-shape, a left T-shape, and a T-junction in the embodiment.
FIG. 31: A diagram showing an example of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane in the embodiment.
FIG. 32: A diagram showing an example of the results estimated by the equations of a straight line, a right T-shape, a left T-shape, and a T-junction in the embodiment.
FIG. 33: A diagram showing an example of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane in the embodiment.
FIG. 34: A diagram showing an example of the results estimated by the equations of a straight line, a right T-shape, a left T-shape, and a T-junction in the embodiment.
FIG. 35: A diagram showing an example of the execution result of the error minimization processing in which the dynamic obstacle is ignored by the autonomous mobile body control device 1 when a part of the subject in the modified example is a dynamic obstacle.
FIG. 36: A flowchart showing an example of the flow of processing executed by the autonomous mobile body control device 1 when a part of the subject in the modified example is a dynamic obstacle.
FIG. 37: A flowchart showing an example of the flow of generating the segmentation model in the modified example.
FIG. 38: A diagram showing the experimental environment of the first experiment conducted at the first outdoor location in the modified example.
FIG. 39: A first diagram showing the results of the first experiment conducted at the first outdoor location in the modified example.
FIG. 40: A second diagram showing the results of the first experiment conducted at the first outdoor location in the modified example.
FIG. 41: A third diagram showing the results of the first experiment conducted at the first outdoor location in the modified example.
FIG. 42: A diagram showing the experimental environment of the first experiment conducted at the second outdoor location in the modified example.
FIG. 43: A first diagram showing the results of the first experiment conducted at the second outdoor location in the modified example.
FIG. 44: A second diagram showing the results of the first experiment conducted at the second outdoor location in the modified example.
FIG. 45: A third diagram showing the results of the first experiment conducted at the second outdoor location in the modified example.
FIG. 46: A diagram showing the experimental environment of the second experiment conducted at the first indoor location in the modified example.
FIG. 47: A first diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
FIG. 48: A second diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
FIG. 49: A third diagram showing the results of the second experiment conducted at the first indoor location in the modified example.
FIG. 50: A diagram showing the experimental environment of the second experiment conducted at the second indoor location in the modified example.
FIG. 51: A first diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
FIG. 52: A second diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
FIG. 53: A third diagram showing the results of the second experiment conducted at the second indoor location in the modified example.
FIG. 54: A diagram showing the experimental environment of the third experiment in the modified example.
FIG. 55: A first diagram showing the results of the third experiment in the modified example.
FIG. 56: A second diagram showing the results of the third experiment in the modified example.
FIG. 57: A first diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first to third experiments.
FIG. 58: A second diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first to third experiments.
FIG. 59: A diagram showing the experimental environment of the control experiment.
FIG. 60: A diagram showing the experimental results of the control experiment.
(Embodiment)
FIG. 1 is an explanatory diagram illustrating an outline of the autonomous mobile body control device 1 of the embodiment. The autonomous mobile body control device 1 controls the movement of the autonomous mobile body 9 to be controlled. The autonomous mobile body 9 is a moving body that moves autonomously, such as a robot or an automobile. Specifically, the autonomous mobile body control device 1 acquires area boundary distance information for each position of the autonomous mobile body 9, and acquires information indicating the relationship between the autonomous mobile body 9 and the target area (hereinafter referred to as "target information") based on the reference area information and the acquired area boundary distance information.
The target area means an area in the space where the autonomous mobile body 9 is located. A region means a region in space. The region is, for example, a road on which the autonomous mobile body 9 can travel. In FIG. 1, the road 900 is an example of the target area. In FIG. 1, the arrow 901 indicates the direction in which the autonomous mobile body 9 advances. In FIG. 1, the photographing device 902 is a photographing device that runs in parallel with the autonomous mobile body 9 and faces the same direction as the autonomous mobile body 9. The photographing device 902 is, for example, a 3DLiDAR (3-dimensional Light Detection and Ranging).
The photographing device 902 may be a monocular camera. The photographing device 902 may be provided on the autonomous moving body 9, or may be provided on another moving body, such as a drone, that runs in parallel with the autonomous moving body 9. FIG. 1 shows, as an example, the case where the autonomous moving body 9 is equipped with the photographing device 902. Since the photographing device 902 runs in parallel with the autonomous moving body 9 and faces the direction of the autonomous moving body 9, a direction seen from the autonomous moving body 9 is a direction seen from the photographing device 902. Further, since the photographing device 902 runs in parallel with the autonomous moving body 9 and is at the same position as, or at a fixed distance from, the autonomous moving body 9, the distance from the autonomous moving body 9 to an object is the distance from the photographing device 902 to that object.
The target information indicates, for example, where the autonomous mobile body 9 is located within the width of the target area. The target information indicates, for example, the relationship between the direction of the target area and the direction of the autonomous mobile body 9. The target information indicates, for example, whether or not the target area is an intersection at the position of the autonomous mobile body 9. The target information indicates, for example, the direction of each region that intersects at the intersection when the target area is an intersection at the position of the autonomous mobile body 9.
The area boundary distance information is information indicating the distance from the autonomous moving body 9 to each position on the boundary of the target area (hereinafter referred to as the "area boundary distance"). The area boundary distance information is, for example, information indicating, for each direction seen from the autonomous moving body 9 centered on the autonomous moving body 9 (hereinafter referred to as the "line-of-sight direction"), the distance from the autonomous moving body 9 to a subject located on the boundary of the road in that line-of-sight direction as the area boundary distance. The area boundary distance information is, for example, information displayed as a graph showing the area boundary distance for each line-of-sight direction. The subject is, for example, a shielding object.
The area boundary distance information is, for example, a measurement result obtained by the 3DLiDAR when the photographing device 902 is a 3DLiDAR (3-dimensional Light Detection and Ranging). That is, when the photographing device 902 is a 3DLiDAR, the area boundary distance is the distance of the signal. When the photographing device 902 is a 3DLiDAR, the line-of-sight direction is the direction seen from the 3DLiDAR.
The area boundary distance information may be acquired, for example, by calculation using a distance image obtained in advance and the result of machine learning learned in advance, based on an image taken by a monocular camera provided on the autonomous moving body 9. The distance image is, for example, the result of photographing a horizontal plane. The distance image may be not only actually observed data but also a result calculated based on the internal parameters of the camera.
When the photographing device 902 is a monocular camera, the area boundary distance information is, for example, a result acquired by the autonomous mobile body control device 1 by calculation using a distance image obtained in advance and the result of machine learning learned in advance, based on the photographing result of the photographing device 902.
The result of machine learning learned in advance is, specifically, a process of determining, from the photographing result of the monocular camera, the pixels indicating the region in which the autonomous moving body 9 can move, such as semantic segmentation. When the photographing device 902 is a monocular camera, the line-of-sight direction is the direction seen from the monocular camera.
The reference area information is information representing candidates for the position, orientation, and shape of the target area. Specifically, the reference area information is a function representing the position, orientation, and shape of the target area and having one or a plurality of parameters (hereinafter referred to as a "reference area expression function"). That is, the reference area information is specifically a mathematical model representing candidates for the position, orientation, and shape of the target area. The parameters define the form of the function representing the shape of the target area and are parameters related to the shape of the target area, such as the width of the target area. The reference area expression function is, for example, a function indicating the correspondence between the area boundary distance and the line-of-sight angle, and is a function including one or a plurality of parameters. The line-of-sight angle is an angle indicating each line-of-sight direction in a predetermined plane.
 パラメータは、少なくとも自律移動体9から見た対象領域の状態を表すパラメータを含む。自律移動体制御装置1による目的情報の取得に際してパラメータが用いられる際には、自律移動体制御装置1は参照領域表現関数を、少なくとも自律移動体9から見た対象領域の状態を表すパラメータに基づいて変化させる。すなわち、参照領域情報は少なくとも自律移動体9から見た対象領域の状態を表すパラメータに基づいて変化する。自律移動体9から見た対象領域の状態は、例えば自律移動体9から見た対象領域の傾き、幅や長さである。 The parameters include at least a parameter representing the state of the target area as seen from the autonomous mobile body 9. When the parameter is used when the target information is acquired by the autonomous mobile control device 1, the autonomous mobile control device 1 uses a reference area expression function based on a parameter representing at least the state of the target area as seen from the autonomous mobile body 9. To change. That is, the reference area information changes at least based on the parameter representing the state of the target area as seen from the autonomous mobile body 9. The state of the target area seen from the autonomous moving body 9 is, for example, the inclination, width, and length of the target area seen from the autonomous moving body 9.
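As a concrete illustration, the following is a minimal Python sketch of a reference area representation function for a straight road: a function that maps a line-of-sight angle to the distance from the autonomous moving body to the road boundary, parameterized by the inclination and the left and right widths of the road. The parameter names mirror the θ symbols used later in the text; everything else (the infinite road length, the angle convention) is an assumption, not the formulation actually used in the embodiment.

import math

def straight_road_boundary_distance(chi, theta_angle, theta_l_width, theta_r_width):
    """Distance from the origin to the boundary of an (infinitely long) straight
    road along line-of-sight angle chi, for a road rotated by theta_angle.
    A toy stand-in for a reference area representation function."""
    # Component of the ray direction perpendicular to the road axis.
    lateral = math.sin(chi - theta_angle)
    if lateral > 1e-9:        # the ray exits through the right edge
        return theta_r_width / lateral
    if lateral < -1e-9:       # the ray exits through the left edge
        return theta_l_width / -lateral
    return float("inf")       # ray parallel to the road: no boundary is hit

# Example: a road 2 m wide on each side, rotated by 10 degrees, viewed at 60 degrees.
d = straight_road_boundary_distance(math.radians(60), math.radians(10), 2.0, 2.0)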
Note that the reference area representation function does not have to represent only a single target area without branches; it may represent an area with branches. For example, when the area is a road, the reference area representation function may represent a single road, or may represent a branching road including the branch. For simplicity of description, the autonomous moving body control device 1 is described below by taking the case where the reference area representation function represents a single road as an example. In FIG. 1, the image 903 is a figure expressing an example of the shape represented by the reference area information on a bird's-eye view. In the figure shown in the image 903, a notch corresponding to the field of view of the photographing device 902 exists near the lower vertex of the parallelogram.
The process by which the autonomous moving body control device 1 acquires the target information based on the reference area information and the area boundary distance information (hereinafter referred to as the "extraction process") desirably includes an error minimization process and a target information acquisition process. In the extraction process, the target information acquisition process is executed after the error minimization process is executed.
The error minimization process is an optimization process that determines the parameter values giving the minimum of the difference (hereinafter referred to as the "error") between the graph of the mapping expressing the reference area information and the graph of the mapping expressing the acquired area boundary distance information. That is, the error minimization process is a process of determining the condition that minimizes the error. Hereinafter, a value determined by the error minimization process as a parameter value giving the minimum error is referred to as a determined value, and the reference area representation function specified by the determined values is referred to as the determined function. The graph of a mapping may be defined, for example, as the set of ordered pairs (a, b) of the set A × B for which b = f(a) holds when a mapping f: A → B is given; the generally used definition of the graph of a mapping may be adopted in this specification.
The target information acquisition process is a process of acquiring the target information based on the determined function. The target information acquisition process is, for example, a process of acquiring the two peak positions indicated by the determined function as the road edges of the target area.
As described above, the extraction process acquires the target information by performing an optimization process using the area boundary distance information and the reference area information. In FIG. 1, the image 904 shows an example of the result of the error minimization process. The details of the image 904 will be described with reference to FIG. 6, after the Virtual Lidar processing (hereinafter referred to as "VLS processing") and one specific example of the error minimization process are described.
(Details of VLS processing)
The details of VLS processing are described by taking the case where the photographing device 902 is a monocular camera mounted on the autonomous moving body 9 as an example. VLS processing is an example of a technique for obtaining the area boundary distance information by computation, based on an image captured by the photographing device 902 (hereinafter referred to as the "processing target image"), using a distance image obtained in advance and the result of machine learning trained in advance. VLS processing is a technique used by the autonomous moving body control device 1. VLS processing includes an area division process, a distance association process, a boundary pixel information acquisition process, and an in-virtual-space distance measurement process. In VLS processing, the area division process, the distance association process, and the boundary pixel information acquisition process are executed before the in-virtual-space distance measurement process, and the area division process is executed before the boundary pixel information acquisition process. Therefore, in VLS processing, for example, the boundary pixel information acquisition process is executed after the area division process and the distance association process, and the in-virtual-space distance measurement process is executed after that. The area division process and the distance association process may be executed in either order or simultaneously. Likewise, the distance association process and the boundary pixel information acquisition process may be executed in either order or simultaneously.
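The order constraints described above can be summarized in a short Python sketch. The function names below are placeholders for the sub-processes and are assumptions, not an API defined by the embodiment; the boundary extraction compares 4-neighbour labels, and the final measurement step is left as a stub (a concrete version is sketched after FIG. 5 below).

import numpy as np

def vls_processing(image, distance_image, segmentation_model):
    """A minimal sketch of the VLS processing order: area division and distance
    association first, boundary pixel extraction after area division, and the
    in-virtual-space distance measurement last."""
    labels = segmentation_model(image)                # area division
    boundary_mask = extract_boundary_pixels(labels)   # boundary pixel information
    # Distance association: per-pixel ground distances precomputed under the
    # assumption that everything in the image lies on a horizontal plane.
    boundary_distances = distance_image[boundary_mask]
    return measure_on_vls_plane(boundary_mask, boundary_distances)

def extract_boundary_pixels(labels):
    """Pixels whose label differs from a 4-neighbour are treated as boundary pixels."""
    mask = np.zeros_like(labels, dtype=bool)
    mask[:-1, :] |= labels[:-1, :] != labels[1:, :]
    mask[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    return mask

def measure_on_vls_plane(boundary_mask, boundary_distances):
    # Placeholder for the in-virtual-space distance measurement.
    return boundary_distances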
The area division process is a process in which the autonomous moving body control device 1 determines, for each pixel of the processing target image, what kind of region the image captured by that pixel belongs to, using the result of machine learning trained in advance to classify the regions in an image, such as segmentation. By executing the area division process, the autonomous moving body control device 1 obtains information that distinguishes each region captured in the processing target image from the other regions (hereinafter referred to as "distinction information"). For example, by the execution of the area division process by the autonomous moving body control device 1, information that distinguishes the target area captured in the processing target image from the other regions is obtained.
FIG. 2 is a diagram showing an example of the processing target image in the embodiment. The image of FIG. 2 shows, as the target area, a road running from the lower right to the upper left of the image. In the image of FIG. 2, one of the road edges of the road, which is the target area, is the boundary with a lawn.
FIG. 3 is a diagram showing an example of the result of the area division process in the embodiment. More specifically, FIG. 3 is a diagram showing an example of the result of segmentation of the processing target image. In FIG. 3, the target area captured in the processing target image is represented as distinguished from the other regions captured in the processing target image.
The distance association process is a process in which the autonomous moving body control device 1 acquires, for each pixel of the processing target image, the distance information indicated by the pixel of the distance image that is associated in advance with that pixel, as information indicating an attribute of that pixel of the processing target image. Hereinafter, the information indicating the attribute of each pixel of the processing target image to which each pixel of the distance image corresponds is referred to as plane pixel distance information. In the distance association process, the plane pixel distance information is acquired under the assumption that everything captured in the processing target image lies on a horizontal plane. Since the distance image is an image of the scene viewed by the photographing device 902, the distance indicated by each pixel of the distance image is information indicating the distance from the photographing device 902 to the object captured by that pixel. That is, since the distance image is an image of the scene viewed by the photographing device 902, the distance indicated by each pixel of the distance image is information indicating the distance from the autonomous moving body 9 to the object captured by that pixel.
FIG. 4 is a diagram showing an example of the distance image in the embodiment. The distance image of FIG. 4 is a distance image obtained by the photographing device 902, whose line of sight is parallel to the horizontal plane, imaging the horizontal plane. In the distance image of FIG. 4, the lighter the color, the shorter the distance to the photographing device 902.
The boundary pixel information acquisition process is a process in which the autonomous moving body control device 1 acquires, based on the distinction information, information indicating which pixels of the processing target image capture boundaries between regions (hereinafter referred to as "boundary pixel information").
The in-virtual-space distance measurement process is a process in which the autonomous moving body control device 1 acquires, based on the boundary pixel information and the plane pixel distance information, the distance from the photographing device 902 to the object captured by each pixel indicated by the boundary pixel information. The in-virtual-space distance measurement process is therefore a process in which the autonomous moving body control device 1 acquires the area boundary distance information based on the boundary pixel information and the plane pixel distance information.
The in-virtual-space distance measurement process is, for example, a process in which the autonomous moving body control device 1 obtains by computation the distance from the origin to the boundary of a region captured in the processing target image on the VLS plane. The VLS plane is a virtual space having a two-dimensional coordinate system centered on the position of the autonomous moving body 9 (that is, the position of the photographing device 902), in which the object captured by each pixel indicated by the boundary pixel information is placed at the distance indicated by the plane pixel distance information from the autonomous moving body 9. The origin of the VLS plane is the position in the virtual space at which the autonomous moving body 9 is located. Measurement on the VLS plane is a process, executed by computation by the autonomous moving body control device 1, of emitting a LiDAR signal from the origin on the VLS plane, calculating the time until the scattering or reflection of the LiDAR signal returns to the origin, and converting the calculated time into a distance. The result of the measurement on the VLS plane is therefore an example of the area boundary distance information.
FIG. 5 is a diagram showing an example of the result of projecting the segmentation result shown in FIG. 3 onto the VLS plane in the embodiment. The horizontal and vertical axes of the result of FIG. 5 indicate the axes of the Galilean coordinate system. The position where the value on the horizontal axis of FIG. 5 is 0 and the value on the vertical axis is 0 is the position of the photographing device 902 (that is, the position of the autonomous moving body 9). The boundary of the trapezoidal region A1 in FIG. 5 represents the boundary of the viewing angle of the photographing device 902.
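A minimal Python sketch of the measurement on the VLS plane is given below. Instead of literally simulating signal propagation and time of flight, the sketch bins the boundary points placed on the VLS plane by line-of-sight angle and keeps the nearest point per angular bin; the binning approach, the angle convention, and all names are assumptions.

import numpy as np

def virtual_lidar_scan(boundary_xy, num_beams=360):
    """boundary_xy: (N, 2) array of boundary-pixel positions on the VLS plane,
    relative to the Virtual Lidar origin (the position of the autonomous moving
    body). Returns (beam_angles, scan): for each of num_beams line-of-sight
    angles, the distance to the nearest boundary point in that angular bin
    (NaN if the bin is empty)."""
    x, y = boundary_xy[:, 0], boundary_xy[:, 1]
    dist = np.hypot(x, y)
    ang = np.arctan2(y, x)                                  # angle of each boundary point
    bins = ((ang + np.pi) / (2 * np.pi) * num_beams).astype(int) % num_beams
    scan = np.full(num_beams, np.nan)
    for b, d in zip(bins, dist):
        if np.isnan(scan[b]) or d < scan[b]:
            scan[b] = d                                     # keep the nearest return per beam
    beam_angles = -np.pi + (np.arange(num_beams) + 0.5) * 2 * np.pi / num_beams
    return beam_angles, scan

# Example: three boundary points at 1 m, 2 m and 1.5 m from the origin.
pts = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.0]])
beam_angles, scan = virtual_lidar_scan(pts)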
(Specific examples of the error minimization process and the target information acquisition process)
Specific examples of the error minimization process and the target information acquisition process are described by taking the case where the photographing device 902 is a monocular camera as an example.
Figure JPOXMLDOC01-appb-M000001
VLS_road is the distance from the origin on the VLS plane to the boundary of the reference area information. VLS_parallelogram is the distance from the origin on the VLS plane to the boundary of the approximate shape when the shape of a target area such as a road is approximated by a predetermined shape such as a parallelogram (hereinafter referred to as the "approximate shape") without considering the boundary of the field of view of the photographing device 902. More specifically, the approximate shape is a shape that approximates the shape of the target area with a predetermined shape without considering the boundary of the field of view of the photographing device 902, and is a figure that approximates the shape of the target area expressed on a bird's-eye view. The approximate shape is, for example, a parallelogram.
VLS_map represents the distance from the origin on the VLS plane to the boundary of the field of view of the photographing device 902. VLS_map is calculated based on the external parameters a_e.param of the monocular camera, the internal parameters a_i.param of the monocular camera, and the measurement range a_range of the map. The external parameters a_e.param are, specifically, for example the position and orientation of the monocular camera. The internal parameters a_i.param are, specifically, for example the focal length and the image center of the monocular camera. The distance from the origin on the VLS plane to the boundary of the field of view of the photographing device 902 depends on the position a_center of the monocular camera on the VLS plane.
VLS_road depends on the inclination θ_angle of the target area, the length θ_length of the target area, the left width θ_l.width of the area, and the right width θ_r.width of the area. Hereinafter, for simplicity of description, the internal parameters a_i.param of the monocular camera are denoted Θ, the external parameters of the monocular camera are denoted A, and the line-of-sight angle is denoted χ. In such a case, VLS_road is formulated by the following equations (2) to (4).
Figure JPOXMLDOC01-appb-M000002
Figure JPOXMLDOC01-appb-M000003
Figure JPOXMLDOC01-appb-M000004
VLS_road formulated by equations (2) to (4) is an example of the reference area information. In the error minimization process, the values of the parameters included in VLS_road are estimated. The estimation is performed, for example, by an optimization that minimizes the error by the least squares method. In such a case, the result of the optimization by the least squares method is the determined function.
An example of the formula for calculating the squared error is equation (5) below, and an example of the formula for calculating the parameters is equation (6) below.
Figure JPOXMLDOC01-appb-M000005
Figure JPOXMLDOC01-appb-M000006
x_i represents the line-of-sight angle on the VLS plane, and y_i represents the distance from the origin on the VLS plane at the line-of-sight angle x_i.
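A minimal Python sketch of the error minimization described by equations (5) and (6) follows: the shape parameters are chosen so as to minimize the sum of squared differences between the measured distances y_i and the model distances at the line-of-sight angles x_i. A vectorized toy model stands in for the reference area representation function, and scipy's least-squares routine stands in for the optimizer; both are assumptions rather than the embodiment's actual implementation.

import numpy as np
from scipy.optimize import least_squares

def fit_road_model(x, y, model, theta0):
    """x: line-of-sight angles, y: measured boundary distances (the area
    boundary distance information), model(x, theta): reference area
    representation function, theta0: initial parameter guess.
    Returns the determined values (the parameters minimizing the squared error)."""
    residual = lambda theta: model(x, theta) - y
    return least_squares(residual, theta0).x

# Toy model r(chi) = w / |sin(chi - a)| with parameters theta = (a, w).
def toy_model(x, theta):
    a, w = theta
    return w / np.maximum(np.abs(np.sin(x - a)), 1e-6)

x = np.linspace(0.2, np.pi - 0.2, 50)
y = toy_model(x, [0.1, 2.0]) + np.random.normal(0.0, 0.05, x.size)
theta_hat = fit_road_model(x, y, toy_model, theta0=[0.0, 1.0])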
FIG. 6 shows an example of the result of performing the error minimization process using equations (1) to (6). FIG. 6 is a diagram showing an example of the result of the error minimization process in the embodiment. The horizontal axis of FIG. 6 indicates the line-of-sight angle. The vertical axis of FIG. 6 represents distance, in units of distance. FIG. 6 shows an example of the area boundary distance information and an example of the result of the error minimization process. FIG. 6 shows that the result of the error minimization process matches the graph indicated by the area boundary distance information with high accuracy. The result of FIG. 6 shows that there are peaks at a line-of-sight angle of 140° and a line-of-sight angle of 170°. The line-of-sight angles of 140° and 170° each indicate an edge of the area, such as a road shoulder. Therefore, the line-of-sight angle indicating the center between the two peaks is the angle indicating the center of the target area.
In general, an area such as a road has two edges, so two peaks, as shown in the result of FIG. 6, appear for each area in the result of the error minimization process. Therefore, in the target information acquisition process, the line-of-sight angle indicating the center between the two peaks is acquired from the result of the error minimization process as the line-of-sight angle indicating the center of the target area. In this way, information indicating the direction of the target area is acquired by executing the target information acquisition process.
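A minimal Python sketch of this target information acquisition step follows: locate the two peaks of the fitted (determined) curve and take the mid-angle between them as the direction of the center of the target area. scipy.signal.find_peaks and the choice of the two most prominent peaks are assumptions made purely for illustration.

import numpy as np
from scipy.signal import find_peaks

def road_center_angle(angles, fitted_distances):
    """angles: line-of-sight angles; fitted_distances: the determined function
    evaluated at those angles. Returns the angle midway between the two most
    prominent peaks, interpreted as the direction of the target area."""
    peak_idx, props = find_peaks(fitted_distances, prominence=0.0)
    if len(peak_idx) < 2:
        raise ValueError("expected two road-edge peaks")
    order = np.argsort(props["prominences"])[-2:]   # the two road-edge peaks
    edges = np.sort(angles[peak_idx[order]])
    return float(edges.mean())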
FIG. 7 is a diagram showing an example of the target information in the embodiment. The vertical axis of FIG. 7 represents the distance from the autonomous moving body 9, in units of distance. The horizontal axis of FIG. 7 indicates each line-of-sight direction in the horizontal plane. More specifically, the horizontal axis of FIG. 7 indicates the angle indicating each line-of-sight direction in the horizontal plane (that is, the line-of-sight angle in the horizontal plane). Therefore, when the photographing device 902 is a 3D LiDAR, the horizontal axis of FIG. 7 indicates the measurement angle. In FIG. 7, the traveling direction of the autonomous moving body 9 is the 180° direction.
In FIG. 7, the angle of 200° on the horizontal axis is the direction in the horizontal plane at an angle of 20° from the traveling direction of the autonomous moving body 9. FIG. 7 shows the boundary of the road under ideal conditions and the boundary of the field of view of the photographing device 902. More specifically, in FIG. 7, the "distance to the field-of-view boundary" is information indicating the boundary of the field of view, and indicates the distance from the photographing device 902 to the boundary of its field of view (hereinafter referred to as the "field-of-view boundary distance"). The "ideal conditions" mean the condition that FIG. 7 is obtained by converting the information on each boundary shown in FIG. 8 described later (specifically, the "distance to the field-of-view boundary", the "shape represented by the reference area representation function", and the "out-of-field-of-view boundary" in FIG. 8) into the representation on the VLS plane. That is, FIG. 7 is a diagram showing an example of the target information in the embodiment and corresponds to the plan view shown in FIG. 8.
In FIG. 7, "without considering the field-of-view boundary" is an example of the result of displaying the boundary of an approximate shape such as a parallelogram on a graph whose horizontal axis is the line-of-sight angle and whose vertical axis is the distance. In FIG. 7, the "example of target information" is an example of the result of the target information acquisition process and an example of information indicating the direction of the target area. FIG. 7 is also a diagram showing the center of the target area in the range of line-of-sight angles from 120° to 260°.
In FIG. 7, "considering the field-of-view boundary" is an example of the result of displaying, on a graph whose horizontal axis is the line-of-sight angle and whose vertical axis is the distance, the reference area representation function calculated using an approximate shape such as a parallelogram and information on the field-of-view boundary. That is, "considering the field-of-view boundary" is an example of the result of coordinate-transforming the reference area representation function from the representation on the orthogonal coordinate system of the VLS plane to a representation on a polar coordinate system whose coordinate axes are the line-of-sight angle and the distance. Using the information on the field-of-view boundary specifically means using information indicating those boundaries of the target area indicated by the approximate shape that are not captured by the photographing device 902 (hereinafter referred to as "out-of-field-of-view boundaries").
Therefore, in FIG. 7, "considering the field-of-view boundary" shows the result of the error minimization process in which a function indicating the boundary of a shape in which a notch corresponding to the field of view of the photographing device 902 exists near the lower vertex of an approximate shape such as a parallelogram is used as the reference area representation function. The notch is an example of an out-of-field-of-view boundary. The shape in which a notch corresponding to the field of view of the photographing device 902 exists near the lower vertex of the parallelogram is, for example, the shape shown in the image 903.
FIG. 8 is a diagram showing an example of the out-of-field-of-view boundary in the embodiment. In FIG. 8, the position where the value on the horizontal axis is 0 and the value on the vertical axis is 0 is the position of the photographing device 902 on the VLS plane. FIG. 8 shows an example of the boundary of the field of view of the photographing device 902 on the VLS plane. FIG. 8 shows an example of the shape represented by the reference area representation function on the VLS plane: a parallelogram having a notch. The boundary represented by the dash-dot line in FIG. 8 is an example of an out-of-field-of-view boundary. Note that FIG. 8 represents the VLS plane. The horizontal axis of FIG. 8 represents the distance from the position of the photographing device 902 on the VLS plane, and the vertical axis represents the distance from the position of the photographing device 902 on the VLS plane in the direction orthogonal to the horizontal axis of FIG. 8.
Although a parallelogram was mentioned as an example of the approximate shape, the approximate shape is not limited to a parallelogram. That is, the shape represented by the reference area representation function is not limited to a parallelogram or a parallelogram having a notch. Other specific examples of the shape represented by the reference area representation function are described later for ease of explanation.
The extraction process has so far been described by taking as an example the case where the area is a single road with no branches. However, even when the road branches, information indicating the direction of the road can be acquired by the extraction process. That is, the extraction process described so far is also applicable when the road branches, and information indicating the direction of each branching road is acquired by the extraction process.
The process of acquiring the number of branches by the extraction process is, for example, the following process. First, for each K (K being an integer of 1 or more), an error minimization process is executed using a single function expressed using K reference area representation functions. Among the obtained results of the error minimization processes, the value of K giving the smallest error is acquired as the number of reference areas (that is, the number of branches). M in equation (7) indicates the number of parameters used for the estimation; M is the number of reference areas (K) multiplied by the number of parameters of the reference area representation function. The process of acquiring the value of K is an example of the target information acquisition process. When the extraction process yields a result of K being 2 or more, the position of the autonomous moving body 9 is the position of an intersection. In this way, information indicating whether or not the position of the autonomous moving body 9 is an intersection is acquired by the extraction process.
A specific example of the error used in the process of acquiring the number of branches by the extraction process is, for example, the Bayesian Information Criterion (BIC) expressed by equation (7) below.
Figure JPOXMLDOC01-appb-M000007
In equation (7), L represents the likelihood and N represents the number of samples at which the area boundary distance was observed. Therefore, N is, for example, the number of points indicating the area boundary distance information in FIG. 6. The likelihood is, for example, the SSE, which is the sum of the squared errors between the area boundary distances and the result of the optimization.
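A minimal Python sketch of selecting the number of branches K with an information criterion follows. How the K single-road functions are combined into one composite function is not reproduced in the text, so the pointwise maximum used below is only an assumption, and the Gaussian-error form N·ln(SSE/N) + M·ln(N) is used here as a stand-in for equation (7); the single-road model is assumed to be vectorized over the angles.

import numpy as np
from scipy.optimize import least_squares

def composite_model(x, thetas, single_model):
    """Pointwise maximum of K single-road models (an assumed way of combining branches)."""
    return np.max([single_model(x, t) for t in thetas], axis=0)

def select_branch_count(x, y, single_model, theta0, k_max=3):
    """Fit composite models with K = 1 .. k_max branches and return the K with
    the lowest information criterion (a stand-in for equation (7))."""
    n_params = len(theta0)
    best_k, best_bic = None, np.inf
    for k in range(1, k_max + 1):
        def residual(flat):
            thetas = flat.reshape(k, n_params)
            return composite_model(x, thetas, single_model) - y
        flat0 = np.tile(theta0, k) + 0.01 * np.random.randn(k * n_params)
        fit = least_squares(residual, flat0)
        sse = float(np.sum(fit.fun ** 2))
        m, n = k * n_params, len(x)
        bic = n * np.log(max(sse, 1e-12) / n) + m * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k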
When a single function expressed using a plurality of reference area representation functions (hereinafter referred to as a "composite function") is used in the extraction process, the reference area representation functions included in the composite function do not all have to be the same reference area representation function; at least one may differ from the other reference area representation functions.
FIG. 9 is a diagram showing an example of the result of the error minimization process when there is an intersection in the embodiment. The horizontal axis of FIG. 9 indicates the line-of-sight angle. The vertical axis of FIG. 9 represents distance, in units of distance. FIG. 9 shows the result of the error minimization process executed under the condition K = 1 and the result of the error minimization process executed under the condition K = 2. FIG. 9 also shows the results of actually surveying the road edges as survey data points. These results in FIG. 9 show that the result of the error minimization process under the condition K = 2 is closer to the actual situation than the result under the condition K = 1.
The segmentation result of FIG. 3 was for a road without an intersection, but segmentation is also applicable when there is an intersection.
FIG. 10 is a diagram showing an example of the result of segmentation at an intersection in the embodiment. FIG. 10 shows a road branching into two roads.
FIG. 11 is a diagram showing an example of the result of projecting the segmentation result at the intersection onto a bird's-eye view, together with the field-of-view boundary distance, in the embodiment. The upper diagram of FIG. 11 shows an example of the result of expressing the segmentation result shown in FIG. 10 on a bird's-eye view. The lower diagram of FIG. 11 shows the area boundary distance information obtained from the upper diagram of FIG. 11.
FIG. 12 is a diagram showing an example of the functional configuration of the autonomous moving body control device 1 of the embodiment. The autonomous moving body control device 1 includes a control unit 10 including a processor 91, such as a CPU (Central Processing Unit), and a memory 92 connected by a bus, and executes a program. By executing the program, the autonomous moving body control device 1 functions as a device including the control unit 10, an input unit 11, a communication unit 12, a storage unit 13, and an output unit 14. More specifically, the processor 91 reads the program stored in the storage unit 13 and stores the read program in the memory 92. By the processor 91 executing the program stored in the memory 92, the autonomous moving body control device 1 functions as a device including the control unit 10, the input unit 11, the communication unit 12, the storage unit 13, and the output unit 14.
The control unit 10 executes, for example, the extraction process. The control unit 10 controls, for example, the operation of the various functional units included in the autonomous moving body control device 1 and the operation of the autonomous moving body 9. The control unit 10, for example, controls the operation of the communication unit 12 and acquires the processing target image via the communication unit 12. The control unit 10 acquires the area boundary distance information based on, for example, the acquired processing target image. When the photographing device 902 is a device capable of acquiring the area boundary distance information, such as a 3D LiDAR, the control unit 10 may acquire the area boundary distance information instead of the processing target image.
The control unit 10 executes, for example, the extraction process. The control unit 10 controls the operation of the autonomous moving body 9 via, for example, the communication unit 12. The control unit 10 may acquire information indicating the position and orientation of the autonomous moving body 9 (hereinafter referred to as "traveling state information") via, for example, the communication unit 12. The control unit 10 may estimate the position and orientation of the autonomous moving body 9 based on, for example, the history of control of the operation of the autonomous moving body 9.
The input unit 11 includes input devices such as a mouse, a keyboard, and a touch panel. The input unit 11 may be configured as an interface that connects these input devices to the device itself. The input unit 11 receives the input of various kinds of information to the device.
The communication unit 12 includes a communication interface for connecting the device to external devices. The communication unit 12 communicates with the autonomous moving body 9 by wire or wirelessly. By communicating with the autonomous moving body 9, the communication unit 12 receives, for example, the traveling state information of the autonomous moving body 9. By communicating with the autonomous moving body 9, the communication unit 12 transmits a control signal for controlling the autonomous moving body 9 to the autonomous moving body 9.
The communication unit 12 communicates, by wire or wirelessly, with the transmission source of the processing target image, and acquires the processing target image through that communication. The transmission source of the processing target image may be the autonomous moving body 9 itself, or may be another device, such as a drone, that moves together with the autonomous moving body 9.
The storage unit 13 is configured using a non-transitory computer-readable storage medium device such as a magnetic hard disk device or a semiconductor storage device. The storage unit 13 stores various kinds of information about the autonomous moving body control device 1. The storage unit 13 stores, for example, the history of control of the autonomous moving body 9 by the control unit 10. The storage unit 13 stores, for example, the history of the traveling state information. The storage unit 13 stores the reference area information in advance. The storage unit 13 stores the distance image in advance.
The output unit 14 outputs various kinds of information. The output unit 14 includes a display device such as a CRT (Cathode Ray Tube) display, a liquid crystal display, or an organic EL (Electro-Luminescence) display. The output unit 14 may be configured as an interface that connects these display devices to the device. The output unit 14 outputs, for example, information input to the input unit 11 or the communication unit 12. The output unit 14 outputs, for example, the execution result of the extraction process by the control unit 10.
FIG. 13 is a diagram showing an example of the functional configuration of the control unit 10 in the embodiment. The control unit 10 includes a traveling state information acquisition unit 101, an area boundary distance information acquisition unit 102, a reference area information acquisition unit 103, a target information acquisition unit 104, and a control signal generation unit 105.
The traveling state information acquisition unit 101 acquires the traveling state information of the autonomous moving body 9. The traveling state information acquisition unit 101 may acquire the traveling state information by calculating it from the history of control of the operation of the autonomous moving body 9, or may acquire it from the autonomous moving body 9 via the communication unit 12.
The area boundary distance information acquisition unit 102 acquires the area boundary distance information. When the photographing device 902 is a device capable of acquiring the area boundary distance information, such as a 3D LiDAR, the area boundary distance information acquisition unit 102 acquires the area boundary distance information from the transmission source of the area boundary distance information, such as the photographing device 902, via the communication unit 12. When the photographing device 902 is a device that acquires a processing target image, such as a monocular camera, the area boundary distance information acquisition unit 102 acquires the processing target image via the communication unit 12 and acquires the area boundary distance information by executing VLS processing on the acquired processing target image.
The reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area representation functions stored in the storage unit 13. The target information acquisition unit 104 executes the extraction process and acquires the target information.
The control signal generation unit 105 generates a control signal for controlling the operation of the autonomous moving body 9 based on the target information. The control signal generation unit 105 transmits the generated control signal to the autonomous moving body 9 via the communication unit 12.
FIG. 14 is a diagram showing an example of the flow of processing executed by the autonomous moving body control device 1 of the embodiment. The processing of FIG. 14 is repeatedly executed at a predetermined timing.
The traveling state information acquisition unit 101 acquires the traveling state information (step S101). Next, the area boundary distance information acquisition unit 102 acquires the area boundary distance information (step S102). The area boundary distance information acquisition unit 102 may acquire the area boundary distance information from its transmission source via the communication unit 12, or may acquire the processing target image via the communication unit 12 and acquire the area boundary distance information by executing VLS processing on the acquired processing target image.
Next, the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area representation functions stored in the storage unit 13 (step S103). Next, the target information acquisition unit 104 executes the extraction process using the reference area information and the area boundary distance information and acquires the target information (step S104). Next, the control signal generation unit 105 generates a control signal for controlling the operation of the autonomous moving body 9 based on the target information, and controls the operation of the autonomous moving body 9 with the generated control signal (step S105). A sketch of this cycle is given after the note below.
Note that the processing of step S101 may be executed at any timing before the execution of step S105. Steps S102 and S103 do not necessarily have to be executed in this order, and may be executed in any order as long as they are executed before step S104.
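The flow of FIG. 14 can be summarized in a short Python sketch. The method names correspond to the units described above but are placeholders (assumptions), not an API defined by the embodiment.

def control_cycle(device):
    """One cycle of the processing of FIG. 14, repeated at a predetermined timing."""
    state = device.acquire_traveling_state()                          # step S101
    boundary = device.acquire_area_boundary_distance()                # step S102
    reference = device.read_reference_area_functions()                # step S103
    target = device.extract_target_information(reference, boundary)   # step S104
    signal = device.generate_control_signal(target, state)            # step S105
    device.send_to_autonomous_moving_body(signal)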
In this way, the autonomous moving body control device 1 executes the step of acquiring the area boundary distance information. The autonomous moving body control device 1 also executes the step of acquiring, based on the reference area information and the area boundary distance information, the target information, which is information indicating the relationship between the autonomous moving body 9 and the target area.
FIG. 15 is a flowchart showing an example of the flow of the process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information in the embodiment. More specifically, FIG. 15 is a flowchart showing an example of the flow of the process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information when the photographing device 902 is a monocular camera.
The area boundary distance information acquisition unit 102 acquires the processing target image via the communication unit 12 (step S201). Next, the area boundary distance information acquisition unit 102 executes the area division process (step S202). Next, the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S203). Next, the area boundary distance information acquisition unit 102 executes the distance association process (step S204). Next, the area boundary distance information acquisition unit 102 executes the in-virtual-space distance measurement process (step S205). Steps S203 and S204 may be executed at any timing as long as they are executed after step S202 and before step S205. Therefore, for example, step S204 may be executed after step S202, followed by step S203. Steps S203 and S204 may also be executed at the same timing.
When the area boundary distance information is input from an external device via the communication unit 12, the process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information is a process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information input to the communication unit 12.
(Description of an example of the relationship between the photographing device 902 and the horizontal plane)
Here, an example of the relationship between the photographing device 902 and the horizontal plane is described.
FIG. 16 is an explanatory diagram illustrating an example of the relationship among the moving body main body 905 of the autonomous moving body 9, the photographing device 902, and the horizontal plane in the embodiment. The moving body main body 905 includes wheels for moving the autonomous moving body 9, a movable part, and a control unit that controls the movement. The autonomous moving body 9 includes the moving body main body 905 and the photographing device 902. The photographing device 902 is located on the upper part of the moving body main body 905, at a height h above the horizontal plane on which the moving body main body 905 is located. In FIG. 16, the tilt angle means the angle between the vertically downward direction and the optical axis of the camera (the angle with respect to the horizontal plane). In FIG. 16, the dashed lines indicate the edges of the field of view.
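For reference, the following is a minimal Python sketch of how a ground-plane distance image of the kind described earlier can be computed from the camera's internal parameters and the geometry of FIG. 16, that is, a height h above the horizontal plane and a tilted optical axis. The pinhole model, the convention that the tilt is measured downward from the horizontal, and all names are assumptions.

import numpy as np

def ground_distance_image(h, tilt, fx, fy, cx, cy, width, height):
    """Distance from the camera to the ground point seen by each pixel, assuming
    a camera at height h above a horizontal plane whose optical axis is tilted
    down from the horizontal by `tilt` radians (pinhole model with focal lengths
    fx, fy and principal point cx, cy). Pixels whose ray does not hit the
    ground are set to NaN."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # Ray direction in camera coordinates (x right, y down, z forward).
    x = (u - cx) / fx
    y = (v - cy) / fy
    z = np.ones_like(x)
    # Rotate about the camera's x axis so that "down" is the true vertical.
    y_w = y * np.cos(tilt) + z * np.sin(tilt)    # downward component
    z_w = -y * np.sin(tilt) + z * np.cos(tilt)   # horizontal forward component
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(y_w > 1e-9, h / y_w, np.nan)          # ray parameter at the ground
    return t * np.sqrt(x ** 2 + y_w ** 2 + z_w ** 2)       # metric distance along the ray

# Example: a camera 1.2 m above the ground, tilted down by 15 degrees.
dist_img = ground_distance_image(1.2, np.radians(15), 500.0, 500.0, 320.0, 240.0, 640, 480)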
(Specific examples of the reference area representation function)
Hereinafter, specific examples of the expression representing the distance from the origin to the boundary on the VLS plane (that is, the reference area representation function) are described by taking the case where the photographing device 902 is a monocular camera as an example. In the following, "camera" therefore means the photographing device 902. The specific examples of the reference area representation function are described below by taking as an example the case where the target area is a road and the photographing device 902 is a monocular camera.
First, the parameters on the VLS plane generated based on the processing target image are described. These parameters are examples of the parameters. The parameters appearing hereinafter in the description of the specific examples of the reference area representation function are likewise examples of the parameters.
FIG. 17 is a diagram showing an example of the parameters used in the expressions representing the distance in the embodiment. In FIG. 17, the y axis represents the front direction of the camera. The x axis is the direction perpendicular to the y axis within the VLS plane. The shape of the boundary of the camera's field of view on the VLS plane is a trapezoid whose base is the position of the camera and whose height is the distance m (map_height) reflected onto the VLS plane from the processing target image. The inclinations of the left and right sides are set by the internal parameters of the camera. The center of the Virtual Lidar is located on the VLS plane at the position at distance y0 from the origin in the y direction and at distance x0 from the origin in the x direction. The Virtual Lidar is the source of the Virtual Lidar signals on the VLS plane.
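A minimal Python sketch of such a trapezoidal field-of-view boundary follows: the distance from the camera position to the boundary of a trapezoid whose base passes through the camera and whose height is map_height. Here the slope of the sides is taken directly as a given half field-of-view angle rather than derived from the camera parameters, so this is an assumption rather than the embodiment's VLS_map.

import math

def fov_boundary_distance(chi, map_height, half_fov, w0=0.0):
    """Distance from the camera position (the origin, on the base of the
    trapezoid) to the trapezoidal field-of-view boundary along the
    line-of-sight angle chi, measured from the +y axis (the camera's front
    direction). w0 is the half-width of the base; half_fov sets the slope of
    the left and right sides."""
    dx, dy = math.sin(chi), math.cos(chi)
    s = math.tan(half_fov)
    candidates = []
    if dy > 1e-9:
        candidates.append(map_height / dy)       # far edge: y = map_height
    for side in (+1.0, -1.0):                    # right (+1) and left (-1) sides
        denom = side * dx - s * dy               # side edge: side * x = w0 + s * y
        if denom > 1e-9:
            candidates.append(w0 / denom)
    if dy < -1e-9:
        candidates.append(0.0)                   # the origin lies on the base line y = 0
    return min(candidates) if candidates else 0.0

# Example: a 10 m map height and a 60-degree half field of view, looking 20 degrees off-axis.
d = fov_boundary_distance(math.radians(20), 10.0, math.radians(60))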
In the following, for the case where the center of the Virtual Lidar lies on the y axis (that is, the case where x0 = 0), specific examples of the expression representing the distance from the center of the Virtual Lidar to the boundary on the VLS plane are described.
(First specific example)
The first specific example is a specific example of an expression representing the distance from the center of the Virtual Lidar to the boundary on the VLS plane when the shape of the road is a straight line (hereinafter referred to as the "first distance expression"). The first distance expression for the case where the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference area representation function.
The shape of the road is formulated using the parameters: the inclination θ_angle of the road, the length θ_length of the road, the left width θ_l.width of the road, and the right width θ_r.width of the road. That is, the shape of the road is expressed by equation (3) described above.
FIG. 18 is an explanatory diagram illustrating the parameters used to formulate the shape of a straight road in the embodiment. The length of the road is the distance from the camera center position (camera position) to the end of the road in the y-axis direction. For the road width, separate parameters are set for the road width to the right and to the left of the Virtual Lidar, so that the position of the autonomous moving body 9 on the road can be estimated.
The shape of a straight road is expressed, for example, by the following equation (8).
Figure JPOXMLDOC01-appb-M000008
The first distance expression is formulated using equation (8). The idea behind the derivation is as follows. A scene is assumed in which a signal is emitted from the center of the Virtual Lidar toward the edge of the road, and the process by which the emitted signal reaches the road edge line is then considered; in this way, the distance from the center of the Virtual Lidar to the intersection of the signal and the road edge line is formulated.
FIG. 19 is a diagram showing an example of how a signal emitted from the center of the Virtual Lidar propagates when the shape of the road is a straight line in the embodiment. FIG. 19 shows an example of the order in which the measurement with the Virtual Lidar signals is performed. Specifically, the measurement is performed clockwise over 360 degrees at equal intervals, with the −y axis direction (that is, the negative direction of the y axis) taken as 0 degrees. The interval is arbitrary. The points P1, P2, P3, and P4 in FIG. 19 indicate the vertices of the approximate shape.
The set of the following equations (9) to (20) is an example of the first distance expression. The angles th1, th2, th3, and th4 are the angles formed between the −y axis and the lines connecting the points P1, P2, P3, and P4 in FIG. 19, respectively, with the center of the Virtual Lidar. The units of the angles th and angle are radians. A sketch of this construction is given after the equations below.
Figure JPOXMLDOC01-appb-M000009
Figure JPOXMLDOC01-appb-M000010
Figure JPOXMLDOC01-appb-M000011
Figure JPOXMLDOC01-appb-M000012
Figure JPOXMLDOC01-appb-M000013
Figure JPOXMLDOC01-appb-M000014
Figure JPOXMLDOC01-appb-M000015
Figure JPOXMLDOC01-appb-M000016
Figure JPOXMLDOC01-appb-M000017
Figure JPOXMLDOC01-appb-M000018
Figure JPOXMLDOC01-appb-M000019
Figure JPOXMLDOC01-appb-M000020
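 Equations (9) to (20) above are provided only as images. To make the geometric idea concrete, the following is a minimal Python sketch, under assumed conventions (origin at the Virtual Lidar center, ray angle measured clockwise from the -y axis, the road modelled as an axis-aligned rectangle with an assumed extent `back` behind the sensor), of how the distance from the Virtual Lidar center to the boundary of a straight road can be computed per ray. It is an illustrative reconstruction, not the patent's exact first distance formula.

```python
import numpy as np

def straight_road_distance(th, left_width, right_width, length, back=1.0):
    """Distance from the Virtual Lidar center (placed at the origin) to the
    boundary of a straight road modelled as the axis-aligned rectangle
    -left_width <= x <= right_width, -back <= y <= length.
    th is the ray angle in radians, measured clockwise from the -y axis
    (assumed convention); `back` is an assumption added so that the origin
    lies inside the rectangle.
    """
    d = np.array([-np.sin(th), -np.cos(th)])   # unit ray direction for this convention
    hits = []
    # Intersect the ray with each of the four boundary lines (slab method).
    for axis, bound in ((0, right_width), (0, -left_width), (1, length), (1, -back)):
        if abs(d[axis]) > 1e-12:
            t = bound / d[axis]
            if t > 0:
                p = t * d
                other = 1 - axis
                lo, hi = (-left_width, right_width) if other == 0 else (-back, length)
                if lo - 1e-9 <= p[other] <= hi + 1e-9:
                    hits.append(t)
    return min(hits) if hits else np.inf

# Full 360-degree sweep at 1-degree resolution, as in FIG. 19 (the interval is arbitrary).
angles = np.deg2rad(np.arange(0.0, 360.0, 1.0))
profile = np.array([straight_road_distance(a, 1.0, 1.2, 8.0) for a in angles])
```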
(Second specific example)
 The second specific example is a specific example of the formula (hereinafter referred to as the "second distance formula") expressing the distance from the center of the Virtual Lidar to the boundary in the VLS plane when the road is a curve. The second distance formula with the center of the Virtual Lidar located at the origin of the VLS plane is an example of the reference area expression function.
 The shape of the road is formulated using the parameters: the slope θangle of the straight section leading to the curve, the road length θD1 to the entrance of the curve, the distance θl.width to the left edge of the road, the distance θr.width to the right edge of the road, and the road width θwidth2 beyond the curve. The values of these parameters are estimated in the extraction process. The curve is formulated as an elliptical shape: the horizontal width of the ellipse is formulated using the road width θwidth before the curve, and the vertical width of the ellipse is formulated using the road width θwidth2 beyond the curve. That is, the shape of the road is expressed by the following equation (21).
Figure JPOXMLDOC01-appb-M000021
 FIG. 20 is an explanatory diagram illustrating the parameters used to formulate the shape of a curved road in the embodiment. In the extraction process, when the model representing the curve has a slope (angle), a process of rotating the entire shape about the center of the Virtual Lidar as the origin may be executed.
 Examples of formulas expressing curves are the following equations (22) and (23). Equation (22) represents a right curve, and equation (23) represents a left curve.
Figure JPOXMLDOC01-appb-M000022
Figure JPOXMLDOC01-appb-M000023
 Using these formulas, the distance from the center of the Virtual Lidar to the edge of the road is formulated.
 FIG. 21 is a first diagram showing an example of the auxiliary points used to formulate the second distance formula in the embodiment. FIG. 21 shows the shape of a left curve. Points P1, P2, and P3 in FIG. 21 are auxiliary points used to formulate the second distance formula. Angles th1, th2, and th3 represent the angles formed by the -y axis and the lines connecting the center of the Virtual Lidar with points P1 to P3, respectively.
 FIG. 22 is a second diagram showing an example of the auxiliary points used to formulate the second distance formula in the embodiment. FIG. 22 shows the shape of a right curve. Points P1, P2, and P3 in FIG. 22 are auxiliary points used to formulate the second distance formula. Angles th1, th2, and th3 represent the angles formed by the -y axis and the lines connecting the center of the Virtual Lidar with points P1 to P3, respectively.
 In the formulation of the second distance formula, the model (the set of formulas representing the shape) is switched before and after the angles th1, th2, and th3, and the curve is expressed using formulas representing three shapes: a straight line, an ellipse, and a straight line orthogonal to the first straight line.
 The set of the following equations (24) to (40) is an example of the second distance formula. The units of the angles th and angle are radians.
Figure JPOXMLDOC01-appb-M000024
Figure JPOXMLDOC01-appb-M000025
Figure JPOXMLDOC01-appb-M000026
Figure JPOXMLDOC01-appb-M000027
Figure JPOXMLDOC01-appb-M000028
Figure JPOXMLDOC01-appb-M000029
Figure JPOXMLDOC01-appb-M000030
Figure JPOXMLDOC01-appb-M000031
Figure JPOXMLDOC01-appb-M000032
Figure JPOXMLDOC01-appb-M000033
Figure JPOXMLDOC01-appb-M000034
Figure JPOXMLDOC01-appb-M000035
Figure JPOXMLDOC01-appb-M000036
Figure JPOXMLDOC01-appb-M000037
Figure JPOXMLDOC01-appb-M000038
Figure JPOXMLDOC01-appb-M000039
Figure JPOXMLDOC01-appb-M000040
 Equation (36) holds when the road beyond the curve is orthogonal to the first road. Further, with the center of the ellipse denoted by (xc, yc) and the intersection of the Virtual Lidar signal with the ellipse defined as (x_e, y_e), x_eth and y_eth are obtained from the simultaneous equations of equation (37). As a result, the value on the left-hand side of equation (38) is obtained. Equations (39) and (40) represent the operations executed in the extraction process when angle ≠ 0; more specifically, they represent the operations executed after all of VLScurve.th and VLSsecond_road.th have been calculated.
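 Equations (22) to (40) are likewise provided only as images. The following sketch illustrates, under the same assumed ray convention as in the earlier sketch, the step corresponding to equation (37): solving the simultaneous equations of the Virtual Lidar ray and the ellipse to obtain the intersection and hence the distance. The function name and parameterization are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def ray_ellipse_distance(th, xc, yc, a, b):
    """Smallest positive distance along the ray at angle th (measured clockwise
    from the -y axis, assumed convention) from the origin to the ellipse
    ((x - xc)/a)**2 + ((y - yc)/b)**2 = 1, obtained by solving the simultaneous
    equations of the ray and the ellipse (the role played by equation (37)).
    Returns numpy.inf when the ray does not hit the ellipse.
    """
    dx, dy = -np.sin(th), -np.cos(th)
    # Substituting x = t*dx, y = t*dy into the ellipse equation gives A*t**2 + B*t + C = 0.
    A = (dx / a) ** 2 + (dy / b) ** 2
    B = -2.0 * (dx * xc / a ** 2 + dy * yc / b ** 2)
    C = (xc / a) ** 2 + (yc / b) ** 2 - 1.0
    disc = B ** 2 - 4.0 * A * C
    if disc < 0.0:
        return np.inf
    roots = ((-B - np.sqrt(disc)) / (2.0 * A), (-B + np.sqrt(disc)) / (2.0 * A))
    hits = [t for t in roots if t > 0.0]
    return min(hits) if hits else np.inf
```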
(Third specific example)
 The third specific example is a specific example of the formula (hereinafter referred to as the "third distance formula") expressing the distance from the center of the Virtual Lidar to the boundary in the VLS plane when the road is an intersection. The third distance formula with the center of the Virtual Lidar located at the origin of the VLS plane is an example of the reference area expression function.
 Here, the formulation of three road shapes is described: a road with a branch to the right (a right ト-shape), a road with a branch to the left (a left ト-shape), and a T-junction. The shape of a branching road is expressed by a formula whose parameters are those used to express a curve plus the distance θD1 to the intersection. Therefore, the shape of a branching road is expressed by the following equation (41).
Figure JPOXMLDOC01-appb-M000041
 FIG. 23 is a diagram showing an example of the shape of a branching road in the embodiment. FIG. 23 shows a road that runs from the bottom of the screen toward the top (that is, in the positive y-axis direction), has an intersection, and branches into a road going to the left and a road going upward.
 The following equation (42) represents the shape of a right-branching road.
Figure JPOXMLDOC01-appb-M000042
 The following equation (43) represents the shape of a left-branching road.
Figure JPOXMLDOC01-appb-M000043
 The following equation (44) represents the shape of a T-junction.
Figure JPOXMLDOC01-appb-M000044
 Equation (44) for the T-junction is also derived with the distance to the intersection taken as θD1, in the same manner as the equations for the branching roads.
 Using these formulas, the distance from the center of the Virtual Lidar to the edge of the road is formulated.
 FIG. 24 is a diagram showing an example of the auxiliary points used to formulate the third distance formula in the embodiment. FIG. 24 shows the shape of a left-branching road. Points P1, P2, and P3 in FIG. 24 are auxiliary points used to formulate the third distance formula. Angles th1, th2, and th3 represent the angles formed by the -y axis and the lines connecting the center of the Virtual Lidar with points P1 to P3, respectively.
 The formulas are switched before and after the angles th1, th2, th3, and th4, and the formula representing the shape of the intersection is expressed using two formulas: one representing a straight line and one representing a straight line orthogonal to the first straight line.
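 This switching of formulas at the auxiliary angles can be pictured as a simple piecewise dispatch over the ray angle, as in the sketch below. The sketch is only meant to illustrate the dispatch; the actual switching pattern of equations (45) to (48) depends on the road type and is not reproduced here, and line_model and perp_model are hypothetical callables.

```python
def piecewise_distance(th, th1, th2, th3, line_model, perp_model):
    """Illustration of switching formulas at the auxiliary angles: for a ray at
    angle th, the distance is taken from either a model of the first straight
    road or a model of the straight line orthogonal to it, depending on where
    th lies relative to th1 < th2 < th3.  line_model and perp_model are
    hypothetical callables th -> distance; the actual switching pattern of
    equations (45) to (48) depends on the road type and is not reproduced here.
    """
    if th < th1 or th >= th3:
        return line_model(th)   # boundary of the first (through) road
    if th < th2:
        return perp_model(th)   # boundary belonging to the branching road
    return line_model(th)
```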
 The following equation (45) is an example of the third distance formula at an intersection.
Figure JPOXMLDOC01-appb-M000045
 The following equation (46) is an example of the third distance formula for a right-branching road.
Figure JPOXMLDOC01-appb-M000046
 The following equation (47) is an example of the third distance formula for a left-branching road.
Figure JPOXMLDOC01-appb-M000047
 The following equation (48) is an example of the third distance formula at a T-junction.
Figure JPOXMLDOC01-appb-M000048
 The formula representing the straight line orthogonal to the first straight line is given by the following equation (49).
Figure JPOXMLDOC01-appb-M000049
 As in the case where the road is a curve, when angle ≠ 0, the operation of the following equation (50) is executed in the extraction process. More specifically, when angle ≠ 0, the operation represented by the following equation (50) is executed in the extraction process after all of VLSinsec.th and VLSsecond_road.th have been calculated.
Figure JPOXMLDOC01-appb-M000050
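 Equations (39), (40), and (50) are given only as images; they apply a rotation after all distances have been calculated when angle ≠ 0. Since the distance values form a polar profile sampled at equal angular intervals around the Virtual Lidar center, one way to realise such a rotation is to shift the profile along the angle axis, as in the following sketch (the sign convention of the shift is an assumption).

```python
import numpy as np

def rotate_profile(distances, angle_rad):
    """Rotate a polar distance profile (equally spaced over 360 degrees,
    starting at the -y axis) about the Virtual Lidar center by angle_rad.
    Rotating the shape about the origin is equivalent to shifting the profile
    along the angle axis; the sign of the shift depends on the angle
    convention and is an assumption here.
    """
    distances = np.asarray(distances, dtype=float)
    n = len(distances)
    step = 2.0 * np.pi / n
    shift = angle_rad / step                 # fractional number of angular bins
    idx = (np.arange(n) - shift) % n         # where to sample the unrotated profile
    lo = np.floor(idx).astype(int)
    hi = (lo + 1) % n
    frac = idx - lo
    return (1.0 - frac) * distances[lo] + frac * distances[hi]
```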
 The results of classifying straight roads and curves will now be described. Specifically, the classification of straight roads and curves is the process of classifying the road corresponding to the observation results of the Virtual Lidar, using the formulas for a straight road, a right curve, and a left curve. More specifically, it is the process of determining whether the target road is a straight road or a curve. FIGS. 25 and 27 show examples of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane. FIGS. 26 and 28 show the results estimated with the formulas for a straight road, a right curve, and a left curve.
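 This classification can be viewed as fitting each candidate formula to the observed distance profile and selecting the candidate with the smallest residual. The following sketch uses scipy.optimize.least_squares as an illustrative optimiser (the embodiment does not prescribe a particular one); the model functions and initial parameters are placeholders supplied by the caller.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_model(model, p0, angles, observed):
    """Fit one reference area expression function model(angles, params) to the
    observed Virtual Lidar distance profile by minimising the residual (the
    error minimization process).  Returns the optimised parameters and the
    residual norm.  model, p0 and the optimiser are illustrative choices."""
    finite = np.isfinite(observed)

    def residual(params):
        return model(angles[finite], params) - observed[finite]

    result = least_squares(residual, p0)
    return result.x, float(np.linalg.norm(result.fun))

def classify_road(candidates, angles, observed):
    """candidates maps a label ('straight', 'Right curve', ...) to a
    (model, initial_params) pair.  The label whose fitted model leaves the
    smallest residual is taken as the road type, as in FIGS. 26 and 28."""
    scores = {}
    for label, (model, p0) in candidates.items():
        params, err = fit_model(model, p0, angles, observed)
        scores[label] = (err, params)
    best = min(scores, key=lambda k: scores[k][0])
    return best, scores
```

 A call such as classify_road({"straight": (straight_model, p0s), "Right curve": (right_model, p0r), "Left curve": (left_model, p0l)}, angles, observed) then corresponds to the comparison shown in FIG. 26; the model names here are hypothetical placeholders.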
 FIG. 25 is a first explanatory diagram illustrating an example of the classification results in the embodiment. FIG. 25 shows that the road is a straight road.
 FIG. 26 is a second explanatory diagram illustrating an example of the classification results in the embodiment. More specifically, FIG. 26 shows the classification results for the road shown in FIG. 25. In FIG. 26, "straight" indicates the estimation result obtained with the straight-road formula, "Right curve" indicates the estimation result obtained with the right-curve formula, and "Left curve" indicates the estimation result obtained with the left-curve formula. FIG. 26 shows that the straight-road formula is selected and that the degree of agreement between the estimation result and the observation result is high when the straight-road formula is used. Therefore, together with the result of FIG. 25, FIG. 26 shows that the shape of the road was estimated with high accuracy.
 FIG. 27 is a third explanatory diagram illustrating an example of the classification results in the embodiment. FIG. 27 shows that the road is a right curve.
 FIG. 28 is a fourth explanatory diagram illustrating an example of the classification results in the embodiment. More specifically, FIG. 28 shows the classification results for the road shown in FIG. 27. In FIG. 28, "straight", "Right curve", and "Left curve" indicate the estimation results obtained with the straight-road, right-curve, and left-curve formulas, respectively. FIG. 28 shows that the right-curve formula is selected and that the degree of agreement between the estimation result and the observation result is high when the right-curve formula is used. Therefore, together with the result of FIG. 27, FIG. 28 shows that the shape of the road was estimated with high accuracy.
 Next, the results of classifying intersections will be described. Specifically, the classification of intersections is the process of classifying the road corresponding to the observation results of the Virtual Lidar, using the formulas for a straight road, a right-branching road, a left-branching road, and a T-junction. More specifically, it is the process of determining whether the target road is a straight road, a right-branching road, a left-branching road, or a T-junction. FIGS. 29, 31, and 33 show examples of the bird's-eye view image and the distance from the center of the Virtual Lidar to the boundary in the VLS plane. FIGS. 30, 32, and 34 show the results estimated with the formulas for a straight road, a right-branching road, a left-branching road, and a T-junction.
 FIG. 29 is a fifth explanatory diagram illustrating an example of the classification results in the embodiment. FIG. 29 shows that the road has a right-branching shape.
 FIG. 30 is a seventh explanatory diagram illustrating an example of the classification results in the embodiment. More specifically, FIG. 30 shows the classification results for the road shown in FIG. 29. In FIG. 30, "straight" indicates the estimation result obtained with the straight-road formula, "Left insec" indicates the estimation result obtained with the left-branching road formula, "T insec" indicates the estimation result obtained with the T-junction formula, and "Right insec" indicates the estimation result obtained with the right-branching road formula. FIG. 30 shows that the right-branching road formula is selected and that the degree of agreement between the estimation result and the observation result is high when the right-branching road formula is used. Therefore, together with the result of FIG. 29, FIG. 30 shows that the shape of the road was estimated with high accuracy.
 FIG. 31 is an eighth explanatory diagram illustrating an example of the classification results in the embodiment. FIG. 31 shows that the road has a left-branching shape.
 FIG. 32 is a ninth explanatory diagram illustrating an example of the classification results in the embodiment. More specifically, FIG. 32 shows the classification results for the road shown in FIG. 31. In FIG. 32, "straight", "Left insec", "T insec", and "Right insec" indicate the estimation results obtained with the straight-road, left-branching road, T-junction, and right-branching road formulas, respectively. FIG. 32 shows that the left-branching road formula is selected and that the degree of agreement between the estimation result and the observation result is high when the left-branching road formula is used. Therefore, together with the result of FIG. 31, FIG. 32 shows that the shape of the road was estimated with high accuracy.
 FIG. 33 is a tenth explanatory diagram illustrating an example of the classification results in the embodiment. FIG. 33 shows that the road has the shape of a T-junction.
 FIG. 34 is an eleventh explanatory diagram illustrating an example of the classification results in the embodiment. More specifically, FIG. 34 shows the classification results for the road shown in FIG. 33. In FIG. 34, "straight", "Left insec", "T insec", and "Right insec" indicate the estimation results obtained with the straight-road, left-branching road, T-junction, and right-branching road formulas, respectively. FIG. 34 shows that the T-junction formula is selected and that the degree of agreement between the estimation result and the observation result is high when the T-junction formula is used. Therefore, together with the result of FIG. 33, FIG. 34 shows that the shape of the road was estimated with high accuracy.
 The horizontal axes of graph G1 in FIG. 26, graph G3 in FIG. 28, graph G5 in FIG. 30, graph G7 in FIG. 32, and graph G9 in FIG. 34 represent the line-of-sight angle, and their vertical axes represent the distance. The horizontal axes of graph G2 in FIG. 26, graph G4 in FIG. 28, graph G6 in FIG. 30, graph G8 in FIG. 32, and graph G10 in FIG. 34 represent the x-axis coordinate value in the VLS plane, and their vertical axes represent the y-axis coordinate value in the VLS plane.
 The autonomous moving body control device 1 of the embodiment configured in this way determines, based on the area boundary distance information, the condition that minimizes the error with respect to the reference area information, and acquires the target information from the determined condition. Therefore, the autonomous moving body control device 1 configured in this way can improve the accuracy of the movement of the autonomous moving body 9.
(Modification)
 If part of the subject captured by the photographing device 902 is a dynamic obstacle, the road region may not be grasped correctly. In such a case, the region may be estimated by performing a fitting that ignores dynamic obstacles (that is, the error minimization process).
 FIG. 35 is a diagram showing an example of the result of the autonomous moving body control device 1 executing a fitting that ignores dynamic obstacles (specifically, the error minimization process) when part of the subject in the modification is a dynamic obstacle. The horizontal axis of FIG. 35 represents the line-of-sight angle, and the vertical axis represents the distance. "deleted data" in FIG. 35 is an example of the Virtual Lidar measurement results for a dynamic obstacle. "true data" in FIG. 35 is data for subjects that are not dynamic obstacles, that is, data that, unlike "deleted data", is not ignored in the error minimization process. "estimated curve" in FIG. 35 is an example of the result of performing the fitting (that is, the error minimization process) using only the "true data" results while ignoring the dynamic obstacle. FIG. 35 shows that the region is appropriately estimated by the autonomous moving body control device 1 even when the dynamic obstacle is ignored.
 As shown in the description of the third specific example and elsewhere, when the road is an intersection, reference area information using a plurality of reference area expression functions is used.
 An example of the flow of processing executed by the autonomous moving body control device 1 when part of the subject is a dynamic obstacle will be described with reference to FIG. 36.
 FIG. 36 is a flowchart showing an example of the flow of processing executed by the autonomous moving body control device 1 when part of the subject in the modification is a dynamic obstacle.
 The progress status information acquisition unit acquires progress status information (step S301). Next, the area boundary distance information acquisition unit 102 acquires the image to be processed (step S302). Next, the area boundary distance information acquisition unit 102 reads from the storage unit 13 a segmentation model, which is a trained model recorded in advance in the storage unit 13 and which determines to which of the predetermined categories each pixel belongs (step S303). The predetermined categories include at least dynamic obstacles. An example of generating the segmentation model is described with reference to FIG. 37.
 The area boundary distance information acquisition unit 102 acquires the pixel values of the pixels centered on a target pixel, which is a pixel of the image to be processed selected according to a predetermined rule (step S304). Next, the area boundary distance information acquisition unit 102 determines, using the segmentation model, the category to which the target pixel belongs (step S305). Next, the category to which the target pixel belongs is recorded in the storage unit 13 (step S306). If it is determined in step S305 that the category to which the target pixel belongs is a dynamic obstacle, a dynamic obstacle is recorded in the storage unit 13 as the category to which the target pixel belongs. If a category other than a dynamic obstacle (hereinafter referred to as "category A") is determined in step S305 as the category to which the target pixel belongs, category A is recorded in the storage unit 13 as the category to which the target pixel belongs.
 After step S306, the area boundary distance information acquisition unit 102 determines whether the category has been determined for all pixels (step S307). If there is a pixel for which the category has not yet been determined (step S307: NO), the area boundary distance information acquisition unit 102 selects the next target pixel according to a predetermined rule (step S308). The next target pixel is, for example, the pixel adjacent to the current target pixel. After step S308, the process returns to step S304.
 On the other hand, when the category has been determined for all pixels (step S307: YES), the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S309). Next, the area boundary distance information acquisition unit 102 acquires the values of the pixels of the image to be processed other than the pixels whose category was determined in step S305 to be a dynamic obstacle (step S310).
 Next, the area boundary distance information acquisition unit 102 executes the distance mapping process using the values acquired in step S310 (step S311). Therefore, in the process of step S311, the values of the pixels whose category was determined in step S305 to be a dynamic obstacle are not used.
 Next, the area boundary distance information acquisition unit 102 executes the virtual-space distance measurement process using the result of step S311 (step S312). By executing the process of step S312, the area boundary distance information is obtained.
 In this way, by executing the processes of steps S302 to S312, the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information on dynamic obstacles. Deleting the information on dynamic obstacles means not using the values of the pixels determined to belong to a dynamic obstacle, and specifically refers to the process of step S310.
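 A minimal sketch of this masking step is shown below. The category id DYNAMIC_OBSTACLE and the whole-image call to the segmentation model are assumptions made for illustration; the flowchart itself describes an equivalent pixel-by-pixel loop.

```python
import numpy as np

DYNAMIC_OBSTACLE = 0   # hypothetical category id for "dynamic obstacle"

def categorize_pixels(image, segmentation_model):
    """Steps S304 to S307 in outline: decide, for every pixel of the image to
    be processed, which predetermined category it belongs to.  Here the
    segmentation model is assumed to map the whole image to an (H, W) array of
    category ids in one call; the flowchart's pixel-by-pixel loop is an
    equivalent formulation."""
    return segmentation_model(image)

def pixels_for_distance_mapping(image, category_map):
    """Step S310 in outline: the pixel values handed to the distance mapping
    process exclude every pixel classified as a dynamic obstacle."""
    usable = category_map != DYNAMIC_OBSTACLE   # True where the pixel may be used
    rows, cols = np.nonzero(usable)
    return rows, cols, image[rows, cols]
```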
 After step S312, the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13 (step S313). More specifically, the reference area information acquisition unit 103 reads one or more reference area expression functions stored in the storage unit 13. Next, the target information acquisition unit 104 executes the error minimization process using the reference area information and the area boundary distance information acquired in step S312 (step S314). The error minimization process is the process of determining the condition that minimizes the error, which is the difference between the reference area information obtained in step S313 and the area boundary distance information obtained in step S312.
 Next, the target information acquisition unit 104 executes the target information acquisition process (step S315). Next, the control signal generation unit 105 generates, based on the target information, a control signal for controlling the operation of the autonomous moving body 9, and controls the operation of the autonomous moving body 9 with the generated control signal (step S316).
 The error minimization process executed in step S314 is the fitting that ignores dynamic obstacles described with reference to FIG. 35. A fitting that ignores dynamic obstacles means a fitting that does not use, of the area boundary distance information, the data indicating the distance to a dynamic obstacle.
 As FIG. 36 shows, the target information acquisition unit 104 acquires the target information by executing the error minimization process, which determines the condition minimizing the error, that is, the difference between the reference area information and the area boundary distance information. The reference area information is used in the error minimization process. As described above, the reference area information is information using one or more reference area expression functions, each of which is a function that represents the position, orientation, and shape of the target area and has one or more parameters. As also described above, the area boundary distance information is information indicating the distance from the autonomous moving body 9 to each position on the boundary of the target area.
 Further, as FIG. 36 shows, the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information on dynamic obstacles, and the target information acquisition unit 104 acquires the target information by executing the error minimization process that determines the condition minimizing the error, which is the difference between the reference area information and the area boundary distance information.
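 The following is a minimal sketch of such a masked error minimization, assuming a least-squares optimiser and a generic reference area expression function; the function name, parameters, and optimiser are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_ignoring_dynamic_obstacles(model, p0, angles, distances, is_obstacle):
    """Sketch of the fitting of FIG. 35 as used in step S314: samples of the
    area boundary distance information whose line of sight hit a dynamic
    obstacle ("deleted data") are excluded, and the reference area expression
    function model(angles, params) is fitted to the remaining "true data"
    only.  model, p0 and the optimiser are illustrative assumptions.
    """
    keep = (~np.asarray(is_obstacle, dtype=bool)) & np.isfinite(distances)

    def residual(params):
        return model(angles[keep], params) - distances[keep]

    fit = least_squares(residual, p0)
    return fit.x   # e.g. slope and road widths, which become the target information
```

 When several candidate road shapes are possible, the same masked residual can be evaluated for each candidate formula and the candidate with the smallest residual selected, as in the classification sketch given after the description of FIGS. 25 to 28.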
 FIG. 37 is a flowchart showing an example of the flow of generating the segmentation model in the modification. Before explaining the flowchart, an outline of the generation of the segmentation model is given.
 The segmentation model is a trained mathematical model obtained by updating, by a machine learning method, a mathematical model prepared in advance that estimates, based on an input image, the category to which each pixel of the image belongs (hereinafter referred to as the "learning-stage model").
 A mathematical model is a set including one or more processes for which the conditions and order of execution (hereinafter referred to as the "execution rules") are predetermined. For simplicity of explanation, updating a mathematical model by a machine learning method is hereinafter referred to as learning. Updating a mathematical model means suitably adjusting the values of the parameters included in the mathematical model. Executing a mathematical model means executing each process included in the mathematical model in accordance with the execution rules.
 The learning-stage model may be configured in any way as long as it is a mathematical model updated by a machine learning method. The learning-stage model is composed of, for example, a neural network. The learning-stage model may be composed of a neural network including, for example, a convolutional neural network, or of a neural network including, for example, an autoencoder.
 A training sample used for learning the learning-stage model is a pair of an image and an annotation indicating the category to which each pixel of the image belongs. The loss function used for updating the learning-stage model is a function whose value indicates the difference between the category of each pixel estimated based on the input image and the annotation. The annotation is, for example, data expressed as a tensor. Updating the learning-stage model means updating the values of the parameters included in the learning-stage model according to a predetermined rule so as to reduce the value of the loss function.
 The flowchart of FIG. 37 will now be described. A training sample is input to the learning-stage model (step S401). Next, by executing the learning-stage model, a category is estimated for each pixel of the image included in the input training sample (step S402).
 Based on the estimation result obtained in step S402, the values of the parameters included in the learning-stage model are updated so as to reduce the value of the loss function (step S403). Updating the values of the parameters included in the learning-stage model means updating the learning-stage model. After step S403, it is determined whether a predetermined end condition (hereinafter referred to as the "learning end condition") is satisfied (step S404). The learning end condition is, for example, the condition that a predetermined number of updates have been performed.
 When the learning end condition is satisfied (step S404: YES), the learning-stage model is recorded in the storage unit 13 as the segmentation model (step S405). On the other hand, when the learning end condition is not satisfied (step S404: NO), the process returns to step S402. Depending on the learning algorithm, the process may return to step S401, and a new training sample is input to the learning-stage model.
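 A minimal training-loop sketch corresponding to steps S401 to S405 is shown below. It assumes a PyTorch segmentation network, cross-entropy as the loss function, and a fixed number of epochs as the learning end condition; all of these are illustrative choices, since the embodiment only requires that the learning-stage model be updated so that the loss value becomes small.

```python
import torch
from torch import nn

def train_segmentation_model(model, loader, epochs=10, lr=1e-3):
    """Minimal training loop for steps S401 to S405, assuming a PyTorch
    segmentation network mapping an image batch (N, C, H, W) to per-pixel class
    scores (N, K, H, W), and training samples consisting of an image and a
    per-pixel annotation of category indices (N, H, W).  The architecture,
    optimiser, loss and end condition are illustrative choices.
    """
    criterion = nn.CrossEntropyLoss()                          # the loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                                    # learning end condition:
        for images, annotations in loader:                     # a fixed number of updates
            optimizer.zero_grad()
            scores = model(images)                             # S402: estimate the category of each pixel
            loss = criterion(scores, annotations)              # difference from the annotation
            loss.backward()
            optimizer.step()                                   # S403: update the parameters
    torch.save(model.state_dict(), "segmentation_model.pt")    # S405: record the segmentation model
    return model
```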
<Experimental results>
 An example of the experimental results obtained with the autonomous moving body control device 1 will be described. The experiments were aimed at estimating parameters for a straight road. Specifically, there were three parameters: the slope of the road, the right road width, and the left road width. Three experiments, the first to the third experiment, were carried out under different conditions. The first experiment was an outdoor experiment using a monocular camera as the photographing device 902.
 In the first experiment, measurements were taken at two outdoor locations, a first outdoor location and a second outdoor location. The second experiment was an indoor experiment using a monocular camera as the photographing device 902; it was conducted at two indoor locations, a first indoor location and a second indoor location. The third experiment was an indoor experiment using a 2D LiDAR (two-dimensional Light Detection And Ranging) sensor as the photographing device 902.
 In the experiments, the slope took the values -30, -20, -10, 0, 10, 20, and 30 degrees. The road widths in the experiments were the left and right road widths, which were measured at the time of the experiments.
 FIG. 38 is a diagram showing the experimental environment of the first experiment conducted at the first outdoor location in the modification. FIG. 38 is a photograph of the first outdoor location.
 FIG. 39 is a first diagram showing the results of the first experiment conducted at the first outdoor location in the modification. FIG. 39 shows the result of projecting the segmentation result for the image of FIG. 38 onto a bird's-eye view, together with the visual field boundary distance.
 FIG. 40 is a second diagram showing the results of the first experiment conducted at the first outdoor location in the modification. FIG. 40 shows that the shape of the road is appropriately estimated in the VLS plane using the observed values.
 FIG. 41 is a third diagram showing the results of the first experiment conducted at the first outdoor location in the modification. FIG. 41 shows, in a bird's-eye view, that the shape of the road is appropriately estimated using the observed values.
 FIG. 42 is a diagram showing the experimental environment of the first experiment conducted at the second outdoor location in the modification. FIG. 42 is a photograph of the second outdoor location.
 FIG. 43 is a first diagram showing the results of the first experiment conducted at the second outdoor location in the modification. FIG. 43 shows the result of projecting the segmentation result for the image of FIG. 42 onto a bird's-eye view, together with the visual field boundary distance.
 FIG. 44 is a second diagram showing the results of the first experiment conducted at the second outdoor location in the modification. FIG. 44 shows that the shape of the road is appropriately estimated in the VLS plane using the observed values.
 FIG. 45 is a third diagram showing the results of the first experiment conducted at the second outdoor location in the modification. FIG. 45 shows, in a bird's-eye view, that the shape of the road is appropriately estimated using the observed values.
 FIG. 46 is a diagram showing the experimental environment of the second experiment conducted at the first indoor location in the modification. FIG. 46 is a photograph of the first indoor location.
 FIG. 47 is a first diagram showing the results of the second experiment conducted at the first indoor location in the modification. FIG. 47 shows the result of projecting the segmentation result for the image of FIG. 46 onto a bird's-eye view, together with the visual field boundary distance.
 FIG. 48 is a second diagram showing the results of the second experiment conducted at the first indoor location in the modification. FIG. 48 shows that the shape of the road is appropriately estimated in the VLS plane using the observed values.
 FIG. 49 is a third diagram showing the results of the second experiment conducted at the first indoor location in the modification. FIG. 49 shows, in a bird's-eye view, that the shape of the road is appropriately estimated using the observed values.
 FIG. 50 is a diagram showing the experimental environment of the second experiment conducted at the second indoor location in the modification. FIG. 50 is a photograph of the second indoor location.
 FIG. 51 is a first diagram showing the results of the second experiment conducted at the second indoor location in the modification. FIG. 51 shows the result of projecting the segmentation result for the image of FIG. 50 onto a bird's-eye view, together with the visual field boundary distance.
 FIG. 52 is a second diagram showing the results of the second experiment conducted at the second indoor location in the modification. FIG. 52 shows that the shape of the road is appropriately estimated in the VLS plane using the observed values.
 FIG. 53 is a third diagram showing the results of the second experiment conducted at the second indoor location in the modification. FIG. 53 shows, in a bird's-eye view, that the shape of the road is appropriately estimated using the observed values.
 FIG. 54 is a diagram showing the experimental environment of the third experiment in the modification. FIG. 54 is a photograph of the place where the third experiment was performed. In the third experiment, a 2D LiDAR sensor was used as the photographing device 902.
 FIG. 55 is a first diagram showing the results of the third experiment in the modification. FIG. 55 shows that the shape of the road is appropriately estimated in the VLS plane using the observed values. This is because a wall exists at the boundary of the road region.
 FIG. 56 is a second diagram showing the results of the third experiment in the modification. FIG. 56 shows, in a bird's-eye view, that the shape of the road is appropriately estimated using the observed values. This is because a wall exists at the boundary of the road region.
 FIG. 57 is a first diagram showing the accuracy of the slope and road width measurements obtained from the results of the first and second experiments. FIG. 57 shows that in the outdoor experiments there was an average error of 7.08 degrees for the slope, 0.670 meters for the left road width, and 0.634 meters for the right road width. FIG. 57 also shows that in the indoor experiments there was an average error of 6.41 degrees for the slope, 0.363 meters for the left road width, and 0.356 meters for the right road width.
 FIG. 58 is a second diagram showing the accuracy of the slope and road width measurements obtained from the results of the first and second experiments. FIG. 58 is the result of normalizing the results of FIG. 57. The reference slope range used for the normalization was -60 to 60 degrees, and the reference road width was 4.0 meters for the outdoor experiments and 1.92 meters for the indoor experiments.
 FIG. 58 shows that in the outdoor experiments the error rate was 5.9 percent for the slope, 17.6 percent for the left road width, and 16.8 percent for the right road width. FIG. 58 also shows that in the indoor experiments the error rate was 5.34 percent for the slope, 9.57 percent for the left road width, and 9.47 percent for the right road width.
 FIG. 59 is a diagram showing the experimental environment of a control experiment. In the control experiment, a 2D LiDAR sensor was used as the photographing device 902. FIG. 59 is a photograph of the environment of the control experiment. As FIG. 59 shows, the control experiment was conducted in an outdoor environment similar to that of the first experiment performed at the first outdoor location.
 FIG. 60 is a diagram showing an example of the results of the control experiment. The horizontal axis of the figure indicates the viewing angle [°], and the vertical axis indicates the distance. The graph in the figure shows the results of measurement with the 2D LiDAR in the control experiment. The measurement result in regions where no reflecting object exists was recorded as 0 meters.
 Comparing the result of FIG. 60 with the result of the first experiment conducted at the first outdoor location shows that, when the photographing device 902 is a 2D LiDAR, the road region may not be estimable. Therefore, the road region can be estimated when the photographing device 902 is the monocular camera or a 3D LiDAR described above, but estimation may not be possible when it is a 2D LiDAR. This is an essential limitation of 2D LiDAR.
 The area boundary distance information does not necessarily have to be acquired from the photographing device 902. The area boundary distance information may be acquired from an information processing device connected so as to be communicable via a network, such as a management device such as a server on the network. Likewise, the image to be processed does not necessarily have to be acquired from the photographing device 902; it may be acquired from an information processing device connected so as to be communicable via a network, such as a management device such as a server on the network.
 The autonomous moving body control device 1 may be implemented using a plurality of information processing devices connected so as to be communicable via a network. In this case, the functional units included in the autonomous moving body control device 1 may be distributed across and implemented on the plurality of information processing devices.
 All or part of the functions of the autonomous moving body control device 1 may be realized using hardware such as an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array). The program may be recorded on a computer-readable recording medium. The computer-readable recording medium is, for example, a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system. The program may be transmitted via a telecommunication line.
 Although the embodiment of the present invention has been described in detail with reference to the drawings, the specific configuration is not limited to this embodiment, and designs and the like within a range not departing from the gist of the present invention are also included.
 1... autonomous moving body control device, 10... control unit, 11... input unit, 12... communication unit, 13... storage unit, 14... output unit, 101... progress status information acquisition unit, 102... area boundary distance information acquisition unit, 103... reference area information acquisition unit, 104... target information acquisition unit, 105... control signal generation unit, 9... autonomous moving body, 900... road, 902... photographing device, 905... moving body main body

Claims (8)

  1.  An autonomous moving body control device comprising:
      a region boundary distance information acquisition unit that acquires region boundary distance information, which is information indicating the distance from an autonomous moving body to be controlled to each position on the boundary of a target region, which is the region in which the autonomous moving body is located; and
      a target information acquisition unit that acquires target information, which is information indicating the relationship between the autonomous moving body and the target region, based on reference region information indicating candidates for the position, orientation, and shape of the target region and on the region boundary distance information.
  2.  The autonomous moving body control device according to claim 1, wherein the target information acquisition unit executes a process of determining a condition that minimizes an error, which is the difference between a graph of a mapping expressing the reference region information and a graph of a mapping expressing the region boundary distance information, and acquires the target information based on the condition obtained as the execution result.
  3.  The autonomous moving body control device according to claim 1 or 2, wherein the reference region information changes based on at least a parameter representing the state of the target region as seen from the autonomous moving body.
  4.  The autonomous moving body control device according to any one of claims 1 to 3, wherein the reference region information includes information indicating the position of a boundary, among the boundaries of the target region, that is not photographed by a photographing device that runs parallel to the autonomous moving body and faces the direction of the autonomous moving body.
  5.  The autonomous moving body control device according to claim 1, wherein the target information acquisition unit acquires the target information by executing an error minimization process that determines a condition that minimizes an error, which is the difference between the reference region information and the region boundary distance information, and
      the error minimization process uses the reference region information, which uses one or more reference region expression functions, each being a function that represents the position, orientation, and shape of the target region and has one or more parameters.
  6.  The autonomous moving body control device according to claim 1, wherein the region boundary distance information acquisition unit acquires the region boundary distance information after deleting information on dynamic obstacles, and
      the target information acquisition unit acquires the target information by executing an error minimization process that determines a condition that minimizes an error, which is the difference between the reference region information and the region boundary distance information.
  7.  An autonomous moving body control method comprising a target information acquisition step of acquiring target information, which is information indicating a relationship between an autonomous moving body to be controlled and a target area, the target area being an area in which the autonomous moving body is located, based on reference area information indicating candidates for a position, an orientation, and a shape of the target area and on area boundary distance information indicating a distance from the autonomous moving body to each position on a boundary of the target area.
  8.  A program for causing a computer to function as the autonomous moving body control device according to any one of claims 1 to 6.
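The following Python sketch is illustrative only and is not taken from the application: it assumes the area boundary distance information comes from a 2-D range scan, models the target area as a straight road of known width with an assumed 30 m sensor saturation, and uses a coarse grid search in place of whatever optimizer the device would actually employ. It is meant only to show the shape of the processing recited in claims 1 and 2: the graph of the reference-area map (predicted boundary distance per bearing) is compared with the graph of the measured boundary distances, and the candidate position and orientation with the smallest error become the target information. All function and parameter names are hypothetical.

```python
# Illustrative sketch only (not the application's implementation): the straight-road
# model, the 30 m saturation, and the grid search are assumptions for this example.
import numpy as np

def predicted_boundary_range(theta, width, offset, heading, max_range=30.0):
    """Reference area information for a straight road of the given width: the predicted
    distance to the nearer road boundary along a ray at bearing `theta` (body frame).
    `offset` is the body's assumed lateral displacement from the road centre toward the
    left edge; `heading` is its yaw relative to the road axis."""
    candidates = [max_range]                      # assume the range sensor saturates
    for side, rho in ((+1, width / 2 - offset), (-1, width / 2 + offset)):
        normal = heading + side * np.pi / 2       # direction of this boundary's normal
        c = np.cos(theta - normal)
        if c > 1e-6 and rho > 0:                  # the ray actually reaches this edge
            candidates.append(rho / c)
    return min(candidates)

def fit_target_information(scan_angles, scan_ranges, width):
    """Error minimisation in the spirit of claim 2: compare the graph of the
    reference-area map (predicted range per bearing) with the graph of the measured
    area boundary distance information and keep the candidate pose with the smallest
    error. A coarse grid search stands in for the actual optimiser."""
    scan_ranges = np.asarray(scan_ranges, dtype=float)
    best, best_err = None, np.inf
    for offset in np.linspace(-width / 2, width / 2, 41):
        for heading in np.linspace(-np.pi / 4, np.pi / 4, 41):
            pred = np.array([predicted_boundary_range(t, width, offset, heading)
                             for t in scan_angles])
            err = float(np.mean((pred - scan_ranges) ** 2))
            if err < best_err:
                best = {"offset": float(offset), "heading": float(heading)}
                best_err = err
    return best  # target information: the body's pose relative to the target area

# Synthetic usage: a 4 m wide road, the body 0.5 m left of centre, yawed 0.1 rad.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
truth = np.array([predicted_boundary_range(t, 4.0, 0.5, 0.1) for t in angles])
noisy = truth + np.random.default_rng(0).normal(0.0, 0.05, truth.shape)
print(fit_target_information(angles, noisy, width=4.0))   # offset ≈ 0.5, heading ≈ 0.1
```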
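For claims 3 and 4, the sketch below again relies on assumed specifics: the "state of the target area as seen from the autonomous moving body" is reduced to a single visibility parameter (the horizontal field of view of the forward-facing photographing device), and the reference area information explicitly records which boundary samples that device cannot photograph rather than discarding them. The function name and the (bearing, range, photographed) layout are hypothetical.

```python
# Sketch under assumed specifics: visibility is reduced to one field-of-view parameter.
import numpy as np

def reference_area_with_visibility(bearings, predicted_ranges, fov_half_angle):
    """Reference area information as (bearing, predicted range, photographed?) triples.
    Unphotographed boundary positions stay in the output so a later error-minimisation
    step can down-weight or exclude them instead of silently losing them."""
    info = []
    for theta, rng in zip(bearings, predicted_ranges):
        photographed = abs(theta) <= fov_half_angle   # forward-facing camera assumption
        info.append((float(theta), float(rng), bool(photographed)))
    return info

# The same road model yields different reference area information as the visibility
# parameter changes: widening the field of view marks more of the boundary as photographed.
bearings = np.linspace(-np.pi / 2, np.pi / 2, 19)
ranges = np.full_like(bearings, 5.0)
narrow = reference_area_with_visibility(bearings, ranges, np.radians(35))
wide = reference_area_with_visibility(bearings, ranges, np.radians(65))
print(sum(p for _, _, p in narrow), "of", len(narrow), "samples photographed")  # -> 7 of 19
print(sum(p for _, _, p in wide), "of", len(wide), "samples photographed")      # -> 13 of 19
```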
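Claim 5 leaves the reference area expression functions open. The sketch below assumes two hypothetical families, a straight corridor and a crudely bent one, each a function of the bearing with a small parameter vector describing position, orientation, and shape, and lets the error minimization choose both the family and its parameters; the straight-road model from the earlier sketch is restated here so the block stays self-contained.

```python
# Illustrative sketch: the two families and their parameter vectors are assumptions
# chosen only to show fitting with "one or more parameterised expression functions".
import numpy as np

def straight_corridor(theta, params, max_range=30.0):
    """Expression function #1: straight corridor. params = (width, offset, heading)."""
    width, offset, heading = params
    candidates = [max_range]                          # assumed sensor saturation
    for side, rho in ((+1, width / 2 - offset), (-1, width / 2 + offset)):
        c = np.cos(theta - (heading + side * np.pi / 2))
        if c > 1e-6 and rho > 0:
            candidates.append(rho / c)
    return min(candidates)

def bent_corridor(theta, params, max_range=30.0):
    """Expression function #2: corridor bending with curvature kappa.
    Very crude assumption: the lateral offset shifts by 0.5 * kappa * r**2 at range r."""
    width, offset, heading, kappa = params
    r = straight_corridor(theta, (width, offset, heading), max_range)
    return straight_corridor(theta, (width, offset - 0.5 * kappa * r * r, heading), max_range)

def best_reference_model(scan_angles, scan_ranges, candidates):
    """Pick the (expression function, parameter vector) pair whose predicted boundary
    distances best match the measured area boundary distance information."""
    scan_ranges = np.asarray(scan_ranges, dtype=float)
    best = (None, None, np.inf)
    for func, param_grid in candidates:
        for params in param_grid:
            pred = np.array([func(t, params) for t in scan_angles])
            err = float(np.mean((pred - scan_ranges) ** 2))
            if err < best[2]:
                best = (func.__name__, params, err)
    return best

# Synthetic usage: data generated from the straight family should select it back.
angles = np.linspace(-np.pi / 3, np.pi / 3, 121)
truth = np.array([straight_corridor(t, (3.5, 0.3, -0.05)) for t in angles])
grid_straight = [(3.5, o, h) for o in np.linspace(-1.0, 1.0, 21)
                 for h in np.linspace(-0.3, 0.3, 25)]
grid_bent = [(3.5, o, 0.0, k) for o in np.linspace(-1.0, 1.0, 11) for k in (-0.05, 0.05)]
print(best_reference_model(angles, truth, [(straight_corridor, grid_straight),
                                           (bent_corridor, grid_bent)]))
# -> ('straight_corridor', (3.5, ~0.3, ~-0.05), error ~0)
```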
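Claim 6 does not specify how dynamic obstacles are detected or deleted. The sketch below assumes a simple two-scan heuristic (returns whose range jumps sharply between consecutive, ego-motion-compensated scans are treated as dynamic) and removes them before the area boundary distance information is formed; the function name and threshold are hypothetical.

```python
# Sketch of the claim 6 pre-processing under an assumed two-scan heuristic; the actual
# dynamic-obstacle detector is not specified by the claim.
import numpy as np

def remove_dynamic_obstacles(prev_ranges, curr_ranges, jump_threshold=0.5):
    """Return the current scan with suspected dynamic-obstacle returns set to NaN so
    they do not contribute to the area boundary distance information, plus the boolean
    mask of the beams that were removed. Assumes ego-motion between the two scans has
    already been compensated."""
    prev_ranges = np.asarray(prev_ranges, dtype=float)
    curr_ranges = np.asarray(curr_ranges, dtype=float)
    dynamic = np.abs(curr_ranges - prev_ranges) > jump_threshold
    cleaned = curr_ranges.copy()
    cleaned[dynamic] = np.nan            # excluded later by an isfinite() mask
    return cleaned, dynamic

# Example: a static boundary 10 m away with a pedestrian crossing beams 40-45.
prev = np.full(180, 10.0)
curr = np.full(180, 10.0)
curr[40:46] = 2.5                        # much closer return from the moving obstacle
cleaned, dynamic = remove_dynamic_obstacles(prev, curr)
print(int(dynamic.sum()), "beams flagged as dynamic")   # -> 6 beams flagged as dynamic
```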
PCT/JP2021/035631 2020-09-30 2021-09-28 Autonomous moving body control device, autonomous moving body control method, and program WO2022071315A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022554008A JPWO2022071315A1 (en) 2020-09-30 2021-09-28

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-165960 2020-09-30
JP2020165960 2020-09-30

Publications (1)

Publication Number Publication Date
WO2022071315A1 true WO2022071315A1 (en) 2022-04-07

Family

ID=80950378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/035631 WO2022071315A1 (en) 2020-09-30 2021-09-28 Autonomous moving body control device, autonomous moving body control method, and program

Country Status (2)

Country Link
JP (1) JPWO2022071315A1 (en)
WO (1) WO2022071315A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003170760A (en) * 2001-12-07 2003-06-17 Hitachi Ltd Travel controller for vehicle, and recording medium for map information data
JP2005010891A (en) * 2003-06-17 2005-01-13 Nissan Motor Co Ltd Vehicular road shape recognition system
JP2005332204A (en) * 2004-05-20 2005-12-02 Univ Waseda Movement control device, environment recognition device, and program for controlling moving object
JP2009259215A (en) * 2008-03-18 2009-11-05 Zenrin Co Ltd Road surface marking map generation method
JP2012008999A (en) * 2010-05-26 2012-01-12 Mitsubishi Electric Corp Road shape estimation device, computer program, and road shape estimation method
JP2016224593A (en) * 2015-05-28 2016-12-28 アイシン・エィ・ダブリュ株式会社 Road shape detection system, road shape detection method and computer program
WO2017056247A1 (en) * 2015-09-30 2017-04-06 日産自動車株式会社 Travel control method and travel control device
JP2018200501A (en) * 2017-05-25 2018-12-20 日産自動車株式会社 Lane information output method and lane information output device
JP2019078562A (en) * 2017-10-20 2019-05-23 トヨタ自動車株式会社 Vehicle position estimating device
WO2020075861A1 (en) * 2018-10-12 2020-04-16 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
JP2020076580A (en) * 2018-11-05 2020-05-21 トヨタ自動車株式会社 Axial deviation estimation device

Also Published As

Publication number Publication date
JPWO2022071315A1 (en) 2022-04-07

Similar Documents

Publication Publication Date Title
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
Zhang et al. Low-drift and real-time lidar odometry and mapping
WO2021022615A1 (en) Method for generating robot exploration path, and computer device and storage medium
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
JP2020030204A (en) Distance measurement method, program, distance measurement system and movable object
CN111492403A (en) Lidar to camera calibration for generating high definition maps
CN107491070A (en) A kind of method for planning path for mobile robot and device
Xu et al. SLAM of Robot based on the Fusion of Vision and LIDAR
Xiao et al. 3D point cloud registration based on planar surfaces
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
KR101319525B1 (en) System for providing location information of target using mobile robot
CN116349222B (en) Rendering depth-based three-dimensional models using integrated image frames
Kim et al. As-is geometric data collection and 3D visualization through the collaboration between UAV and UGV
WO2022127572A9 (en) Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
KR101319526B1 (en) Method for providing location information of target using mobile robot
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
WO2022071315A1 (en) Autonomous moving body control device, autonomous moving body control method, and program
WO2018133074A1 (en) Intelligent wheelchair system based on big data and artificial intelligence
JP6603993B2 (en) Image processing apparatus, image processing method, image processing system, and program
Martinez et al. Map-based lane identification and prediction for autonomous vehicles
Sharma et al. Image Acquisition for High Quality Architectural Reconstruction.
WO2022172831A1 (en) Information processing device
Arukgoda Vector Distance Transform Maps for Autonomous Mobile Robot Navigation
KR20230017088A (en) Apparatus and method for estimating uncertainty of image points

Legal Events

Date Code Title Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 21875616
      Country of ref document: EP
      Kind code of ref document: A1
ENP   Entry into the national phase
      Ref document number: 2022554008
      Country of ref document: JP
      Kind code of ref document: A
NENP  Non-entry into the national phase
      Ref country code: DE
122   Ep: pct application non-entry in european phase
      Ref document number: 21875616
      Country of ref document: EP
      Kind code of ref document: A1