WO2022071315A1 - Autonomous mobile body control device, autonomous mobile body control method, and program

Info

Publication number
WO2022071315A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
area
boundary
distance
target
Prior art date
Application number
PCT/JP2021/035631
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
Ryusuke Miyamoto (宮本 龍介)
Miho Adachi (安達 美穂)
Original Assignee
学校法人明治大学 (Meiji University)
Priority date
Filing date
Publication date
Application filed by 学校法人明治大学 (Meiji University)
Priority to JP2022554008A (JPWO2022071315A1)
Publication of WO2022071315A1

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions

Definitions

  • the present invention relates to an autonomous mobile control device, an autonomous mobile control method, and a program.
  • an object of the present invention is to provide a technique for improving the accuracy of movement of an autonomous moving body.
  • One aspect of the present invention is an autonomous mobile body control device including an area boundary distance information acquisition unit that acquires area boundary distance information, which is information indicating the distance from an autonomous moving body to be controlled to each position on the boundary of a target area, the target area being the region where the autonomous moving body is located, and a target information acquisition unit that acquires target information, which is information indicating the relationship between the autonomous moving body and the target area, based on reference area information indicating candidates for the position, orientation, and shape of the target area and on the area boundary distance information.
  • One aspect of the present invention is the above autonomous mobile body control device, wherein the target information acquisition unit executes a process of determining a condition that minimizes an error, which is the difference between a graph of the map expressing the reference area information and a graph of the map expressing the area boundary distance information, and acquires the target information based on the condition obtained as the result of that execution.
  • One aspect of the present invention is the autonomous movement control device, wherein the reference area information changes based on at least a parameter representing the state of the target area as seen from the autonomous moving body.
  • One aspect of the present invention is the above autonomous mobile body control device, wherein the reference area information contains information indicating the position of boundaries, among the boundaries of the target area, that are not photographed by a photographing device that runs in parallel with the autonomous moving body and faces the same direction as the autonomous moving body.
  • One aspect of the present invention is the above autonomous mobile body control device, wherein the target information acquisition unit acquires the target information by executing an error minimization process that determines a condition minimizing an error, which is the difference between the reference area information and the area boundary distance information, and the error minimization process uses reference area information expressed with one or more reference area expression functions, each of which is a function that represents the position, orientation, and shape of the target area and has one or a plurality of parameters.
  • One aspect of the present invention is the above-mentioned autonomous movement control device, in which the area boundary distance information acquisition unit acquires the area boundary distance information after deleting the information of the dynamic obstacle.
  • the target information acquisition unit acquires the target information by executing an error minimization process for determining a condition for minimizing an error, which is a difference between the reference area information and the area boundary distance information.
  • One aspect of the present invention is an autonomous mobile body control method having a target information acquisition step of acquiring target information, which is information indicating the relationship between the autonomous moving body and the target area, based on reference area information indicating candidates for the position, orientation, and shape of the target area, which is the area where the autonomous moving body to be controlled is located, and on area boundary distance information, which is information indicating the distance from the autonomous moving body to each position on the boundary of the target area.
  • One aspect of the present invention is a program for operating a computer as the above-mentioned autonomous mobile control device.
  • An explanatory diagram illustrating an outline of the autonomous mobile body control device 1 of the embodiment.
  • A diagram showing an example of the processing target image in the embodiment.
  • FIG. 6 is an example of a result showing a bird's-eye view of the result of segmentation at an intersection and a result showing a visual field boundary distance in the embodiment.
  • A flowchart showing an example of the flow of the process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information in the embodiment.
  • A second diagram showing an example of the auxiliary points used for formulating the second distance equation in the embodiment.
  • A diagram showing an example of the shape of the T-shaped road in the embodiment.
  • A diagram showing an example of the auxiliary points used for formulating the third distance equation in the embodiment.
  • A first diagram showing an example of the bird's-eye view image in the embodiment and the distance from the center of the virtual LiDAR to the boundary in the VLS plane.
  • A first diagram showing an example of the result estimated by the equations of the straight line, the right curve, and the left curve in the embodiment.
  • A second diagram showing an example of the bird's-eye view image in the embodiment and the distance from the center of the virtual LiDAR to the boundary in the VLS plane.
  • A second diagram showing an example of the result estimated by the equations of the straight line, the right curve, and the left curve in the embodiment.
  • A diagram showing an example of the execution result of the error minimization process, ignoring a dynamic obstacle, executed by the autonomous mobile body control device 1 when a part of a subject in the modification is a dynamic obstacle.
  • A flowchart showing an example of the flow of the process executed by the autonomous mobile body control device 1 when a part of a subject in the modification is a dynamic obstacle.
  • FIG. 3 shows the result of the first experiment conducted outdoors in the modified example.
  • A diagram showing the experimental environment of the second experiment conducted in the first room in the modification.
  • A first diagram showing the result of the second experiment conducted in the first room in the modification.
  • A diagram showing the experimental environment of the second experiment conducted in the second room in the modification.
  • A first diagram showing the result of the second experiment conducted in the second room in the modification.
  • A third diagram showing the result of the second experiment conducted in the second room in the modification.
  • A diagram showing the experimental environment of the third experiment in the modification.
  • A first diagram showing the result of the third experiment in the modification.
  • A second diagram showing the result of the third experiment in the modification.
  • A first diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first to third experiments.
  • FIG. 1 is an explanatory diagram illustrating an outline of the autonomous mobile control device 1 of the embodiment.
  • the autonomous mobile body control device 1 controls the movement of the autonomous mobile body 9 to be controlled.
  • the autonomous moving body 9 is a moving body that moves autonomously, such as a robot or an automobile that moves autonomously.
  • The autonomous mobile body control device 1 acquires area boundary distance information for each position of the autonomous mobile body 9 and, based on the reference area information and the acquired area boundary distance information, acquires information indicating the relationship between the autonomous mobile body 9 and the target area (hereinafter referred to as "target information").
  • the target area means an area in the space where the autonomous mobile body 9 is located.
  • An area means a region in space. The area is, for example, a path on which the autonomous mobile body 9 can travel.
  • the road 900 is an example of the target area.
  • the arrow 901 indicates the direction in which the autonomous mobile body 9 advances.
  • the photographing device 902 is a photographing device that runs in parallel with the autonomous moving body 9 and faces the same direction as the autonomous moving body 9.
  • The photographing apparatus 902 is, for example, a 3D LiDAR (3-dimensional Light Detection and Ranging) sensor.
  • the photographing device 902 may be a monocular camera.
  • The photographing device 902 may be included in the autonomous moving body 9, or may be mounted on another moving body, such as a drone, that runs in parallel with the autonomous moving body 9.
  • FIG. 1 shows, as an example, a case where the autonomous moving body 9 includes the photographing device 902. Since the photographing device 902 runs in parallel with the autonomous moving body 9 and faces the same direction as the autonomous moving body 9, a direction seen from the autonomous moving body 9 is the direction seen from the photographing device 902. Further, since the photographing device 902 runs in parallel with the autonomous moving body 9 and is at the same position as, or at a fixed distance from, the autonomous moving body 9, the distance from the autonomous moving body 9 to an object is the distance from the photographing device 902 to that object.
  • the target information indicates, for example, where the autonomous mobile body 9 is located within the area width of the target area.
  • the target information indicates, for example, the relationship between the direction of the target area and the direction of the autonomous mobile body 9.
  • the target information indicates whether or not the target area is an intersection at the position of the autonomous mobile body 9, for example.
  • the target information indicates the direction of each region that intersects at the intersection, for example, when the target region is an intersection at the position of the autonomous mobile body 9.
  • the area boundary distance information is information indicating the distance from the autonomous moving body 9 to each position on the boundary of the target area (hereinafter referred to as "region boundary distance").
  • The area boundary distance information is, for example, information indicating, as the area boundary distance, the distance from the autonomous moving body 9 to the subject located at the boundary of the road in each direction seen from the autonomous moving body 9 (hereinafter referred to as the "line-of-sight direction") centered on the autonomous moving body 9.
  • the area boundary distance information is information displayed as a graph showing the area boundary distance for each line-of-sight direction, for example.
  • the subject is, for example, a shield.
  • The area boundary distance information is, for example, a measurement result obtained by a 3D LiDAR (3-dimensional Light Detection and Ranging) sensor when the photographing device 902 is a 3D LiDAR. That is, when the photographing apparatus 902 is a 3D LiDAR, the region boundary distance is the distance measured by the LiDAR signal. When the photographing apparatus 902 is a 3D LiDAR, the line-of-sight direction is the direction seen from the 3D LiDAR.
  • the area boundary distance information may be acquired by calculation using a distance image obtained in advance and a machine learning result learned in advance, based on an image taken by a monocular camera provided in the autonomous moving body 9, for example.
  • the distance image is, for example, the result of photographing a horizontal plane.
  • the distance image may be a result calculated based on the internal parameters of the camera as well as the actually observed data.
  • The area boundary distance information is, for example, a result obtained by the autonomous moving body control device 1 through calculation using the image captured by the photographing device 902, a distance image obtained in advance, and a machine-learning result learned in advance.
  • The machine-learning result learned in advance is used, specifically, in a process such as semantic segmentation that determines, from the image captured by a monocular camera, the pixels indicating the region in which the autonomous moving body 9 can move.
  • When the photographing apparatus 902 is a monocular camera, the line-of-sight direction is the direction seen from the monocular camera.
  • Reference area information is information that represents candidates for the position, orientation, and shape of the target area.
  • the reference area information is specifically a function representing the position, orientation, and shape of the target area and having one or a plurality of parameters (hereinafter referred to as "reference area expression function"). That is, the reference area information is specifically a mathematical model that represents a candidate for the position, orientation, and shape of the target area.
  • the parameter defines the shape of the function representing the shape of the target area, and is a parameter related to the shape of the target area such as the width of the target area.
  • the reference area representation function is, for example, a function showing the correspondence between the area boundary distance and the line-of-sight angle, and is a function including one or a plurality of parameters.
  • the line-of-sight angle is an angle indicating each line-of-sight direction in a predetermined plane.
  • the parameters include at least a parameter representing the state of the target area as seen from the autonomous mobile body 9.
  • the autonomous mobile control device 1 uses a reference area expression function based on a parameter representing at least the state of the target area as seen from the autonomous mobile body 9. To change. That is, the reference area information changes at least based on the parameter representing the state of the target area as seen from the autonomous mobile body 9.
  • the state of the target area seen from the autonomous moving body 9 is, for example, the inclination, width, and length of the target area seen from the autonomous moving body 9.
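  • As a minimal illustration only (not part of the patent disclosure), such a parameter set for a single straight road might be held in code as follows; the names are assumptions that follow the parameters θangle, θlength, θl.width, and θr.width used in the formulation below.

```python
from dataclasses import dataclass

# Illustrative container for the parameters of a reference area expression
# function describing a single straight road (names are assumptions following
# theta_angle, theta_length, theta_l.width, theta_r.width in this description).
@dataclass
class RoadParams:
    angle: float    # theta_angle: inclination of the road seen from the moving body [rad]
    length: float   # theta_length: length of the road ahead [m]
    l_width: float  # theta_l.width: width on the left side of the road [m]
    r_width: float  # theta_r.width: width on the right side of the road [m]

params = RoadParams(angle=0.0, length=10.0, l_width=1.5, r_width=2.0)
```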
  • the reference area expression function does not have to represent only one target area without a branch, and may represent an area with a branch.
  • the reference area representation function may represent one road or a branched road including a branch.
  • the autonomous mobile control device 1 will be described by taking the case where the reference area representation function represents one path as an example.
  • the image 903 is a figure represented on a bird's-eye view as an example of the shape represented by the reference area information. In the figure shown in the image 903, there is a notch corresponding to the field of view of the photographing apparatus 902 in the vicinity of the lower apex of the parallelogram.
  • The process by which the autonomous mobile body control device 1 acquires the target information based on the reference area information and the area boundary distance information (hereinafter referred to as the "extraction process") desirably includes an error minimization process and a target information acquisition process.
  • the target information acquisition process is executed after the error minimization process is executed.
  • the error minimization process determines the value of the parameter that gives the minimum value of the difference between the graph of the map expressing the reference area information and the graph of the map expressing the acquired area boundary distance information (hereinafter referred to as "error"). It is an optimization process to be performed. That is, the error minimization process is a process of determining the condition for minimizing the error.
  • the value determined as the value of the parameter that gives the minimum value of the error by the error minimization process is referred to as a determined value.
  • the reference area expression function specified by the determined value is referred to as a determined function.
  • For the graph of a map, the definition of a graph of a mapping in the general sense may be used.
  • The target information acquisition process is a process of acquiring the target information based on the determined function.
  • The target information acquisition process is, for example, a process of acquiring the two peak positions indicated by the determined function as the road edges of the target area.
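  • As a rough sketch of this target information acquisition process (an assumption, not the patent's implementation), once the determined function has been evaluated on a grid of line-of-sight angles, the two most prominent peaks of the resulting distance curve can be taken as the road edges and their midpoint as the center of the target area:

```python
import numpy as np
from scipy.signal import find_peaks

def road_edges_and_center(angles_deg, fitted_distance):
    """Pick the two most prominent peaks of the fitted distance curve as road
    edges and return (left_edge_angle, right_edge_angle, center_angle)."""
    angles_deg = np.asarray(angles_deg)
    peaks, props = find_peaks(fitted_distance, prominence=0.0)
    if len(peaks) < 2:
        raise ValueError("fewer than two peaks found")
    top2 = peaks[np.argsort(props["prominences"])[-2:]]  # two strongest peaks
    edge_a, edge_b = np.sort(angles_deg[top2])
    return edge_a, edge_b, 0.5 * (edge_a + edge_b)

# toy usage: a curve with peaks near 140 deg and 170 deg, as in FIG. 6
angles = np.linspace(100, 260, 321)
curve = np.exp(-((angles - 140) / 5) ** 2) + np.exp(-((angles - 170) / 5) ** 2)
print(road_edges_and_center(angles, curve))
```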
  • Image 904 in FIG. 1 is a diagram showing an example of the result of the error minimization process. The details of the image 904 will be described with reference to FIG. 6 after explaining the virtual LiDAR processing (hereinafter referred to as "VLS processing") and one specific example of the error minimization process.
  • VLS processing (virtual LiDAR processing)
  • the details of the VLS processing will be described by exemplifying the case where the photographing apparatus 902 is a monocular camera provided in the autonomous mobile body 9.
  • The VLS processing is an example of a technique for obtaining the area boundary distance information by calculation using a distance image obtained in advance and a machine-learning result learned in advance, based on the image captured by the photographing apparatus 902 (hereinafter referred to as the "processing target image").
  • the VLS processing is a technique used by the autonomous mobile control device 1.
  • the VLS process includes an area division process, a distance mapping process, a boundary pixel information acquisition process, and a distance measurement process in virtual space.
  • The area division process, the distance mapping process, and the boundary pixel information acquisition process are executed before the execution of the distance measurement process in the virtual space, and the area division process is executed before the execution of the boundary pixel information acquisition process.
  • The boundary pixel information acquisition process is executed after the execution of the area division process, and the distance measurement process in the virtual space is executed after the boundary pixel information acquisition process and the distance mapping process.
  • Either the area division process or the distance mapping process may be executed first, or they may be executed at the same time. Likewise, either the distance mapping process or the boundary pixel information acquisition process may be executed first, or they may be executed at the same time.
  • the autonomous mobile control device 1 obtains information (hereinafter referred to as "distinguishing information") for distinguishing each area reflected in the image to be processed from other areas by executing the area division processing. For example, by executing the area division processing by the autonomous mobile control device 1, information for distinguishing the target area reflected in the processing target image from other areas can be obtained.
  • FIG. 2 is a diagram showing an example of a processing target image in the embodiment.
  • the image of FIG. 2 shows a road from the lower right to the upper left of the image as a target area.
  • one of the road ends of the road, which is the target area, is the boundary with the lawn.
  • FIG. 3 is a diagram showing an example of the result of the area division processing in the embodiment. More specifically, FIG. 3 is a diagram showing an example of the result of segmentation for the image to be processed. In FIG. 3, the target area reflected in the processing target image is represented separately from other areas reflected in the processing target image.
  • The distance mapping process is a process in which the autonomous moving body control device 1 acquires, for each pixel of the processing target image, the distance indicated by the corresponding pixel of the distance image, with which each pixel of the processing target image has been associated in advance, as information indicating an attribute of that pixel of the processing target image.
  • information indicating the attributes of each pixel of the processing target image to which each pixel of the distance image corresponds is referred to as plane pixel distance information.
  • the plane pixel distance information is acquired under the assumption that all the images reflected in the image to be processed are on the horizontal plane.
  • the distance indicated by each pixel of the distance image is information indicating the distance from the photographing device 902 to the image captured by each pixel of the distance image. That is, since the distance image is an image of the scenery seen by the photographing device 902, the distance indicated by each pixel of the distance image is information indicating the distance from the autonomous moving body 9 to the image captured by each pixel of the distance image.
  • FIG. 4 is a diagram showing an example of a distance image in the embodiment.
  • the distance image of FIG. 4 is a distance image obtained by photographing the horizontal plane by the photographing apparatus 902 whose line of sight is parallel to the horizontal plane.
  • the lighter the color the shorter the distance to the photographing apparatus 902.
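  • Under the flat-ground assumption, such a distance image can be precomputed from the camera's intrinsic parameters and its mounting geometry. The following is a minimal sketch (not the patent's formulation) for a pinhole camera at height h whose optical axis makes the tilt angle with the vertically downward direction, as in FIG. 16 described later; fx, fy, cx, and cy are assumed intrinsic parameters.

```python
import numpy as np

def ground_distance_image(h, tilt, fx, fy, cx, cy, width, height):
    """Per-pixel distance from the camera to the ground point seen by that pixel,
    assuming everything in the image lies on a flat horizontal plane.
    tilt is the angle between the optical axis and the vertically downward
    direction (radians); pixels above the horizon get np.inf."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # ray directions in camera coordinates (x right, y down, z forward)
    d = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)], axis=-1)
    a = np.pi / 2 - tilt                     # pitch-down angle below horizontal
    rot = np.array([[1, 0, 0],
                    [0, np.cos(a), np.sin(a)],
                    [0, -np.sin(a), np.cos(a)]])
    d_level = d @ rot.T                      # rays in a level (horizon-aligned) frame
    down = d_level[..., 1]                   # component pointing toward the ground
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(down > 1e-9, h / down, np.inf)  # scale each ray to hit the ground
    return t * np.linalg.norm(d, axis=-1)    # Euclidean camera-to-ground distance

# toy usage: camera 0.5 m above the ground, optical axis 80 deg from vertical
dist = ground_distance_image(h=0.5, tilt=np.deg2rad(80), fx=500, fy=500,
                             cx=320, cy=240, width=640, height=480)
```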
  • The boundary pixel information acquisition process is a process in which the autonomous moving body control device 1 acquires, based on the distinguishing information, information indicating the pixels that represent a boundary between regions among the pixels of the processing target image (hereinafter referred to as "boundary pixel information").
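  • One simple way to obtain such boundary pixels from the area division result is a one-pixel morphological erosion of the target-area mask; this is only an illustrative assumption, since the embodiment does not prescribe a specific method.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_pixels(mask):
    """Given a boolean mask of the target area (from the area division process),
    return a boolean image that is True only at pixels on the area's boundary.
    Boundary pixels are mask pixels removed by a one-pixel erosion."""
    eroded = binary_erosion(mask, structure=np.ones((3, 3), dtype=bool))
    return mask & ~eroded

# toy usage: a 6x6 mask whose outer ring of True pixels is the boundary
mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = True
print(boundary_pixels(mask).astype(int))
```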
  • The distance measurement process in the virtual space is a process in which the autonomous moving body control device 1 acquires, based on the boundary pixel information and the plane pixel distance information, the distance from the photographing device 902 to the image captured by each pixel indicated by the boundary pixel information. Therefore, the distance measurement process in the virtual space is a process in which the autonomous moving body control device 1 acquires the area boundary distance information based on the boundary pixel information and the plane pixel distance information.
  • the distance measurement process in the virtual space is a process in which the autonomous moving body control device 1 acquires the distance from the origin to the boundary of the area reflected in the image to be processed on the VLS plane by calculation, for example.
  • The VLS plane is a virtual space having a two-dimensional coordinate system centered on the position of the autonomous moving body 9 (that is, the position of the photographing device 902), in which the image reflected in each pixel indicated by the boundary pixel information is placed at a position away from the autonomous moving body 9 by the distance indicated by the plane pixel distance information.
  • the origin in the VLS plane is the position in the virtual space where the autonomous mobile body 9 is located.
  • The measurement in the VLS plane is a process executed by calculation in the autonomous moving body control device 1, in which a LiDAR signal is virtually transmitted from the origin in the VLS plane, the time until the scattered or reflected LiDAR signal returns to the origin is calculated, and the calculated time is converted into a distance. Therefore, the result of the measurement in the VLS plane is an example of the region boundary distance information.
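  • A minimal sketch of one possible implementation of this measurement (an assumption rather than the patent's procedure): instead of literally simulating signal travel time, the boundary points projected onto the VLS plane are binned by line-of-sight angle, and the nearest point in each angular bin is taken as the area boundary distance for that direction.

```python
import numpy as np

def virtual_lidar_scan(points_xy, num_beams=360):
    """Virtual LiDAR on the VLS plane: points_xy is an (N, 2) array of boundary
    points (x, y) relative to the scan center. Returns (angles, distances),
    where distances[k] is the nearest boundary point in the k-th angular bin
    (np.inf if the bin is empty)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    ang = np.mod(np.arctan2(x, -y), 2 * np.pi)     # angle measured from the -y axis
    dist = np.hypot(x, y)
    bins = (ang / (2 * np.pi) * num_beams).astype(int) % num_beams
    scan = np.full(num_beams, np.inf)
    np.minimum.at(scan, bins, dist)                # keep the nearest return per beam
    beam_angles = np.arange(num_beams) * 2 * np.pi / num_beams
    return beam_angles, scan

# toy usage: boundary points of a 4 m wide corridor running along +y
ys = np.linspace(0.1, 10, 200)
walls = np.vstack([np.column_stack([np.full_like(ys, -2.0), ys]),
                   np.column_stack([np.full_like(ys, 2.0), ys])])
angles, scan = virtual_lidar_scan(walls)
```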
  • FIG. 5 is a diagram showing an example of the result of projecting the result of the segmentation shown in FIG. 3 on the VLS plane in the embodiment.
  • The horizontal and vertical axes of the results of FIG. 5 indicate the axes of a Cartesian coordinate system.
  • the position where the value on the horizontal axis of FIG. 5 is 0 and the value on the vertical axis is 0 is the position of the photographing apparatus 902 (that is, the position of the autonomous moving body 9).
  • the boundary of the trapezoidal region A1 in FIG. 5 represents the boundary of the viewing angle of the photographing apparatus 902.
  • VLSroad is the distance from the origin in the VLS plane to the boundary of the reference area information.
  • VLSparallelogram is the distance from the origin in the VLS plane to the boundary of the approximate shape, that is, the shape obtained when the shape of a target area such as a road is approximated by a predetermined shape such as a parallelogram (hereinafter referred to as the "approximate shape") without considering the boundary of the field of view of the photographing apparatus 902.
  • More specifically, the approximate shape is a figure that approximates, with a predetermined shape and without considering the boundary of the field of view of the photographing apparatus 902, the shape of the target area as expressed on a bird's-eye view.
  • the approximate shape is, for example, a parallelogram.
  • VLSmap represents the distance from the origin in the VLS plane to the boundary of the field of view of the photographing apparatus 902.
  • VLSmap is calculated based on the external parameters a_e.param of the monocular camera, the internal parameters a_i.param of the monocular camera, and the measurement range of the map.
  • Specifically, the external parameters a_e.param of the monocular camera are, for example, the position and posture of the monocular camera.
  • VLSroad depends on the slope θangle of the target area, the length θlength of the target area, the width θl.width on the left side of the area, and the width θr.width on the right side of the area.
  • The internal parameters of the monocular camera are expressed as a_i.param, the external parameters of the monocular camera are expressed as A, and the line-of-sight angle is expressed as θ.
  • VLSroad is formulated by the following equations (2) to (4).
  • VLSroad formulated by the equations (2) to (4) is an example of reference area information.
  • a process of estimating the value of the parameter included in the VLSroad is performed.
  • the parameter values are estimated by optimization that minimizes the error by the method of least squares.
  • the result of optimization by the least squares method is the coefficient of determination.
  • In the equations, x_i represents the line-of-sight angle in the VLS plane, and y represents the distance from the origin in the VLS plane at the line-of-sight angle x_i.
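  • The following is a minimal, self-contained sketch of this least-squares error minimization (an assumption, not the patent's equations (2) to (6)): the reference area expression function is simplified to the exit distance of a ray from a rotated rectangle approximating a straight road, the field-of-view notch is ignored, and scipy.optimize.least_squares estimates θangle, θlength, θl.width, and θr.width from noisy samples (x_i, y).

```python
import numpy as np
from scipy.optimize import least_squares

BACK = 1.0  # fixed extent of the model road behind the sensor [m] (assumption)

def straight_road_distance(angles, params):
    """Distance from the virtual LiDAR center to the boundary of a straight road
    modelled as a rotated rectangle: params = (road_angle, length, l_width, r_width).
    Angles are measured from the -y axis, as in the embodiment."""
    road_angle, length, l_w, r_w = params
    # ray direction for angle a (0 rad -> -y axis), then undo the road's rotation
    a = np.asarray(angles) - road_angle
    dx, dy = np.sin(a), -np.cos(a)
    tx = np.where(np.abs(dx) > 1e-9,
                  np.where(dx > 0, r_w, -l_w) / np.where(np.abs(dx) > 1e-9, dx, 1.0),
                  np.inf)
    ty = np.where(np.abs(dy) > 1e-9,
                  np.where(dy > 0, length, -BACK) / np.where(np.abs(dy) > 1e-9, dy, 1.0),
                  np.inf)
    return np.minimum(tx, ty)   # exit distance of a ray starting inside the rectangle

# synthetic "observed" area boundary distances y at line-of-sight angles x_i
true_params = (0.15, 8.0, 1.2, 2.3)
xi = np.linspace(np.deg2rad(120), np.deg2rad(240), 200)
yi = straight_road_distance(xi, true_params) + np.random.normal(0, 0.05, xi.size)

# least-squares estimation of the road parameters (the error minimization process)
fit = least_squares(lambda p: straight_road_distance(xi, p) - yi,
                    x0=(0.0, 5.0, 1.0, 1.0),
                    bounds=([-np.pi / 2, 0.1, 0.1, 0.1], [np.pi / 2, 50, 10, 10]))
print(fit.x)  # estimated (road_angle, length, l_width, r_width)
```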
  • FIG. 6 shows an example of the result of error minimization processing using the equations (1) to (6).
  • FIG. 6 is a diagram showing an example of the result of the error minimization processing in the embodiment.
  • the horizontal axis of FIG. 6 indicates the line-of-sight angle.
  • the vertical axis of FIG. 6 represents a distance.
  • the unit is the unit of distance.
  • FIG. 6 shows an example of the area boundary distance information and an example of the result of the error minimization processing.
  • FIG. 6 shows that the result of the error minimization process matches the graph shown by the region boundary distance information with high accuracy.
  • the results in FIG. 6 show that there are peaks at a line-of-sight angle of 140 ° and a line-of-sight angle of 170 °.
  • the line-of-sight angle of 140 ° and the line-of-sight angle of 170 ° indicate the ends of regions such as the shoulder of the road, respectively. Therefore, the line-of-sight angle indicating the centers of the two peaks is the angle indicating the center of the target area.
  • FIG. 7 is a diagram showing an example of the target information in the embodiment.
  • the vertical axis of FIG. 7 represents the distance from the autonomous mobile body 9.
  • the unit is the unit of distance.
  • the horizontal axis of FIG. 7 indicates each line-of-sight direction in the horizontal plane. More specifically, the horizontal axis of FIG. 7 indicates an angle indicating each line-of-sight direction in the horizontal plane (that is, a line-of-sight angle in the horizontal plane). Therefore, when the photographing apparatus 902 is a 3D LiDAR, the horizontal axis in FIG. 7 indicates the measurement angle. In FIG. 7, the traveling direction of the autonomous moving body 9 is 180 °.
  • FIG. 7 shows the boundary of the road under ideal conditions and the boundary of the field of view of the photographing apparatus 902. More specifically, in FIG. 7, the "distance to the boundary of the field of view" is information indicating the boundary of the field of view, and shows the distance from the photographing device 902 to the boundary of the field of view of the photographing device 902 (hereinafter referred to as the "field of view boundary distance").
  • The "ideal conditions" refer to the information on each boundary shown in FIG. 8 described later (specifically, the "distance to the boundary of the field of view" in FIG. 8).
  • FIG. 7 is a diagram showing an example of the target information in the embodiment, and corresponds to the plan view shown in FIG.
  • In FIG. 7, the plot "that does not consider the boundary of the field of view" is an example of the result of displaying the boundary of an approximate shape such as a parallelogram in a graph with the line-of-sight angle on the horizontal axis and the distance on the vertical axis.
  • an example of target information is an example of the result of the target information acquisition process, and is an example of information indicating the direction of the target area.
  • FIG. 7 is also a diagram showing the center of the target area in the range of the line-of-sight angle of 120 ° to 260 °.
  • In FIG. 7, the plot "that considers the boundary of the field of view" is a function indicating the boundary of a shape in which a notch corresponding to the field of view of the photographing apparatus 902 exists near the lower vertex of the approximate shape such as a parallelogram; it shows the result of the error minimization process using this function as the reference area expression function.
  • the notch is an example of an out-of-field boundary.
  • the shape in which the notch corresponding to the field of view of the photographing apparatus 902 exists near the lower apex of the parallelogram is, for example, the shape shown in the image 903.
  • FIG. 8 is a diagram showing an example of an out-of-field boundary in the embodiment.
  • the position where the value on the horizontal axis is 0 and the value on the vertical axis is 0 is the position of the photographing apparatus 902 on the VLS plane.
  • FIG. 8 shows an example of the boundary of the field of view of the photographing apparatus 902 in the VLS plane.
  • FIG. 8 shows an example of the shape represented by the reference region representation function in the VLS plane.
  • FIG. 8 shows a parallelogram having a notch as an example of the shape represented by the reference region representation function.
  • the boundary represented by the alternate long and short dash line in FIG. 8 is an example of an out-of-field boundary. Note that FIG. 8 shows the VLS plane.
  • The horizontal axis of FIG. 8 represents the distance from the position of the photographing apparatus 902 on the VLS plane.
  • the vertical axis of FIG. 8 is the distance from the position of the photographing apparatus 902 on the VLS plane and represents the distance in the direction orthogonal to the horizontal axis of FIG.
  • the approximate shape is not limited to the parallelogram. That is, the shape represented by the reference area representation function is not limited to a parallelogram or a parallelogram having a notch. Other specific examples of the shape represented by the reference area representation function will be described later in consideration of the ease of explanation.
  • the process of acquiring the number of branches by the extraction process is, for example, the following process.
  • an error minimization process using one function expressed by using K reference area expression functions (K is an integer of 1 or more) is executed for each K.
  • the value of K having the smallest error is acquired as the number of reference regions (that is, the number of branches).
  • M in the equation (7) indicates the number of parameters used for estimation, and M is the product of the number of reference regions (K) and the number of parameters of the reference region expression function.
  • The process of acquiring the value of K is an example of the target information acquisition process.
  • When the acquired value of K indicates a branch (that is, when K is two or more), the position of the autonomous moving body 9 is the position of an intersection. In this way, the extraction process acquires information indicating whether or not the position of the autonomous mobile body 9 is an intersection.
  • a specific example of the error in the process of acquiring the number of branches by the extraction process is, for example, the Bayesian Information Criterion (BIC) represented by the following equation (7).
  • L represents the likelihood and N represents the number of samples for which the region boundary distance was observed. Therefore, N is, for example, the number of points indicating the area boundary distance information in FIG.
  • The likelihood is based on, for example, the SSE, which is the sum of the squared errors between the region boundary distances and the result of the optimization.
  • Each of the reference area expression functions included in the composite function does not necessarily have to be the same reference area expression function, and at least one may differ from the other reference area expression functions.
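  • A minimal sketch of this model selection step, assuming the common Gaussian-error form of the BIC for least-squares fits (BIC ≈ N·ln(SSE/N) + M·ln N); the patent's equation (7) is not reproduced here, and the SSE values for each candidate K are assumed to come from the error minimization process.

```python
import numpy as np

def bic_from_sse(sse, n_samples, n_params):
    """BIC for a least-squares fit under a Gaussian error assumption:
    BIC = N * ln(SSE / N) + M * ln(N)."""
    return n_samples * np.log(sse / n_samples) + n_params * np.log(n_samples)

def select_num_branches(sse_by_k, n_samples, params_per_function=4):
    """Given the SSE of the error minimization process for each candidate K
    (number of reference area expression functions), return the K with the
    smallest BIC. M = K * params_per_function, as in the description."""
    best_k, best_bic = None, np.inf
    for k, sse in sse_by_k.items():
        bic = bic_from_sse(sse, n_samples, k * params_per_function)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# toy usage: fits with K = 1 and K = 2 over 360 samples
print(select_num_branches({1: 95.0, 2: 40.0}, n_samples=360))
```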
  • FIG. 9 is a diagram showing an example of the result of the error minimization process when there is an intersection in the embodiment.
  • the horizontal axis of FIG. 9 indicates the line-of-sight angle.
  • the vertical axis of FIG. 9 represents the distance.
  • the unit is the unit of distance.
  • FIG. 9 shows the result of the error minimization process executed under the condition that K is 1, and the result of the error minimization process executed under the condition that K is 2. Further, FIG. 9 shows the result of actually surveying the end of the road as a data point of the survey result.
  • FIG. 10 is a diagram showing an example of the result of intersection segmentation in the embodiment.
  • FIG. 10 shows that it branches into two roads.
  • FIG. 11 is an example of a result showing a bird's-eye view of the result of segmentation at an intersection and a result showing a visual field boundary distance in the embodiment.
  • The upper diagram of FIG. 11 shows an example of the result of expressing the result of the segmentation shown in FIG. 10 on a bird's-eye view.
  • The lower diagram of FIG. 11 shows the region boundary distance information obtained from the upper diagram of FIG. 11.
  • FIG. 12 is a diagram showing an example of the functional configuration of the autonomous mobile control device 1 of the embodiment.
  • The autonomous mobile body control device 1 includes a control unit 10 including a processor 91 such as a CPU (Central Processing Unit) and a memory 92, which are connected by a bus, and executes a program.
  • The autonomous mobile body control device 1 functions as a device including a control unit 10, an input unit 11, a communication unit 12, a storage unit 13, and an output unit 14 by executing the program. More specifically, the processor 91 reads out the program stored in the storage unit 13 and stores the read program in the memory 92. By the processor 91 executing the program stored in the memory 92, the autonomous mobile body control device 1 functions as a device including the control unit 10, the input unit 11, the communication unit 12, the storage unit 13, and the output unit 14.
  • the control unit 10 executes, for example, an extraction process.
  • the control unit 10 controls, for example, the operation of various functional units included in the autonomous mobile body control device 1 and the operation of the autonomous mobile body 9.
  • the control unit 10 controls the operation of the communication unit 12, for example, and acquires the image to be processed via the communication unit 12.
  • the control unit 10 acquires area boundary distance information based on, for example, the acquired image to be processed.
  • the control unit 10 may acquire the area boundary distance information instead of the image to be processed.
  • the control unit 10 controls the operation of the autonomous mobile body 9 via, for example, the communication unit 12.
  • the control unit 10 may acquire information indicating the position and orientation of the autonomous mobile body 9 (hereinafter referred to as “progress state information”) via, for example, the communication unit 12.
  • the control unit 10 may estimate the position and orientation of the autonomous mobile body 9 based on, for example, the history of control of the operation of the autonomous mobile body 9.
  • the input unit 11 includes an input device such as a mouse, a keyboard, and a touch panel.
  • the input unit 11 may be configured as an interface for connecting these input devices to its own device.
  • the input unit 11 receives input of various information to its own device.
  • the communication unit 12 includes a communication interface for connecting the own device to an external device.
  • the communication unit 12 communicates with the autonomous mobile body 9 via wire or wireless.
  • The communication unit 12 receives, for example, the progress state information of the autonomous mobile body 9 by communicating with the autonomous mobile body 9.
  • the communication unit 12 transmits a control signal for controlling the autonomous mobile body 9 to the autonomous mobile body 9 by communicating with the autonomous mobile body 9.
  • the communication unit 12 communicates with the source of the image to be processed via wired or wireless.
  • the communication unit 12 acquires the image to be processed by communicating with the source of the image to be processed.
  • the source of the image to be processed may be the autonomous moving body 9 itself, or may be another device such as a drone that moves together with the autonomous moving body 9.
  • the storage unit 13 is configured by using a non-temporary computer-readable storage medium device such as a magnetic hard disk device or a semiconductor storage device.
  • the storage unit 13 stores various information about the autonomous mobile control device 1.
  • the storage unit 13 stores, for example, the history of control of the autonomous mobile body 9 by the control unit 10.
  • The storage unit 13 stores, for example, a history of the progress state information.
  • the storage unit 13 stores the reference area information in advance.
  • the storage unit 13 stores a distance image in advance.
  • the output unit 14 outputs various information.
  • the output unit 14 includes display devices such as a CRT (Cathode Ray Tube) display, a liquid crystal display, and an organic EL (Electro-Luminescence) display.
  • the output unit 14 may be configured as an interface for connecting these display devices to its own device.
  • the output unit 14 outputs, for example, the information input to the input unit 11 or the communication unit 12.
  • the output unit 14 outputs, for example, the execution result of the extraction process by the control unit 10.
  • FIG. 13 is a diagram showing an example of the functional configuration of the control unit 10 in the embodiment.
  • the control unit 10 includes a progress state information acquisition unit 101, a region boundary distance information acquisition unit 102, a reference area information acquisition unit 103, a target information acquisition unit 104, and a control signal generation unit 105.
  • The progress state information acquisition unit 101 acquires the progress state information of the autonomous mobile body 9.
  • The progress state information acquisition unit 101 may acquire the progress state information by calculating it from the history of control of the operation of the autonomous mobile body 9, or may acquire the progress state information from the autonomous mobile body 9 via the communication unit 12.
  • the area boundary distance information acquisition unit 102 acquires the area boundary distance information.
  • When the photographing device 902 is a device capable of acquiring the area boundary distance information, such as a 3D LiDAR, the area boundary distance information acquisition unit 102 acquires the area boundary distance information via the communication unit 12 from the source of the area boundary distance information, such as the photographing device 902.
  • When the photographing device 902 is a device that acquires a processing target image, such as a monocular camera, the region boundary distance information acquisition unit 102 acquires the processing target image via the communication unit 12 and acquires the area boundary distance information by executing the VLS processing on the acquired processing target image.
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13.
  • the target information acquisition unit 104 executes the extraction process and acquires the target information.
  • the control signal generation unit 105 generates a control signal that controls the operation of the autonomous mobile body 9 based on the target information.
  • the control signal generation unit 105 transmits the generated control signal to the autonomous mobile body 9 via the communication unit 12.
  • FIG. 14 is a diagram showing an example of a flow of processing executed by the autonomous mobile control device 1 of the embodiment. The process of FIG. 14 is repeatedly executed at a predetermined timing.
  • The progress state information acquisition unit 101 acquires the progress state information (step S101).
  • the area boundary distance information acquisition unit 102 acquires the area boundary distance information (step S102).
  • The area boundary distance information acquisition unit 102 may acquire the area boundary distance information from the source of the area boundary distance information via the communication unit 12, or may acquire the processing target image via the communication unit 12 and acquire the area boundary distance information by executing the VLS processing on the acquired processing target image.
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13. More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13 (step S103). Next, the target information acquisition unit 104 executes an extraction process using the reference area information and the area boundary distance information to acquire the target information (step S104). Next, the control signal generation unit 105 generates a control signal for controlling the operation of the autonomous mobile body 9 based on the target information, and controls the operation of the autonomous mobile body 9 by the generated control signal (step S105).
  • step S101 may be executed at any timing before the process of step S105 is executed. Further, steps S102 and S103 do not necessarily have to be executed in this order, and may be executed in any order as long as they are executed before the execution of step S104.
  • the autonomous mobile control device 1 executes the step of acquiring the area boundary distance information. Further, the autonomous mobile body control device 1 executes a step of acquiring target information which is information indicating the relationship between the autonomous mobile body 9 and the target area based on the reference area information and the area boundary distance information.
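  • To make the use of the target information in step S105 concrete, here is a minimal sketch of a proportional steering rule (an assumption, not the patent's control law): the road center is taken as the midpoint of the two road-edge line-of-sight angles, and the command is proportional to its deviation from the traveling direction of 180° shown in FIG. 7.

```python
def steering_command(left_edge_deg, right_edge_deg, forward_deg=180.0, gain=0.02):
    """Proportional steering from target information: the road center is the
    midpoint of the two road-edge line-of-sight angles, and the command is
    proportional to its deviation from the traveling direction (180 deg).
    Positive output = steer toward larger line-of-sight angles."""
    center = 0.5 * (left_edge_deg + right_edge_deg)
    return gain * (center - forward_deg)

# toy usage: road edges at 140 deg and 170 deg (as in FIG. 6) -> center 155 deg,
# so the command steers the moving body toward the road center
print(steering_command(140.0, 170.0))
```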
  • FIG. 15 is a flowchart showing an example of the flow of the process in which the area boundary distance information acquisition unit 102 in the embodiment acquires the area boundary distance information. More specifically, FIG. 15 is a flowchart showing an example of a flow of processing in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information when the photographing device 902 is a monocular camera.
  • the area boundary distance information acquisition unit 102 acquires the image to be processed via the communication unit 12 (step S201). Next, the area boundary distance information acquisition unit 102 executes the area division process (step S202). Next, the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S203). Next, the area boundary distance information acquisition unit 102 executes the distance mapping process (step S204). Next, the area boundary distance information acquisition unit 102 executes the distance measurement process in the virtual space (step S205). If the processes of step S203 and step S204 are executed after the execution of step S202 and before the execution of step S205, they may be executed at any timing. Therefore, for example, step S204 may be executed after step S202, and then step S203 may be executed. Further, for example, step S203 and step S204 may be executed at the same timing.
  • When the area boundary distance information is input from an external device via the communication unit 12, the process by which the area boundary distance information acquisition unit 102 acquires the area boundary distance information is a process in which the area boundary distance information acquisition unit 102 acquires the area boundary distance information input to the communication unit 12.
  • FIG. 16 is an explanatory diagram illustrating an example of the relationship between the moving body main body 905 of the autonomous moving body 9, the photographing device 902, and the horizontal plane in the embodiment.
  • the mobile body 905 includes wheels for moving the autonomous mobile body 9, a movable portion, and a control unit for controlling the movement.
  • the autonomous mobile body 9 includes a mobile body main body 905 and a photographing device 902.
  • the photographing apparatus 902 is located above the moving body main body 905, and is located at a height h from the horizontal plane on which the moving body main body 905 is located.
  • The tilt angle means the angle that the optical axis of the camera forms with the vertically downward direction (equivalently, 90° minus the angle formed with the horizontal plane).
  • the dashed line means the edge of the field of view.
  • Parameters in the VLS plane generated based on the processing target image are shown in FIG. 17. These parameters, as well as the parameters appearing in the description of the specific examples of the reference area expression function below, are examples of the parameters.
  • FIG. 17 is a diagram showing an example of parameters used in the formula representing the distance in the embodiment.
  • the y-axis represents the front direction of the camera.
  • the x-axis is the direction perpendicular to the y-axis in the VLS plane.
  • the shape of the boundary of the field of view of the camera on the VLS plane is a trapezoidal shape having the position of the camera as the base and the distance m (map_height) reflected on the VLS plane from the image to be processed as the height.
  • the tilt of the left and right sides is set by the internal parameters of the camera.
  • The center of the virtual LiDAR is located in the VLS plane at the position whose y-coordinate is at a distance y0 from the origin and whose x-coordinate is at a distance x0 from the origin.
  • The virtual LiDAR is the virtual source of the LiDAR signal in the VLS plane.
  • the first specific example is a specific example of an equation (hereinafter referred to as "first distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the shape of the road is a straight line.
  • the first distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • The shape of the road is formulated using the parameters θangle (slope of the road), θlength (length of the road), θl.width (width on the left side of the road), and θr.width (width on the right side of the road). That is, the shape of the road is expressed by the above-mentioned equation (3).
  • FIG. 18 is an explanatory diagram illustrating parameters used for formulating the shape of a road whose shape is straight in the embodiment.
  • The length of the road is the distance in the y-axis direction from the position of the camera center to the end of the road.
  • parameters are set separately for the road width on the right side and the road width on the left side of the virtual lidar, and the parameters are set so that the position of the autonomous moving body 9 on the road can be estimated.
  • the formula expressing the shape of a straight road is, for example, the following formula (8).
  • the first distance equation is formulated using equation (8).
  • The idea behind the derivation is as follows: by assuming a scene in which a signal is transmitted from the center of the virtual LiDAR toward the end of the road, and then considering the process until the transmitted signal reaches the end of the road, the distance from the center of the virtual LiDAR to the intersection of the signal and the end of the road is formulated.
  • FIG. 19 is a diagram showing an example of the propagation of a signal transmitted from the center of the virtual lidar when the shape of the road in the embodiment is a straight line.
  • FIG. 19 shows an example of the order in which the measurement by the signal of the virtual LiDAR is performed. Specifically, the measurement is performed clockwise over 360 degrees at equal intervals, with the -y axis direction (that is, the negative direction of the y axis) as 0 degrees; the interval is arbitrary. Point P1, point P2, point P3, and point P4 in FIG. 19 indicate the vertices of the approximate shape.
  • Angle th1, angle th2, angle th3, and angle th4 are the angles formed by the -y axis and the lines connecting the center of the virtual LiDAR to the points P1, P2, P3, and P4 in FIG. 19, respectively.
  • the unit of angle th and angle is radian.
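  • The switching structure of the first distance equation can be sketched as follows (a simplified version that assumes θangle = 0 and ignores the field-of-view notch, so it is not equation (8) itself): the corner angles th1 to th4 are computed from the vertices of the rectangle approximating the road, and the formula for the boundary distance is switched depending on which corners the line-of-sight angle falls between.

```python
import numpy as np

def first_distance_equation(angles, l_width, r_width, length, back=1.0):
    """Simplified first distance equation for a straight road aligned with the
    camera (theta_angle = 0), ignoring the field-of-view notch. The road is the
    rectangle x in [-l_width, r_width], y in [-back, length] around the virtual
    LiDAR, angles are measured from the -y axis, and the formula is switched at
    the corner angles th1..th4, as described for equation (8)."""
    a = np.mod(np.asarray(angles, dtype=float), 2 * np.pi)
    # corner angles of the rectangle, in the same angle convention
    th1 = np.arctan2(r_width, back)                    # near-right corner
    th2 = np.arctan2(r_width, -length)                 # far-right corner
    th3 = 2 * np.pi + np.arctan2(-l_width, -length)    # far-left corner
    th4 = 2 * np.pi + np.arctan2(-l_width, back)       # near-left corner
    d = np.empty_like(a)
    near = (a < th1) | (a >= th4)
    right = (a >= th1) & (a < th2)
    far = (a >= th2) & (a < th3)
    left = (a >= th3) & (a < th4)
    d[near] = back / np.cos(a[near])        # hits the near edge y = -back
    d[right] = r_width / np.sin(a[right])   # hits the right edge x = r_width
    d[far] = -length / np.cos(a[far])       # hits the far edge y = length
    d[left] = -l_width / np.sin(a[left])    # hits the left edge x = -l_width
    return d

# toy usage: distances over a full sweep for a 3.5 m wide road, 8 m ahead
angles = np.deg2rad(np.arange(0, 360, 1.0))
profile = first_distance_equation(angles, l_width=1.5, r_width=2.0, length=8.0)
```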
  • the second specific example is a specific example of an equation (hereinafter referred to as "second distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the shape of the road is a curve.
  • the second distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • The shape of the road is formulated using the parameters θangle (slope of the straight segment leading into the curve), θD1 (length of the road up to the entrance of the curve), θl.width (distance to the left road edge), θr.width (distance to the right road edge), and θwidth2 (road width at the end of the curve). The extraction process estimates the values of these parameters.
  • the curve is formulated as an elliptical shape.
  • The width of the ellipse is formulated using the road width θwidth in front, and the vertical width of the ellipse is formulated using the road width θwidth2 at the end of the curve. That is, the shape of the road is expressed by the following equation (21).
  • FIG. 20 is an explanatory diagram illustrating parameters used for formulating the shape of a road whose shape is a curve in the embodiment.
  • A process of rotating the entire shape about the center of the virtual LiDAR as the origin may be executed.
  • Equation (22) represents a right curve
  • equation (23) represents a left curve
  • FIG. 21 is a first diagram showing an example of auxiliary points used for formulating the second distance equation in the embodiment.
  • FIG. 21 shows the shape of the left curve.
  • the points P1, P2, and P3 in FIG. 21 are auxiliary points used for formulating the second distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • FIG. 22 is a second diagram showing an example of auxiliary points used for formulating the second distance equation in the embodiment.
  • FIG. 22 shows the shape of the right curve.
  • the points P1, P2, and P3 in FIG. 22 are auxiliary points used for formulating the second distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • The model (a set of mathematical formulas representing the shape) is switched before and after the angles th1, th2, and th3, and the curve is expressed using three formulas representing the shapes of a straight line, an ellipse, and a straight line orthogonal to the first straight line.
  • the set of the following equations (24) to (40) is an example of the second distance equation.
  • the unit of angle th and angle is radian.
  • Equation (36) is an equation that holds when the road beyond the curve runs straight with respect to the first road. Further, when the center of the ellipse is (xc, yc) and the intersection of the virtual LiDAR signal and the ellipse is defined as (x_e, y_e), x_e,th and y_e,th can be obtained from the simultaneous equations of equation (37). As a result, the value on the left side of equation (38) is obtained. Equations (39) and (40) represent operations performed in the extraction process when θangle ≠ 0; more specifically, they represent operations executed after the calculation of all of VLScurve.th and VLSsecond_road.th.
  • the third specific example is a specific example of an equation (hereinafter referred to as "third distance equation") expressing the distance from the center of the virtual lidar to the boundary in the VLS plane when the road is an intersection.
  • the third distance equation when the center of the Virtual Lidar is located at the origin of the VLS plane is an example of the reference region expression function.
  • the shape of the T-shaped road is expressed by an equation whose parameters are, in addition to the parameters used to express the curve, the distance to the intersection (D1). Therefore, the shape of the T-shaped road is expressed by the following equation (41).
  • FIG. 23 is a diagram showing an example of the shape of the T-shaped road in the embodiment.
  • FIG. 23 shows a road that goes from the bottom of the screen to the top (that is, in the positive direction of the y-axis), has an intersection, and branches into a road that goes to the left and a road that goes up.
  • the following formula (42) represents the shape of the right T-shaped road.
  • the following formula (43) represents the shape of the left T-shaped road.
  • FIG. 24 is a diagram showing an example of auxiliary points used for formulating the third distance equation in the embodiment.
  • FIG. 24 shows the shape of a left T-shaped road.
  • the points P1, P2, and P3 in FIG. 24 are auxiliary points used for formulating the third distance equation.
  • Angle th1, angle th2, and angle th3 each represent an angle formed by the -y axis and the line connecting each of the points P1 to P3 and the center of the virtual lidar.
  • Formulas are switched before and after the angles th1, th2, th3, and th4, and the shape of the intersection is expressed using two formulas: a straight line and a straight line orthogonal to the first straight line.
  • the following formula (45) is an example of the third distance formula at an intersection.
  • the following equation (46) is an example of the third distance equation in a right T-junction.
  • the following formula (47) is an example of the third distance formula in the left T-junction.
  • the following formula (48) is an example of the third distance formula at the junction.
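  • The third distance equation, like the second, is piecewise: formulas are switched at the auxiliary angles. A generic way to evaluate such a piecewise-straight boundary is to cast the virtual lidar ray against a list of boundary segments and take the nearest hit, as in the sketch below. This is illustrative only; the segment-list representation, the angle convention, the function name, and the example coordinates are assumptions and do not reproduce equations (41) to (48) themselves.

```python
import math

def ray_boundary_distance(th, segments, max_dist=100.0):
    """Distance from the origin along the ray (sin(th), cos(th)) (angle from the
    +y axis, an assumed convention) to the nearest of the boundary segments.
    `segments` is a list of ((x1, y1), (x2, y2)) pairs; a T-junction or a
    crossroads boundary can be written as such a list of straight pieces."""
    ux, uy = math.sin(th), math.cos(th)
    best = max_dist                                  # cap for rays that hit nothing
    for (x1, y1), (x2, y2) in segments:
        dx, dy = x2 - x1, y2 - y1
        denom = ux * dy - uy * dx                    # cross(ray direction, segment direction)
        if abs(denom) < 1e-12:
            continue                                 # parallel: no unique intersection
        t = (x1 * dy - y1 * dx) / denom              # distance along the ray
        s = (x1 * uy - y1 * ux) / denom              # position along the segment
        if t > 0.0 and 0.0 <= s <= 1.0:
            best = min(best, t)
    return best

# Example: a left T-junction boundary sketched as straight pieces
# (purely illustrative coordinates, not the patent's parameters).
left_t = [
    ((-1.5, -5.0), (-1.5, 3.0)),    # left wall of the main road, up to the opening
    ((-10.0, 3.0), (-1.5, 3.0)),    # near wall of the branch road
    ((-10.0, 5.0), (-1.5, 5.0)),    # far wall of the branch road
    ((1.5, -5.0), (1.5, 20.0)),     # right wall of the main road
]
print(ray_boundary_distance(-0.5, left_t))   # ray toward the upper left hits the left wall (~3.13)
```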
  • the classification into straight line and curve is a process of classifying the road observed by Visual Lidar using the equations of a straight line, a right curve, and a left curve.
  • More specifically, it is a process of determining whether the target road is a straight line or a curve. A minimal sketch of this model-selection step is given after the figure examples below.
  • FIGS. 25 and 27 show an example of the bird's-eye view image and the distance from the center of the virtual lidar in the VLS plane to the boundary.
  • FIGS. 26 and 28 show the results estimated by the equations of the straight line, the right curve, and the left curve.
  • FIG. 25 is a first explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 25 shows that the road is a straight road.
  • FIG. 26 is a second explanatory diagram illustrating an example of the result of classification in the embodiment. More specifically, FIG. 26 shows the results of classification for the road shown in FIG. 25.
  • “straight” indicates the result of estimation by the equation of the straight line, “Right curve” indicates the result of estimation by the equation of the right curve, and “Left curve” indicates the result of estimation by the equation of the left curve.
  • FIG. 26 shows that the linear equation is selected, and the degree of agreement between the estimation result and the observation result is high when the linear equation is used. Therefore, FIG. 26, together with the result of FIG. 25, shows that the shape of the road was estimated with high accuracy.
  • FIG. 27 is a third explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 27 shows that the road is a right curve.
  • FIG. 28 is a fourth explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 28 shows the results of classification for the road shown in FIG. 27.
  • “straight” indicates the result of estimation by the equation of the straight line, “Right curve” indicates the result of estimation by the equation of the right curve, and “Left curve” indicates the result of estimation by the equation of the left curve.
  • FIG. 28 shows that the equation of the right curve is selected, and the degree of agreement between the estimation result and the observation result is high when the equation of the right curve is used. Therefore, FIG. 28, together with the result of FIG. 27, shows that the shape of the road was estimated with high accuracy.
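  • A minimal sketch, in Python, of the model-selection step referred to above: fit each candidate reference region expression function to the observed area boundary distances by error minimization and keep the model with the smallest residual. The straight-road distance function, its angle convention, the use of scipy's least_squares, and the illustrative parameter values are assumptions introduced for this sketch, not the patent's own equations.

```python
import numpy as np
from scipy.optimize import least_squares

def straight_road_distance(th, slope, l_width, r_width, max_dist=100.0):
    """Distance from the virtual lidar (at the origin of the VLS plane) to the
    boundary of a straight road, for line-of-sight angles th.  Assumed
    conventions: th is measured from the +y axis, the road heading is rotated
    by `slope` from +y, and the two boundaries lie at perpendicular distances
    l_width (left) and r_width (right)."""
    th = np.atleast_1d(np.asarray(th, dtype=float))
    u = np.stack([np.sin(th), np.cos(th)], axis=-1)      # one ray direction per angle
    road_dir = np.array([np.sin(slope), np.cos(slope)])
    n_left = np.array([-road_dir[1], road_dir[0]])       # unit normal toward the left boundary
    n_right = -n_left
    d = np.full(th.shape, max_dist)                      # cap rays running parallel to the road
    for n, w in ((n_left, l_width), (n_right, r_width)):
        dot = u @ n
        hit = dot > 1e-9                                 # ray actually heads toward that boundary
        d[hit] = np.minimum(d[hit], w / dot[hit])
    return d

def classify_road(th_obs, dist_obs, candidates):
    """Fit every candidate model and return the name, parameters, and mean
    squared residual of the best-fitting one."""
    best = (None, None, np.inf)
    for name, (fn, x0, bounds) in candidates.items():
        fit = least_squares(lambda p: fn(th_obs, *p) - dist_obs, x0, bounds=bounds)
        err = float(np.mean(fit.fun ** 2))
        if err < best[2]:
            best = (name, fit.x, err)
    return best

# Synthetic observations of a straight road (illustrative values only).
th = np.linspace(-1.2, 1.2, 121)
observed = straight_road_distance(th, slope=0.1, l_width=1.5, r_width=2.0)
candidates = {"straight": (straight_road_distance, [0.0, 1.0, 1.0],
                           ([-np.pi / 3, 0.1, 0.1], [np.pi / 3, 10.0, 10.0]))}
print(classify_road(th, observed, candidates))   # recovers the slope and widths
```

A curve model, for example one built from the ray-ellipse intersection sketched earlier, could be added to the candidates dictionary in the same way.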
  • the classification of intersections is a process of classifying the road observed by Visual Lidar using the formulas of a straight line, a right T-junction, a left T-junction, and a junction. More specifically, the classification of intersections is a process of determining whether the target road is a straight line, a right T-junction, a left T-junction, or a junction. FIGS. 29, 31, and 33 show examples of the bird's-eye view image and the distance from the center of the virtual lidar in the VLS plane to the boundary. FIGS. 30, 32, and 34 show the results estimated by the formulas of a straight line, a right T-junction, a left T-junction, and a junction.
  • FIG. 29 is a fifth explanatory diagram illustrating an example of the results of classification in the embodiment.
  • FIG. 29 shows that the shape of the road is a right T-shape.
  • FIG. 30 is a seventh explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 30 shows the results of classification for the road shown in FIG. 29.
  • “straight” indicates the result of estimation by the straight line formula, “Left insec” indicates the result of estimation by the left T-junction formula, “T insec” indicates the result of estimation by the junction formula, and “Right insec” indicates the result of estimation by the right T-junction formula.
  • FIG. 30 shows that the right T-junction formula is selected, and the degree of agreement between the estimation result and the observation result is high when the right T-junction formula is used. Therefore, FIG. 30, together with the result of FIG. 29, shows that the shape of the road was estimated with high accuracy.
  • FIG. 31 is an eighth explanatory diagram illustrating an example of the results of classification in the embodiment.
  • FIG. 31 shows that the shape of the road is a left T-shape.
  • FIG. 32 is a ninth explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 32 shows the results of classification for the road shown in FIG. 31.
  • “straight” indicates the result of estimation by the straight line formula, “Left insec” indicates the result of estimation by the left T-junction formula, “T insec” indicates the result of estimation by the junction formula, and “Right insec” indicates the result of estimation by the right T-junction formula.
  • FIG. 32 shows that the left T-junction formula is selected, and the degree of agreement between the estimation result and the observation result is high when the left T-junction formula is used. Therefore, FIG. 32, together with the result of FIG. 31, shows that the shape of the road was estimated with high accuracy.
  • FIG. 33 is a tenth explanatory diagram illustrating an example of the result of classification in the embodiment.
  • FIG. 33 shows that the shape of the road is the shape of a junction.
  • FIG. 34 is an eleventh explanatory diagram illustrating an example of the results of classification in the embodiment. More specifically, FIG. 34 shows the results of classification for the road shown in FIG. 33.
  • “straight” indicates the result of estimation by the straight line formula, “Left insec” indicates the result of estimation by the left T-junction formula, “T insec” indicates the result of estimation by the junction formula, and “Right insec” indicates the result of estimation by the right T-junction formula.
  • FIG. 34 shows that the equation of the junction is selected, and the degree of agreement between the estimation result and the observation result is high when the equation of the junction is used. Therefore, FIG. 34, together with the result of FIG. 33, shows that the shape of the road was estimated with high accuracy.
  • the horizontal axis of the graph G1 of FIG. 26, the graph G3 of FIG. 28, the graph G5 of FIG. 30, the graph G7 of FIG. 32, and the graph G9 of FIG. 34 represents the line-of-sight angle, and the vertical axis represents the distance.
  • the horizontal axis of the graph G2 of FIG. 26, the graph G4 of FIG. 28, the graph G6 of FIG. 30, the graph G8 of FIG. 32, and the graph G10 of FIG. 34 represents the coordinate value of the x-axis of the VLS plane, and the vertical axis represents the coordinate value of the y-axis of the VLS plane.
  • the autonomous mobile control device 1 of the embodiment configured in this way determines the condition for minimizing the error by using the reference area information based on the area boundary distance information, and acquires the target information from the determined condition. Therefore, the autonomous mobile body control device 1 configured in this way can improve the accuracy of the movement of the autonomous mobile body 9.
  • When a part of the subject is a dynamic obstacle, the road area may not be correctly grasped.
  • the region may be estimated by performing fitting (that is, error minimization processing) that ignores dynamic obstacles.
  • FIG. 35 is a diagram showing an example of the execution result of fitting (specifically, error minimization processing) in which a dynamic obstacle is ignored by the autonomous mobile control device 1 when a part of the subject in the modified example is a dynamic obstacle.
  • the horizontal axis of FIG. 35 represents the line-of-sight angle, and the vertical axis of FIG. 35 represents the distance.
  • “Deleted data” in FIG. 35 is an example of the measurement result of Visual Lidar for a dynamic obstacle.
  • “True data” in FIG. 35 is data of a subject that is not a dynamic obstacle. That is, unlike “deleted data”, the data is not ignored in the error minimization process.
  • “Estimated curve” in FIG. 35 is an example of the result of fitting (that is, error minimization processing) using only the “true data” while ignoring the dynamic obstacle.
  • FIG. 35 shows that the region is appropriately estimated by the autonomous mobile controller 1 even when the dynamic obstacle is ignored.
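  • The fitting that ignores dynamic obstacles can be sketched as an ordinary least-squares fit restricted to the samples that are not flagged as dynamic obstacles (the “true data” of FIG. 35); the flagged samples (“deleted data”) contribute nothing to the error being minimized. The function below is a minimal illustration under that reading; the model function and its initial parameters (for example the straight-road function sketched earlier) and the use of scipy are assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_ignoring_dynamic_obstacles(th, dist, is_dynamic, model_fn, x0):
    """Fit `model_fn` to the area boundary distances, using only the samples
    whose is_dynamic flag is False."""
    th = np.asarray(th, dtype=float)
    dist = np.asarray(dist, dtype=float)
    keep = ~np.asarray(is_dynamic, dtype=bool)          # the "true data" samples
    fit = least_squares(lambda p: model_fn(th[keep], *p) - dist[keep], x0)
    return fit.x                                        # estimated road-shape parameters
```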
  • Reference area information using a plurality of reference area expression functions is used.
  • FIG. 36 is a flowchart showing an example of the flow of processing executed by the autonomous mobile control device 1 when a part of the subject in the modified example is a dynamic obstacle.
  • the progress status information acquisition unit acquires the progress status information (step S301).
  • the area boundary distance information acquisition unit 102 acquires the image to be processed (step S302).
  • the area boundary distance information acquisition unit 102 reads from the storage unit 13 a segmentation model, which is a trained model recorded in advance in the storage unit 13 and used to determine which of the predetermined categories each pixel belongs to (step S303).
  • Predetermined categories include at least dynamic obstacles.
  • the area boundary distance information acquisition unit 102 acquires the pixel values of the pixels centered on the target pixel, which is a pixel of the image to be processed selected according to a predetermined rule (step S304). Next, the area boundary distance information acquisition unit 102 determines the category to which the target pixel belongs by using the segmentation model (step S305). Next, the category to which the target pixel belongs is recorded in the storage unit 13 (step S306). When it is determined in step S305 that the category to which the target pixel belongs is a dynamic obstacle, the dynamic obstacle is recorded in the storage unit 13 as the category to which the target pixel belongs. When a category other than the dynamic obstacle (hereinafter referred to as "category A") is determined in step S305 as the category to which the target pixel belongs, category A is recorded in the storage unit 13 as the category to which the target pixel belongs.
  • After step S306, the area boundary distance information acquisition unit 102 determines whether or not the category has been determined for all the pixels (step S307). When there is a pixel for which the category has not yet been determined (step S307: NO), the area boundary distance information acquisition unit 102 selects the next target pixel according to a predetermined rule (step S308). The next target pixel is, for example, the pixel next to the current target pixel. After step S308, the process returns to step S304.
  • When the category has been determined for all the pixels (step S307: YES), the area boundary distance information acquisition unit 102 executes the boundary pixel information acquisition process (step S309).
  • Next, the area boundary distance information acquisition unit 102 acquires the values of the pixels of the image to be processed other than the pixels whose category was determined to be a dynamic obstacle in the process of step S305 (step S310).
  • Next, the area boundary distance information acquisition unit 102 executes the distance mapping process using the values acquired in step S310 (step S311). Therefore, in the process of step S311, the values of the pixels whose category was determined to be a dynamic obstacle by the process of step S305 are not used.
  • Next, the area boundary distance information acquisition unit 102 executes a virtual space distance measurement process using the result of step S311 (step S312). By executing the process of step S312, the area boundary distance information is obtained.
  • the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information of the dynamic obstacle. Deleting the information of the dynamic obstacle means not using the value of the pixel determined to belong to the dynamic obstacle, and specifically means the processing of step S310.
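  • A minimal sketch of deleting the dynamic-obstacle information (steps S304 to S310): classify every pixel with a trained segmentation model and keep only the pixels of the other categories before the distance mapping process. The category id, the (H, W, 3) image layout, and the segment_fn interface are hypothetical and only stand in for the segmentation model read in step S303.

```python
import numpy as np

DYNAMIC_OBSTACLE = 0   # hypothetical category id; the remaining ids are "category A" classes

def delete_dynamic_obstacle_pixels(image, segment_fn):
    """Assign a category to every pixel of `image` (assumed shape (H, W, 3))
    with the per-pixel classifier `segment_fn`, then mask out the pixels whose
    category is the dynamic obstacle so that later steps do not use them."""
    categories = segment_fn(image)                      # (H, W) array of category ids
    keep = categories != DYNAMIC_OBSTACLE               # True where the pixel may be used
    masked = np.where(keep[..., None], image, np.nan)   # obstacle pixels become NaN
    return masked, keep
```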
  • the reference area information acquisition unit 103 acquires the reference area information stored in the storage unit 13 (step S313). More specifically, the reference area information acquisition unit 103 reads out one or more reference area expression functions stored in the storage unit 13. Next, the target information acquisition unit 104 executes an error minimization process using the reference area information and the area boundary distance information acquired in step S312 (step S314).
  • the error minimization process is a process of determining a condition for minimizing an error, which is a difference between the reference area information obtained in step S313 and the area boundary distance information obtained in step S312.
  • the target information acquisition unit 104 executes the target information acquisition process (step S315).
  • the control signal generation unit 105 generates a control signal for controlling the operation of the autonomous mobile body 9 based on the target information, and controls the operation of the autonomous mobile body 9 by the generated control signal (step S316).
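  • The patent does not spell out the control law at this point, but purely as an illustration of how target information (for example an estimated road slope and left/right widths) could be turned into a control signal, the following hypothetical proportional rule steers the autonomous mobile body 9 back toward the road centre. The function name, sign conventions, and gains are all assumptions.

```python
def steering_command(slope, l_width, r_width, k_heading=1.0, k_offset=0.5):
    """Hypothetical steering rule: positive output means turn left
    (counter-clockwise).  `slope` is the estimated road heading relative to the
    body's forward axis (positive = road points to the left); the width terms
    push the body back toward the centre of the road."""
    lateral_error = (l_width - r_width) / 2.0   # > 0 when the body sits right of centre
    return k_heading * slope + k_offset * lateral_error
```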
  • the error minimization process executed in step S314 is the fitting that ignores dynamic obstacles, described with reference to FIG. 35.
  • the fitting that ignores the dynamic obstacle means the fitting that does not use the data indicating the distance to the dynamic obstacle in the area boundary distance information.
  • In this way, the target information acquisition unit 104 acquires the target information by executing an error minimization process for determining a condition for minimizing an error, which is a difference between the reference area information and the area boundary distance information.
  • Reference area information is used in the error minimization process.
  • the reference area information is information using one or more reference area expression functions which are functions representing the position, orientation, and shape of the target area and have one or a plurality of parameters.
  • the area boundary distance information is information indicating the distance from the autonomous mobile body 9 to each position on the boundary of the target area.
  • As described above, the area boundary distance information acquisition unit 102 acquires the area boundary distance information after deleting the information of the dynamic obstacle, and the target information acquisition unit 104 acquires the target information by executing the error minimization process for determining the condition for minimizing the error, which is the difference between the reference area information and the area boundary distance information.
  • FIG. 37 is a flowchart showing an example of the flow of generating the segmentation model in the modified example. Before explaining the flowchart, an outline of generating a segmentation model will be given.
  • the segmentation model is obtained by updating, by a machine learning method, a mathematical model prepared in advance (hereinafter referred to as the "learning stage model") that estimates the category to which each pixel of an input image belongs based on that image.
  • The trained mathematical model obtained as a result of this updating is the segmentation model.
  • a mathematical model is a set that includes one or more processes in which the conditions and order of execution (hereinafter referred to as "execution rules") are predetermined. For the sake of simplicity of the explanation below, updating a mathematical model by a machine learning method is called learning. Further, updating the mathematical model means appropriately adjusting the values of the parameters included in the mathematical model. Further, the execution of the mathematical model means that each process included in the mathematical model is executed according to the execution rule.
  • the learning stage model may be configured in any way as long as it is a mathematical model updated by a machine learning method.
  • the learning stage model is composed of, for example, a neural network.
  • the learning stage model may be composed of a neural network including, for example, a convolutional neural network.
  • the learning stage model may be composed of a neural network including, for example, an autoencoder.
  • the training sample used for learning the learning stage model is the paired data of the image and the annotation indicating the category to which each pixel of the image belongs.
  • the loss function used to update the learning stage model is a function whose value indicates the difference between the annotation and the category of each pixel estimated based on the input image.
  • Annotations are, for example, data expressed in tensors.
  • Updating the training stage model means updating the values of the parameters included in the training stage model according to a predetermined rule so as to reduce the value of the loss function.
  • the training sample is input to the learning stage model (step S401).
  • Next, the category is estimated for each pixel of the image included in the input training sample (step S402).
  • Next, the values of the parameters included in the learning stage model are updated so as to reduce the value of the loss function (step S403).
  • Updating the values of the parameters included in the learning stage model means that the learning stage model is updated.
  • Next, it is determined whether or not a predetermined end condition (hereinafter referred to as the "learning end condition") is satisfied (step S404).
  • the learning end condition is, for example, a condition that a predetermined number of updates have been performed.
  • When the learning end condition is satisfied (step S404: YES), the learning stage model is recorded in the storage unit 13 as the segmentation model (step S405). On the other hand, when the learning end condition is not satisfied (step S404: NO), the process returns to step S402. Depending on the learning algorithm, the process may instead return to step S401, and a new training sample is input to the learning stage model.
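  • A minimal sketch of the learning stage (steps S401 to S405), assuming PyTorch: feed image/annotation pairs to the learning stage model, compute a per-pixel loss whose value indicates the difference between the annotation and the estimated categories, update the parameters, and stop after a predetermined number of updates. The network, the data loader, and the hyperparameters are supplied by the caller and are not specified by the patent.

```python
import torch
import torch.nn as nn

def train_segmentation_model(model, loader, num_updates, lr=1e-3, device="cpu"):
    """`loader` is assumed to yield (images, annotations) with images of shape
    (N, 3, H, W) and annotations of shape (N, H, W) holding per-pixel category ids."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                  # per-pixel classification loss
    updates = 0
    while updates < num_updates:                     # learning end condition
        for images, annotations in loader:
            images = images.to(device)
            annotations = annotations.to(device)
            logits = model(images)                   # (N, C, H, W) category scores
            loss = loss_fn(logits, annotations)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                         # one update of the learning stage model
            updates += 1
            if updates >= num_updates:
                break
    return model                                     # recorded as the segmentation model
```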
  • the experiment was aimed at estimating parameters on a straight road. Specifically, there were three parameters: the slope of the road, the width of the road on the right, and the width of the road on the left.
  • Three experiments, the first to the third, were carried out under different conditions.
  • the first experiment was an outdoor experiment using a monocular camera as a photographing device 902.
  • the experiment was conducted at two places, the first outdoor and the second outdoor.
  • the second experiment was an indoor experiment using a monocular camera as the photographing apparatus 902.
  • the experiment was conducted in two places, the first indoor and the second indoor.
  • the third experiment was an indoor experiment using 2DLiDAR (2-dimensional Light Detection And Ranging) as the photographing apparatus 902.
  • the inclinations were -30 degrees, -20 degrees, -10 degrees, 0 degrees, 10 degrees, 20 degrees, and 30 degrees.
  • The road width parameters were the left and right road widths, which were measured during the experiment.
  • FIG. 38 is a diagram showing the experimental environment of the first experiment conducted outdoors in the modified example.
  • FIG. 38 shows a first outdoor photograph.
  • FIG. 39 is a first diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 39 shows the result of projecting the result of segmentation on the image of FIG. 38 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 40 is a second diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 40 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 41 is a third diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 41 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 42 is a diagram showing the experimental environment of the first experiment conducted outdoors in the modified example.
  • FIG. 42 shows a second outdoor photograph.
  • FIG. 43 is a first diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 43 shows the result of projecting the result of segmentation on the image of FIG. 42 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 44 is a second diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 44 shows that the shape of the road can be appropriately estimated using the observed values in the VLS plane.
  • FIG. 45 is a third diagram showing the results of the first experiment conducted outdoors in the modified example.
  • FIG. 45 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 46 is a diagram showing the experimental environment of the second experiment conducted indoors in the first modified example.
  • FIG. 46 shows a first indoor photograph.
  • FIG. 47 is a first diagram showing the results of a second experiment conducted indoors in the first modified example.
  • FIG. 47 shows the result of projecting the result of segmentation on the image of FIG. 46 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 48 is a second diagram showing the results of the second experiment conducted indoors in the first modified example.
  • FIG. 48 shows that the shape of the road can be properly estimated using the observed values in the VLS plane.
  • FIG. 49 is a third diagram showing the results of the second experiment conducted indoors in the first modified example.
  • FIG. 49 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 50 is a diagram showing the experimental environment of the second experiment performed indoors in the modified example.
  • FIG. 50 shows a second indoor photograph.
  • FIG. 51 is a first diagram showing the results of a second experiment conducted indoors in a modified example.
  • FIG. 51 shows the result of projecting the result of segmentation on the image of FIG. 50 onto a bird's-eye view, and the visual field boundary distance.
  • FIG. 52 is a second diagram showing the results of a second experiment conducted indoors in the second modified example.
  • FIG. 52 shows that the shape of the road can be appropriately estimated by using the observed values in the VLS plane.
  • FIG. 53 is a third diagram showing the results of the second experiment conducted indoors in the second modified example.
  • FIG. 53 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values.
  • FIG. 54 is a diagram showing the experimental environment of the third experiment in the modified example.
  • FIG. 54 shows a photograph of the place where the third experiment was performed.
  • 2DLiDAR was used as the photographing apparatus 902.
  • FIG. 55 is the first diagram showing the results of the third experiment in the modified example.
  • FIG. 55 shows that the shape of the road can be properly estimated using the observed values in the VLS plane. This is because there is a wall at the boundary of the road area.
  • FIG. 56 is a second diagram showing the results of the third experiment in the modified example.
  • FIG. 56 shows in a bird's-eye view that the shape of the road can be appropriately estimated using the observed values. This is because there is a wall at the boundary of the road area.
  • FIG. 57 is a first diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first and second experiments.
  • FIG. 57 shows that, in the outdoor experiments, the average error was 7.08 degrees for the inclination, 0.670 meters for the left road width, and 0.634 meters for the right road width.
  • FIG. 57 shows that, in the indoor experiments, the average error was 6.41 degrees for the inclination, 0.363 meters for the left road width, and 0.356 meters for the right road width.
  • FIG. 58 is a second diagram showing the accuracy of the measurement results of the inclination and the road width obtained based on the experimental results of the first and second experiments.
  • FIG. 58 is a result of standardizing the result of FIG. 57.
  • The reference inclination range used for standardization was -60 to 60 degrees, and the reference road width was 4.0 meters in the outdoor experiments and 1.92 meters in the indoor experiments.
  • FIG. 58 shows that, in the outdoor experiments, the error rate was 5.9 percent for the inclination, 17.6 percent for the left road width, and 16.8 percent for the right road width.
  • FIG. 58 shows that, in the indoor experiments, the error rate was 5.34 percent for the inclination, 9.57 percent for the left road width, and 9.47 percent for the right road width.
  • FIG. 59 is a diagram showing the experimental environment of the control experiment.
  • 2DLiDAR was used as the photographing apparatus 902.
  • FIG. 59 is an image of a photograph showing the experimental environment of the control experiment. As shown in FIG. 59, the control experiment was performed in an outdoor environment similar to the first experiment performed outdoors.
  • FIG. 60 is a diagram showing an example of the experimental results of the control experiment.
  • the horizontal axis of the figure indicates the viewing angle [°], and the vertical axis of the figure indicates the distance.
  • the graph in the figure is the result of measurement by 2DLiDAR in the control experiment. The measurement result in the area where no reflector was present was recorded as 0 meters.
  • the area boundary distance information does not necessarily have to be acquired from the photographing apparatus 902.
  • the area boundary distance information may be acquired from an information processing device communicably connected via a network, for example a management device such as a server on the network.
  • the image to be processed does not necessarily have to be acquired from the photographing apparatus 902.
  • the image to be processed may be acquired from an information processing device communicably connected via a network, for example a management device such as a server on the network.
  • the autonomous mobile control device 1 may be mounted by using a plurality of information processing devices that are communicably connected via a network.
  • each functional unit included in the autonomous mobile control device 1 may be distributed and mounted in a plurality of information processing devices.
  • the program may be recorded on a computer-readable recording medium.
  • the computer-readable recording medium is, for example, a flexible disk, a magneto-optical disk, a portable medium such as a ROM or a CD-ROM, or a storage device such as a hard disk built in a computer system.
  • the program may be transmitted over a telecommunication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
PCT/JP2021/035631 2020-09-30 2021-09-28 自律移動体制御装置、自律移動体制御方法及びプログラム WO2022071315A1 (ja)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022554008A JPWO2022071315A1 (pt) 2020-09-30 2021-09-28

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-165960 2020-09-30
JP2020165960 2020-09-30

Publications (1)

Publication Number Publication Date
WO2022071315A1 true WO2022071315A1 (ja) 2022-04-07

Family

ID=80950378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/035631 WO2022071315A1 (ja) 2020-09-30 2021-09-28 自律移動体制御装置、自律移動体制御方法及びプログラム

Country Status (2)

Country Link
JP (1) JPWO2022071315A1 (pt)
WO (1) WO2022071315A1 (pt)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003170760A (ja) * 2001-12-07 2003-06-17 Hitachi Ltd 車両用走行制御装置及び地図情報データ記録媒体
JP2005010891A (ja) * 2003-06-17 2005-01-13 Nissan Motor Co Ltd 車両用道路形状認識装置
JP2005332204A (ja) * 2004-05-20 2005-12-02 Univ Waseda 移動制御装置、環境認識装置及び移動体制御用プログラム
JP2009259215A (ja) * 2008-03-18 2009-11-05 Zenrin Co Ltd 路面標示地図生成方法
JP2012008999A (ja) * 2010-05-26 2012-01-12 Mitsubishi Electric Corp 道路形状推定装置及びコンピュータプログラム及び道路形状推定方法
JP2016224593A (ja) * 2015-05-28 2016-12-28 アイシン・エィ・ダブリュ株式会社 道路形状検出システム、道路形状検出方法及びコンピュータプログラム
WO2017056247A1 (ja) * 2015-09-30 2017-04-06 日産自動車株式会社 走行制御方法および走行制御装置
JP2018200501A (ja) * 2017-05-25 2018-12-20 日産自動車株式会社 車線情報出力方法および車線情報出力装置
JP2019078562A (ja) * 2017-10-20 2019-05-23 トヨタ自動車株式会社 自車位置推定装置
WO2020075861A1 (ja) * 2018-10-12 2020-04-16 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 三次元データ符号化方法、三次元データ復号方法、三次元データ符号化装置、及び三次元データ復号装置
JP2020076580A (ja) * 2018-11-05 2020-05-21 トヨタ自動車株式会社 軸ずれ推定装置

Also Published As

Publication number Publication date
JPWO2022071315A1 (pt) 2022-04-07

Similar Documents

Publication Publication Date Title
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN111486855B (zh) 一种具有物体导航点的室内二维语义栅格地图构建方法
Zhang et al. Low-drift and real-time lidar odometry and mapping
WO2021022615A1 (zh) 机器人探索路径生成方法、计算机设备和存储介质
CN112525202A (zh) 一种基于多传感器融合的slam定位导航方法及系统
JP2020030204A (ja) 距離測定方法、プログラム、距離測定システム、および可動物体
CN111492403A (zh) 用于生成高清晰度地图的激光雷达到相机校准
US8896660B2 (en) Method and apparatus for computing error-bounded position and orientation of panoramic cameras in real-world environments
CN107491070A (zh) 一种移动机器人路径规划方法及装置
Xu et al. SLAM of Robot based on the Fusion of Vision and LIDAR
Xiao et al. 3D point cloud registration based on planar surfaces
CN110260866A (zh) 一种基于视觉传感器的机器人定位与避障方法
CN116349222B (zh) 利用集成图像帧渲染基于深度的三维模型
Kim et al. As-is geometric data collection and 3D visualization through the collaboration between UAV and UGV
WO2022127572A9 (zh) 机器人三维地图位姿显示方法、装置、设备及存储介质
KR101319526B1 (ko) 이동 로봇을 이용하여 목표물의 위치 정보를 제공하기 위한 방법
WO2023088127A1 (zh) 室内导航方法、服务器、装置和终端
WO2022071315A1 (ja) 自律移動体制御装置、自律移動体制御方法及びプログラム
JP6603993B2 (ja) 画像処理装置、画像処理方法、画像処理システム、及びプログラム
Martinez et al. Map-based lane identification and prediction for autonomous vehicles
Sharma et al. Image Acquisition for High Quality Architectural Reconstruction.
WO2022172831A1 (ja) 情報処理装置
Arukgoda Vector Distance Transform Maps for Autonomous Mobile Robot Navigation
KR20230017088A (ko) 영상 좌표의 불확실성을 추정하는 장치 및 방법
CN115200601A (zh) 一种导航方法、装置、轮式机器人及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21875616

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022554008

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21875616

Country of ref document: EP

Kind code of ref document: A1