WO2021014846A1 - Information processing device, data generation method, and non-transitory computer-readable medium storing a program - Google Patents
- Publication number
- WO2021014846A1 (application PCT/JP2020/024062)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- estimation
- learning
- likelihood
- point cloud
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Description
- This disclosure relates to an information processing device, a data generation method, and a program.
- A three-dimensional LIDAR (Light Detection and Ranging) sensor is a sensor used to acquire information on the surrounding environment, including its shape.
- The three-dimensional LIDAR sensor is used, for example, for automated driving control or robot control. In automated driving control, the three-dimensional LIDAR sensor is used to acquire information on obstacles or road surfaces around the vehicle.
- Patent Document 1 discloses measuring the distance to an object such as another vehicle or a pedestrian by using a LIDAR sensor mounted on a vehicle.
- A three-dimensional LIDAR sensor, including the LIDAR sensor disclosed in Patent Document 1, measures the distance to an irradiated object by detecting the reflected light of the light emitted to the surroundings. Further, the three-dimensional LIDAR sensor acquires the shape of surrounding objects, surrounding environment information, and the like by collectively holding the distance information of each measurement point as point cloud data. However, because it relies on the reflection of light, the three-dimensional LIDAR sensor may fail to properly detect the reflected light of the emitted light.
- For example, when the emitted light is totally reflected, the three-dimensional LIDAR sensor may not receive the reflected light. Further, as the distance to the irradiated object increases, the intensity of the reflected light weakens.
- As a result, the point cloud data acquired by the three-dimensional LIDAR sensor may contain unreliable data due to defects or the like.
- An object of the present disclosure is to provide an information processing device, a data generation method, and a program capable of determining the accuracy of data acquired by using a LIDAR sensor.
- The information processing device according to the present disclosure includes: a learning unit that learns, as training data, learning imaging data acquired by an imaging sensor and the likelihood of the distance of each point included in correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data, and thereby generates a trained model; and an estimation unit that, by using the trained model, generates estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by the imaging sensor.
- The data generation method according to the present disclosure includes: learning, as training data, learning imaging data and the likelihood of the distance of each point included in correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data, thereby generating a trained model; and, by using the trained model, generating estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
- The program according to the present disclosure causes a computer to execute: learning, as training data, learning imaging data and the likelihood of the distance of each point included in correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data, thereby generating a trained model; and, by using the trained model, generating estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
- According to the present disclosure, it is possible to provide an information processing device, a data generation method, and a program capable of determining the accuracy of data acquired by using a LIDAR sensor.
- FIG. 1 is a block diagram of the information processing device according to the first embodiment.
- FIG. 2 is a block diagram of the information processing device according to the second embodiment.
- FIG. 3 is a diagram for explaining an outline of the learning process according to the second embodiment.
- FIG. 4 is a diagram for explaining an outline of the integration process according to the second embodiment.
- FIG. 5 is a diagram showing a flow of the process of generating reliability-attached three-dimensional point cloud data according to the second embodiment.
- FIG. 6 is a diagram for explaining an outline of the learning process according to the third embodiment.
- FIG. 7 is a diagram for explaining an outline of the integration process according to the third embodiment.
- FIG. 8 is a diagram for explaining details of the integration process according to the third embodiment.
- FIG. 9 is a block diagram of an information processing device according to each embodiment.
- The information processing device 10 may be a computer device that operates by a processor executing a program stored in a memory.
- The information processing device 10 has an imaging sensor 11, a learning unit 12, and an estimation unit 13.
- The imaging sensor 11 generates imaging data of an object to be photographed or a region to be photographed.
- The imaging data may be paraphrased as image data.
- The imaging sensor 11 may be, for example, a sensor that acquires image data, such as a visible-light camera, a depth camera, an infrared camera, or a multispectral camera. Further, the imaging sensor 11 may be configured using a single camera or a plurality of cameras.
- The imaging sensor 11 may also be referred to as, for example, an image sensor or an imaging element.
- The learning unit 12 and the estimation unit 13 may be software or modules whose processing is executed by a processor executing a program stored in a memory.
- Alternatively, the learning unit 12 and the estimation unit 13 may be hardware such as a circuit or a chip.
- The imaging sensor 11 generates learning imaging data and estimation imaging data.
- The learning imaging data is data used as input data or training data of a model used for machine learning.
- The estimation imaging data is used to estimate the likelihood of the distance of points in the estimation three-dimensional point cloud data corresponding to a specific region in the estimation imaging data.
- The estimation three-dimensional point cloud data is three-dimensional point cloud data associated with a region or pixel included in the estimation imaging data. That is, the estimation three-dimensional point cloud data is three-dimensional point cloud data determined based on a region or pixel included in the estimation imaging data.
- The learning imaging data and the estimation imaging data are, for example, image data including objects, landscapes, and the like.
- The learning unit 12 learns, as training data, the learning imaging data and the likelihood of the distance of each point included in the correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data, and generates a trained model.
- The correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data may be data in which a likelihood is assigned to each point of three-dimensional point cloud data generated by using a sensor different from the imaging sensor to acquire information on the same region as the region photographed by the imaging sensor.
- The sensor different from the imaging sensor may be, for example, a ranging sensor.
- The ranging sensor may be, for example, a LIDAR sensor or a three-dimensional LIDAR sensor.
- The three-dimensional point cloud data may be, for example, data indicating the distance from the ranging sensor to each point included in the three-dimensional point cloud data, the direction of each point with the ranging sensor as a base point, and the like.
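As a concrete illustration, per-point data of this form (a direction with the ranging sensor as a base point, plus a measured distance) can be converted into Cartesian coordinates with the sensor at the origin. The record layout and the spherical-coordinate convention below are illustrative assumptions, not part of the disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class CloudPoint:
    # One point of three-dimensional point cloud data: the direction of the
    # point with the ranging sensor as the base point, and the measured distance.
    azimuth: float    # horizontal angle in radians (assumed convention)
    elevation: float  # vertical angle in radians (assumed convention)
    distance: float   # measured distance in meters

    def to_xyz(self):
        # Convert the (direction, distance) representation to Cartesian
        # coordinates with the ranging sensor at the origin.
        x = self.distance * math.cos(self.elevation) * math.cos(self.azimuth)
        y = self.distance * math.cos(self.elevation) * math.sin(self.azimuth)
        z = self.distance * math.sin(self.elevation)
        return (x, y, z)

point = CloudPoint(azimuth=0.0, elevation=0.0, distance=5.0)
print(point.to_xyz())  # a point 5 m straight ahead of the sensor
```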
- The correct-answer three-dimensional point cloud data of a region substantially the same as the region included in the learning imaging data includes, for example, data of the same objects as stationary objects, such as buildings, roads, and plants, or objects with little movement, that are included in the learning imaging data.
- The correct-answer three-dimensional point cloud data covers substantially the same region as the region included in the learning imaging data, and may be acquired at substantially the same timing as the learning imaging data.
- In addition to stationary objects or objects with little movement, the correct-answer three-dimensional point cloud data may include data of the same objects as moving objects, such as people and cars, that are included in the learning imaging data.
- The correct-answer three-dimensional point cloud data may be generated using, for example, a ranging sensor built into or attached to the information processing device 10.
- Alternatively, the correct-answer three-dimensional point cloud data may be data generated by a device different from the information processing device 10.
- The information processing device 10 may acquire the data generated by the different device via a network.
- Alternatively, the information processing device 10 may acquire the data generated by the different device via a recording medium or the like.
- The trained model may be, for example, a model to which parameters determined by learning the learning imaging data and the likelihood of the distance of each point included in the correct-answer three-dimensional point cloud data are applied. That is, the learning unit 12 determines the parameters of the model by learning the learning imaging data and the likelihood of the distance of the points included in the correct-answer three-dimensional point cloud data.
- The learning may be, for example, machine learning such as deep learning using a convolutional neural network.
- The estimation unit 13 uses the trained model generated by the learning unit 12 to generate, from the estimation imaging data acquired by the imaging sensor 11, estimation data including the likelihood of the distance of points included in the estimation three-dimensional point cloud data.
- The learning unit 12 uses the likelihood of the distance of each point included in the correct-answer three-dimensional point cloud data as correct-answer data. It is assumed that each point included in the correct-answer three-dimensional point cloud data is associated with a region or pixel included in the learning imaging data.
- The estimation unit 13 inputs the estimation imaging data acquired by the imaging sensor 11 into the trained model generated by the learning unit 12, and thereby outputs the likelihood of the distance of the points included in the estimation three-dimensional point cloud data.
- The data output from the trained model corresponds to the estimation data.
- In this way, the information processing device 10 can generate estimation data including the likelihood of the distance of each point associated with a region or pixel of the estimation imaging data acquired by the imaging sensor 11.
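The role of the estimation unit 13 can be sketched as a function that applies a trained model to the estimation imaging data and reads out a likelihood for each point associated with a pixel. The stand-in "model" below is only a placeholder (a brightness-based mapping), not the learned mapping of the disclosure, and all names are hypothetical:

```python
import numpy as np

def estimate_point_likelihoods(trained_model, image, point_pixels):
    # Return the estimated distance likelihood for each point of the
    # estimation 3D point cloud, where point_pixels maps each point to the
    # (row, col) pixel of the estimation imaging data it is associated with.
    likelihood_map = trained_model(image)  # per-pixel likelihood in [0, 1]
    return np.array([likelihood_map[r, c] for r, c in point_pixels])

# Placeholder for a trained model: dark pixels simply get low likelihood.
# A real model would carry parameters determined by learning.
toy_model = lambda img: np.clip(img / 255.0, 0.0, 1.0)

image = np.array([[255.0, 0.0],
                  [128.0, 255.0]])
likelihoods = estimate_point_likelihoods(toy_model, image, [(0, 0), (0, 1)])
print(likelihoods)
```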
- In the above, the configuration in which the information processing device 10 includes the imaging sensor 11 and the learning unit 12 has been described, but at least one of the imaging sensor 11 and the learning unit 12 may be provided in a device different from the information processing device 10.
- For example, the imaging sensor 11 may be provided in a car or the like.
- The data acquired by the imaging sensor 11 provided in a car or the like may be recorded in a recording device in the information processing device 10, or may be stored in a device different from the information processing device 10.
- The recording device may be, for example, an SSD (Solid State Drive) or an HDD (Hard Disk Drive).
- A learning device that includes the learning unit 12 and is different from the information processing device 10 may perform learning using the data recorded in the recording device and generate a trained model.
- In this case, the information processing device 10 can perform desired information processing by using the trained model generated by the learning device.
- As a result, the processing load of the information processing device 10 can be reduced.
- The information processing device 20 includes an imaging sensor 11, a learning unit 12, an estimation unit 13, a LIDAR sensor 21, and an integration unit 22.
- The imaging sensor 11, the learning unit 12, and the estimation unit 13 are the same as those in FIG. 1, and detailed description thereof will be omitted.
- The components constituting the information processing device 20 may be software or modules whose processing is executed by a processor executing a program stored in a memory. Alternatively, the components may be hardware such as a circuit or a chip.
- The LIDAR sensor 21 acquires learning three-dimensional point cloud data and measurement three-dimensional point cloud data. Acquiring may be paraphrased as measuring, collecting, generating, and the like.
- The region of the learning three-dimensional point cloud data includes the region captured in the learning imaging data.
- The LIDAR sensor 21 is attached to the information processing device 20 at a position where it can acquire point cloud data including the region that can be photographed by the imaging sensor 11.
- Alternatively, the LIDAR sensor 21 may be attached to the same object to which the imaging sensor 11 is attached.
- The object to which the imaging sensor 11 is attached may be, for example, a wall, a pole, a building, or the like.
- Further, the LIDAR sensor 21 and the imaging sensor 11 may be attached to a device or place different from the information processing device 20.
- In this case, the LIDAR sensor 21 and the imaging sensor 11 may be connected to the information processing device 20 via a cable or the like.
- Alternatively, the LIDAR sensor 21 and the imaging sensor 11 may be connected to the information processing device 20 via a wireless line.
- The LIDAR sensor 21 outputs the acquired learning three-dimensional point cloud data to the learning unit 12. Further, the LIDAR sensor 21 outputs the acquired measurement three-dimensional point cloud data to the integration unit 22.
- The estimation three-dimensional point cloud data is three-dimensional point cloud data determined based on a region or pixel included in the estimation imaging data, whereas the measurement three-dimensional point cloud data is three-dimensional point cloud data actually measured using the LIDAR sensor 21.
- Next, the learning process executed in the learning unit 12 will be described.
- The learning unit 12 uses the learning imaging data and the correct-answer three-dimensional point cloud data as training data.
- In the correct-answer three-dimensional point cloud data, a likelihood is assigned to each point of the learning three-dimensional point cloud data acquired by the LIDAR sensor 21 as correct-answer data.
- The likelihood of a point may be, for example, the likelihood of the distance from the LIDAR sensor 21 to the object.
- For example, a likelihood of 1 is set as correct-answer data for a point where the distance could be measured.
- A likelihood of 0 is set as correct-answer data for a point where the distance could not be measured, or for a point where the distance measurement result is discontinuous or isolated compared with the measurement results of surrounding points.
- The discontinuous or isolated point may be, for example, a point whose difference from the distances indicated by surrounding points is larger than a predetermined threshold value.
- Alternatively, the likelihood may be given a value between 0 and 1 according to the degree of accuracy or inaccuracy. For example, a likelihood of 1 may be set as correct-answer data for a point where the distance could be measured, 0 for a point where the distance could not be measured, and a value between 0 and 1 for a point where the distance measurement result is discontinuous or isolated compared with the measurement results of surrounding points. In this case, the accuracy increases as the likelihood value approaches 1.
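The labeling rule above (likelihood 1 for measured points, 0 for unmeasurable or discontinuous/isolated points) can be sketched for a one-dimensional scan of range readings. The NaN convention for unmeasurable points and the two-neighbour test are illustrative assumptions, not the method mandated by the disclosure:

```python
import numpy as np

def label_likelihoods(distances, threshold=2.0):
    # Assign correct-answer likelihoods to a 1-D scan of range readings.
    # np.nan marks points where no reflected light was detected (distance
    # unmeasurable); a point whose reading differs from every valid
    # neighbour by more than `threshold` is treated as discontinuous/isolated.
    n = len(distances)
    likelihood = np.ones(n)
    for i, d in enumerate(distances):
        if np.isnan(d):          # unmeasurable -> likelihood 0
            likelihood[i] = 0.0
            continue
        neighbours = [distances[j] for j in (i - 1, i + 1)
                      if 0 <= j < n and not np.isnan(distances[j])]
        if neighbours and all(abs(d - v) > threshold for v in neighbours):
            likelihood[i] = 0.0  # discontinuous or isolated -> likelihood 0
    return likelihood

scan = np.array([10.0, 10.2, np.nan, 30.0, 10.1, 10.0])
print(label_likelihoods(scan))  # the NaN point and the 30.0 outlier get 0
```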
- A case where the distance cannot be measured occurs, for example, when the light emitted from the LIDAR sensor 21 is totally reflected and the reflected light cannot be detected by the LIDAR sensor 21.
- A point where the distance cannot be measured is assumed to indicate, for example, a puddle. Further, a point where the distance measurement result is discontinuous or isolated compared with the measurement results of surrounding points is assumed to indicate reflected light reflected by rain or snow.
- The likelihood of each point may be set by human visual inspection. For example, by visual inspection, the likelihood of a point corresponding to a position where total reflection is likely to occur, such as a puddle, may be set to 0, and the likelihood of a point corresponding to a place where total reflection does not occur may be set to 1. Alternatively, the likelihood of each point may be set by matching precise three-dimensional structure information, such as a dynamic map or map data, with the correct-answer three-dimensional point cloud data.
- Point_1 to Point_N described in the correct-answer three-dimensional point cloud data in FIG. 3 indicate individual points and are associated with the imaging data.
- For example, the position of each pixel in the imaging data may be associated with each point in the correct-answer three-dimensional point cloud data.
- The learning unit 12 determines the parameters of the model used to estimate the likelihood of each point of the estimation three-dimensional point cloud data determined based on a region or pixel included in the estimation imaging data. In order to determine the parameters, the learning unit 12 performs learning using the learning imaging data and the correct-answer three-dimensional point cloud data as training data.
- The model whose parameters have been determined may be referred to as a trained model.
- The parameters may be, for example, weighting coefficients used in deep learning.
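As a minimal stand-in for such parameter determination, the toy below fits a single weight and bias by gradient descent on a logistic loss, mapping one scalar image feature per point to a distance likelihood. A real implementation would be a deep network such as a convolutional neural network; the data and feature choice here are purely hypothetical, meant only to show "parameters determined by learning":

```python
import numpy as np

# Toy training data: one scalar feature per point (for example, an intensity
# taken from the learning imaging data at the pixel associated with the point)
# and the correct-answer likelihood (1 = distance measured, 0 = unreliable).
features = np.array([0.10, 0.20, 0.15, 0.80, 0.85, 0.90])
targets = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

w, b = 0.0, 0.0  # the "parameters determined by learning"
learning_rate = 1.0
for _ in range(500):  # plain gradient descent on the logistic loss
    predictions = 1.0 / (1.0 + np.exp(-(w * features + b)))
    gradient = predictions - targets
    w -= learning_rate * np.mean(gradient * features)
    b -= learning_rate * np.mean(gradient)

trained = 1.0 / (1.0 + np.exp(-(w * features + b)))
print(np.round(trained, 2))  # low features get low likelihood, high get high
```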
- The learning unit 12 outputs the trained model to the estimation unit 13.
- Alternatively, the estimation unit 13 may acquire the trained model from the learning unit 12 each time the estimation process is executed.
- The estimation unit 13 generates point cloud likelihood estimation data including the likelihood of the distance of each point of the estimation three-dimensional point cloud data determined based on a region or pixel included in the estimation imaging data acquired by the imaging sensor 11. In the point cloud likelihood estimation data, for example, as in the correct-answer three-dimensional point cloud data in FIG. 3, each point may be associated with the estimated likelihood of its distance.
- The integration unit 22 receives the measurement three-dimensional point cloud data acquired by the LIDAR sensor 21 and the point cloud likelihood estimation data generated by the estimation unit 13.
- The measurement three-dimensional point cloud data included in the measurement data of FIG. 4 is acquired by the LIDAR sensor 21, and the estimation imaging data is acquired by the imaging sensor 11.
- The integration unit 22 assigns the likelihood of each point shown in the point cloud likelihood estimation data to each point of the measurement three-dimensional point cloud data, and generates reliability-attached three-dimensional point cloud data.
- The reliability-attached three-dimensional point cloud data may be, for example, point cloud data in which points whose likelihood is 0, that is, points whose data accuracy is assumed to be low, are clearly distinguished. Low-accuracy data can be rephrased as unreliable data.
- For example, points with low data accuracy may be surrounded by a figure such as a square.
- Alternatively, in the reliability-attached three-dimensional point cloud data, points with low data accuracy may be given a color different from the color of points with high data accuracy. That is, the reliability-attached three-dimensional point cloud data may be generated so that, when the likelihood threshold value is set to 1, points whose likelihood is lower than 1 and points whose likelihood is 1 can be distinguished.
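Attaching a display color by thresholding the likelihood, as described above, can be sketched as follows; the record layout and color names are illustrative assumptions:

```python
def attach_reliability(points, likelihoods, threshold=1.0):
    # Build reliability-attached point cloud records: keep each measured
    # point's coordinates and likelihood, and add a display color that makes
    # low-accuracy points (likelihood below the threshold) stand out.
    records = []
    for xyz, likelihood in zip(points, likelihoods):
        color = "red" if likelihood < threshold else "white"
        records.append({"xyz": xyz, "likelihood": likelihood, "color": color})
    return records

cloud = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5)]
annotated = attach_reliability(cloud, [0.0, 1.0])
print([record["color"] for record in annotated])  # low-likelihood point flagged
```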
- The reliability-attached three-dimensional point cloud data may be used as display data.
- First, the learning unit 12 assigns or sets the likelihood of the distance as correct-answer data to each point included in the learning three-dimensional point cloud data (S11).
- Alternatively, the learning unit 12 may acquire the correct-answer three-dimensional point cloud data, to which the likelihood has been assigned as correct-answer data, from another functional block, from another device different from the information processing device 20, or the like.
- Next, the learning unit 12 performs learning using the learning imaging data and the correct-answer three-dimensional point cloud data as input data (S12).
- Specifically, the learning unit 12 performs learning to determine the parameters of the model used to estimate the likelihood of each point of the estimation three-dimensional point cloud data determined based on a region or pixel included in the estimation imaging data.
- Next, the estimation unit 13 generates, from the estimation imaging data and using the trained model, point cloud likelihood estimation data in which the likelihood of each point included in the point cloud data is estimated (S13). It is assumed that the point cloud data including the points whose likelihood is estimated substantially coincides with the region indicated by the estimation imaging data.
- Next, the integration unit 22 assigns a likelihood to each point included in the measurement three-dimensional point cloud data by using the point cloud likelihood estimation data (S14).
- For example, the integration unit 22 generates the reliability-attached three-dimensional point cloud data so as to clearly distinguish high-accuracy data from low-accuracy data.
- As described above, the information processing device 20 can estimate, from the estimation imaging data, the likelihood of each point included in the estimation three-dimensional point cloud data determined based on substantially the same region as the estimation imaging data. Accordingly, the information processing device 20 can determine the accuracy of each point included in the measurement three-dimensional point cloud data acquired by the LIDAR sensor 21 by using the estimated likelihood of each point. A user or administrator operating the information processing device 20 can correct missing data or unreliable data by using information on the accuracy or reliability of the measurement three-dimensional point cloud data acquired by the LIDAR sensor 21. As a result, the information processing device 20 can perform sensing that is robust against disturbances caused by particles in the air, such as rain, snow, and dust.
- Consequently, the accuracy of a three-dimensional map, obstacle information, road surface information, and the like can be improved.
- Next, the learning process according to the third embodiment will be described. In the third embodiment as well, the process is executed using the information processing device 20 of FIG. 2.
- The learning unit 12 performs image recognition learning and likelihood estimation learning.
- In the image recognition learning, a model used for image recognition (hereinafter referred to as an image-recognition trained model) is generated by using the learning imaging data and the learning labeled data.
- In the likelihood estimation learning, a model used for likelihood estimation (hereinafter referred to as a likelihood-estimation trained model) is generated by using the learning labeled data and the correct-answer three-dimensional point cloud data.
- The label may be, for example, the name of each object displayed in the learning imaging data.
- For example, labels such as person, car, tree, and puddle may be given to the respective objects. Further, in the learning labeled data, a different color may be given to each labeled object to clarify the difference from other objects.
- Labels such as person, car, tree, and puddle are used as correct-answer data when generating the image-recognition trained model.
- In the image recognition, semantic segmentation may be executed. That is, the image-recognition trained model may be a model used for semantic segmentation.
- The image-recognition trained model may be a model used to generate estimation labeled data from the estimation imaging data acquired by the imaging sensor 11. Further, the likelihood-estimation trained model may be a model used to estimate, from the estimation labeled data, the likelihood of each point of the estimation three-dimensional point cloud data corresponding to the estimation labeled data. That is, the likelihood-estimation trained model may be a model used to generate, from the estimation labeled data, two-dimensional image data that distinguishes objects whose likelihood is set to 1 from objects whose likelihood is set to 0 (hereinafter referred to as likelihood estimation image data).
- In the likelihood estimation image data, in order to show that the likelihood of the position of a puddle is set to 0 and the likelihood of the other positions is set to 1, different colors may be given to the position of the puddle and the other positions. Further, the value set as the likelihood may be any value between 0 and 1.
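The conversion from a label image to likelihood estimation image data can be sketched as a per-label lookup, with the puddle label mapped to likelihood 0 and the other labels to 1. The label ids and the mapping table below are hypothetical, not taken from the disclosure:

```python
import numpy as np

# Hypothetical label ids; "puddle" is the example of a surface where total
# reflection makes LIDAR range readings unreliable.
LABELS = {"road": 0, "car": 1, "puddle": 2}
LABEL_LIKELIHOOD = {0: 1.0, 1: 1.0, 2: 0.0}  # likelihood assigned per label

def likelihood_image(label_image):
    # Convert a per-pixel label image (e.g. semantic segmentation output)
    # into a per-pixel likelihood image.
    out = np.empty(label_image.shape, dtype=float)
    for label_id, likelihood in LABEL_LIKELIHOOD.items():
        out[label_image == label_id] = likelihood
    return out

labels = np.array([[LABELS["road"], LABELS["puddle"]],
                   [LABELS["car"], LABELS["road"]]])
print(likelihood_image(labels))  # the puddle pixel gets likelihood 0
```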
- The estimation unit 13 performs image recognition processing using the estimation imaging data acquired by the imaging sensor 11, and generates estimation labeled data as the image recognition result. Specifically, the estimation unit 13 generates the estimation labeled data from the estimation imaging data by using the image-recognition trained model. Further, the estimation unit 13 generates the likelihood estimation image data from the estimation labeled data by using the likelihood-estimation trained model.
- That is, the estimation labeled data input to the likelihood-estimation trained model is the estimation labeled data generated by using the image-recognition trained model.
- The integration unit 22 converts the measurement three-dimensional point cloud data acquired by the LIDAR sensor 21 into point cloud data projected onto the camera coordinate system. That is, the integration unit 22 performs coordinate conversion on the measurement three-dimensional point cloud data acquired by the LIDAR sensor 21 to generate two-dimensional point cloud data.
- The integration unit 22 assigns a likelihood to each point of the two-dimensional point cloud data by using the likelihood estimation image data, which is two-dimensional data. Further, the integration unit 22 converts the likelihood-assigned two-dimensional point cloud data back into three-dimensional point cloud data, and generates the reliability-attached three-dimensional point cloud data.
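The coordinate conversion performed by the integration unit 22 can be sketched with a pinhole camera model: project each measured 3D point onto the image plane, read the estimated likelihood at the projected pixel, and keep the point's 3D coordinates. The intrinsic parameters and the camera-frame convention (z forward) are assumptions not given in the disclosure:

```python
import numpy as np

def assign_likelihood_via_projection(points_xyz, likelihood_img, fx, fy, cx, cy):
    # Project measured 3D points (camera frame, z forward) onto the image
    # plane with a pinhole model, then read the estimated likelihood at each
    # projected pixel. Points projecting outside the image are dropped.
    height, width = likelihood_img.shape
    annotated = []
    for x, y, z in points_xyz:
        u = int(round(fx * x / z + cx))  # column index
        v = int(round(fy * y / z + cy))  # row index
        if 0 <= v < height and 0 <= u < width:
            annotated.append(((x, y, z), float(likelihood_img[v, u])))
    return annotated

likelihood_img = np.array([[1.0, 0.0],
                           [1.0, 1.0]])
points = [(0.0, 0.0, 5.0)]  # one point straight ahead of the camera
result = assign_likelihood_via_projection(points, likelihood_img,
                                          fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(result)
```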
- The information processing apparatus 20 performs learning to generate an image-recognition trained model for executing the image recognition process and a likelihood-estimation trained model for performing the likelihood estimation. Further, the information processing apparatus 20 can generate three-dimensional point cloud data with reliability by using the likelihood estimation image data obtained by feeding the estimation labeled data, generated with the image-recognition trained model, into the likelihood-estimation trained model.
- By executing semantic segmentation as the image recognition process, the information processing device 20 can, for example, extract an image of a puddle from the various states of puddles that can appear in imaging data.
- When semantic segmentation is used as the image recognition process, various puddle states are learned and a trained model for extracting the puddle is determined. That is, by using semantic segmentation as the image recognition process, it becomes easy to distinguish between a puddle, for which the likelihood of the distance should be set to 0, and other objects.
- A desired trained model can be determined using fewer items of training labeled data than the number of items of training imaging data that would be required if the imaging data itself were used as the training data.
- In the estimation process using the image-recognition trained model and the likelihood-estimation trained model, for example, only one of the two models can be replaced with a model having higher recognition accuracy.
- the learning unit 12 performs image recognition learning and likelihood estimation learning.
- The learning unit 12 may perform only the learning that uses the learning imaging data and the correct-answer-attached three-dimensional point cloud data. That is, the learning unit 12 may generate a model used to generate a likelihood estimation image from the estimation imaging data without performing learning that uses the learning labeled data.
- The estimation unit 13 does not perform the image recognition shown in FIG. 7; instead, it inputs the estimation imaging data into the trained model generated by the learning unit 12 to generate the likelihood estimation image data.
- The likelihood estimation result is corrected according to the installation position of the LIDAR sensor 21.
- The smaller the incident angle of the light emitted from the LIDAR sensor 21 with respect to the ground surface, the lower the intensity of the reflected light. Therefore, the weighting value for setting the likelihood of the distance to 1 may be made smaller as that incident angle becomes smaller. That is, the smaller the incident angle of the light emitted from the LIDAR sensor 21 on the ground surface, the more points have their likelihood set to 0.
- The incident angle of the light emitted from the LIDAR sensor 21 on the ground surface becomes smaller as the mounting angle of the LIDAR sensor 21 points more upward relative to the ground surface.
- The higher the installation position of the LIDAR sensor 21 above the ground surface, the longer the distance from the ground surface to the LIDAR sensor 21 and the lower the intensity of the reflected light. Therefore, for example, when the incident angles are the same, the weighting value for setting the likelihood of the distance to 1 may be reduced as the installation position of the LIDAR sensor 21 rises farther from the ground surface. That is, among a plurality of LIDAR sensors 21 having the same incident angle, the higher the installation position above the ground surface, the more points have 0 set as the likelihood.
- The information processing apparatus 20 can thus correct the likelihood of the distance of each estimated point according to at least one of the incident angle, on the ground surface, of the laser beam emitted from the LIDAR sensor 21 and the height of the LIDAR sensor 21 above the ground surface.
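A monotone weighting of the kind described above can be sketched as follows. The specific functional form (sine of the incident angle, inverse of the mounting height) is an illustrative assumption; the patent only states that the weight should decrease as the incident angle becomes smaller and as the sensor is mounted higher.

```python
import math

def corrected_weight(incident_angle_rad: float, sensor_height_m: float,
                     base_weight: float = 1.0) -> float:
    """Weighting value for setting the distance likelihood to 1,
    reduced for shallow incident angles and higher mounting positions.

    incident_angle_rad: angle between the emitted beam and the ground
                        surface, clamped to [0, pi/2].
    sensor_height_m:    mounting height above the ground surface.
    """
    # Shallower beams reflect less energy back -> smaller weight.
    angle_factor = math.sin(max(0.0, min(incident_angle_rad, math.pi / 2)))
    # Higher mounting -> longer path, weaker return -> smaller weight.
    height_factor = 1.0 / max(sensor_height_m, 1.0)
    return base_weight * angle_factor * height_factor
```

Points whose corrected weight falls below some threshold would then have 0 set as their likelihood, matching the qualitative behavior described in the text.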
- FIG. 9 is a block diagram showing a configuration example of the information processing device 10 or the information processing device 20 (hereinafter referred to as the information processing device 10 or the like).
- the information processing apparatus 10 and the like include a network interface 1201, a processor 1202, and a memory 1203.
- Network interface 1201 is used to communicate with network nodes (e.g., eNB, MME, P-GW).
- the network interface 1201 may include, for example, a network interface card (NIC) compliant with the IEEE 802.3 series.
- the processor 1202 reads software (computer program) from the memory 1203 and executes it to perform processing of the information processing device 10 and the like described using the flowchart in the above-described embodiment.
- Processor 1202 may be, for example, a microprocessor, MPU, or CPU.
- Processor 1202 may include a plurality of processors.
- Memory 1203 is composed of a combination of volatile memory and non-volatile memory. Memory 1203 may include storage located away from processor 1202. In this case, processor 1202 may access memory 1203 via an I / O interface (not shown).
- The memory 1203 is used to store a group of software modules. By reading these software modules from the memory 1203 and executing them, the processor 1202 can perform the processing of the information processing apparatus 10 and the like described in the above-described embodiments.
- Each of the processors included in the information processing apparatus 10 and the like in the above-described embodiments executes one or more programs containing instructions for causing a computer to perform the algorithms described with reference to the drawings.
- Non-transitory computer-readable media include various types of tangible storage media.
- Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)).
- The program may also be supplied to the computer by various types of transitory computer-readable media. Examples of transitory computer-readable media include electric signals, optical signals, and electromagnetic waves.
- A transitory computer-readable medium can supply the program to the computer via a wired communication path such as an electric wire or an optical fiber, or via a wireless communication path.
- (Appendix 1)
An information processing apparatus comprising:
an imaging sensor;
a learning unit that learns, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generates a trained model; and
an estimation unit that generates, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by the imaging sensor.
- (Appendix 2)
The information processing apparatus according to Appendix 1, wherein the learning unit manages a correspondence between each point included in the correct-answer-attached three-dimensional point cloud data and a position corresponding to a pixel of the learning imaging data.
- (Appendix 3)
The information processing apparatus according to Appendix 1 or 2, wherein the likelihood of the distance of each point included in the correct-answer-attached three-dimensional point cloud data is determined according to a result of comparison with the distances of surrounding points.
- (Appendix 4)
The information processing apparatus according to any one of Appendixes 1 to 3, further comprising: a LIDAR sensor; and an integration unit that assigns the likelihood of the distance of each point included in the estimation data to each point of measured three-dimensional point cloud data acquired by the LIDAR sensor.
- (Appendix 5)
The information processing apparatus according to Appendix 4, wherein the measured three-dimensional point cloud data includes substantially the same area as the area included in the estimation imaging data.
- (Appendix 6)
The information processing apparatus according to any one of Appendixes 1 to 5, wherein the learning unit uses, as training data, the learning imaging data, learning labeled data, and the likelihood of the distance of points included in the correct-answer-attached three-dimensional point cloud data, and the estimation unit generates likelihood estimation image data as the estimation data from estimation labeled data obtained by performing image processing on the estimation imaging data.
- (Appendix 7)
The information processing apparatus according to Appendix 6, wherein the estimation unit executes semantic segmentation as the image processing.
- (Appendix 8)
The information processing apparatus according to any one of Appendixes 4 to 7, wherein the estimation unit corrects the likelihood of the distance of each point generated from the estimation imaging data according to at least one of an incident angle, on the ground surface, of a laser beam emitted from the LIDAR sensor and a height of the LIDAR sensor above the ground surface.
- (Appendix 9)
A data generation method comprising: learning, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generating a trained model; and generating, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
- (Appendix 10)
A program that causes a computer to execute: learning, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generating a trained model; and generating, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
- 10 Information processing device
- 11 Imaging sensor
- 12 Learning unit
- 13 Estimation unit
- 20 Information processing device
- 21 LIDAR sensor
- 22 Integration unit
Abstract
Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. A configuration example of the information processing device 10 according to the first embodiment will be described with reference to FIG. 1. The information processing device 10 may be a computer device that operates by a processor executing a program stored in a memory.
Next, a configuration example of the information processing device 20 according to the second embodiment will be described with reference to FIG. 2. The information processing device 20 includes an imaging sensor 11, a learning unit 12, an estimation unit 13, a LIDAR sensor 21, and an integration unit 22. The imaging sensor 11, the learning unit 12, and the estimation unit 13 are the same as those in FIG. 1, so a detailed description of them is omitted. The components constituting the information processing device 20 may be software or modules whose processing is carried out by a processor executing a program stored in a memory. Alternatively, the components may be hardware such as circuits or chips.
Next, the learning process according to the third embodiment will be described with reference to FIG. 6. In the third embodiment as well, processing using the information processing device 20 of FIG. 2 is executed. FIG. 6 shows that the learning unit 12 performs image recognition learning and likelihood estimation learning. Image recognition learning generates a model used for image recognition (hereinafter, the image-recognition trained model) from the learning imaging data and the learning labeled data. Likelihood estimation learning generates a model used for likelihood estimation (hereinafter, the likelihood-estimation trained model) from the learning labeled data and the correct-answer-attached three-dimensional point cloud data.
Next, the correction process in the fourth embodiment will be described. In the fourth embodiment, the likelihood estimation result is corrected according to the installation position of the LIDAR sensor 21. For example, it is assumed that the smaller the incident angle of the light emitted from the LIDAR sensor 21 with respect to the ground surface, the lower the intensity of the reflected light. Therefore, the weighting value for setting the likelihood of the distance to 1 may be made smaller as that incident angle becomes smaller. That is, the smaller the incident angle, the more points have 0 set as the likelihood. The incident angle of the light emitted from the LIDAR sensor 21 on the ground surface becomes smaller as the mounting angle of the LIDAR sensor 21 points more upward relative to the ground surface.
Claims (10)
- An information processing apparatus comprising:
an imaging sensor;
learning means for learning, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generating a trained model; and
estimation means for generating, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by the imaging sensor.
- The information processing apparatus according to claim 1, wherein the learning means manages a correspondence between each point included in the correct-answer-attached three-dimensional point cloud data and a position corresponding to a pixel of the learning imaging data.
- The information processing apparatus according to claim 1 or 2, wherein the likelihood of the distance of each point included in the correct-answer-attached three-dimensional point cloud data is determined according to a result of comparison with the distances of surrounding points.
- The information processing apparatus according to any one of claims 1 to 3, further comprising: a LIDAR sensor; and integration means for assigning the likelihood of the distance of each point included in the estimation data to each point of measured three-dimensional point cloud data acquired by the LIDAR sensor.
- The information processing apparatus according to claim 4, wherein the measured three-dimensional point cloud data includes substantially the same area as the area included in the estimation imaging data.
- The information processing apparatus according to any one of claims 1 to 5, wherein the learning means uses, as training data, the learning imaging data, learning labeled data, and the likelihood of the distance of points included in the correct-answer-attached three-dimensional point cloud data, and the estimation means generates likelihood estimation image data as the estimation data from estimation labeled data obtained by performing image processing on the estimation imaging data.
- The information processing apparatus according to claim 6, wherein the estimation means executes semantic segmentation as the image processing.
- The information processing apparatus according to any one of claims 4 to 7, wherein the estimation means corrects the likelihood of the distance of each point generated from the estimation imaging data according to at least one of an incident angle, on the ground surface, of a laser beam emitted from the LIDAR sensor and a height of the LIDAR sensor above the ground surface.
- A data generation method comprising: learning, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generating a trained model; and generating, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
- A non-transitory computer-readable medium storing a program that causes a computer to execute: learning, as training data, learning imaging data and the likelihood of the distance of points included in correct-answer-attached three-dimensional point cloud data covering substantially the same area as the area included in the learning imaging data, and generating a trained model; and generating, using the trained model, estimation data including the likelihood of the distance of points included in estimation three-dimensional point cloud data determined based on estimation imaging data acquired by an imaging sensor.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021533869A JP7226553B2 (ja) | 2019-07-22 | 2020-06-19 | 情報処理装置、データ生成方法、及びプログラム |
US17/628,750 US20220270282A1 (en) | 2019-07-22 | 2020-06-19 | Information processing device, data generation method, and non-transitory computer-readable medium storing program |
CA3148404A CA3148404A1 (en) | 2019-07-22 | 2020-06-19 | Information processing device, data generation method, and non-transitory computer-readable medium storing program |
EP20843522.2A EP4006829A4 (en) | 2019-07-22 | 2020-06-19 | INFORMATION PROCESSING DEVICE, METHOD FOR GENERATING DATA AND NON-TRANSITORY COMPUTER READABLE MEDIA ON WHICH A PROGRAM IS STORED |
AU2020317303A AU2020317303B2 (en) | 2019-07-22 | 2020-06-19 | Information processing device, data generation method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-134718 | 2019-07-22 | ||
JP2019134718 | 2019-07-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021014846A1 true WO2021014846A1 (ja) | 2021-01-28 |
Family
ID=74193029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/024062 WO2021014846A1 (ja) | 2019-07-22 | 2020-06-19 | 情報処理装置、データ生成方法、及びプログラムが格納された非一時的なコンピュータ可読媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220270282A1 (ja) |
EP (1) | EP4006829A4 (ja) |
JP (1) | JP7226553B2 (ja) |
AU (1) | AU2020317303B2 (ja) |
CA (1) | CA3148404A1 (ja) |
WO (1) | WO2021014846A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023021559A1 (ja) * | 2021-08-16 | 2023-02-23 | 日本電気株式会社 | 推定モデル訓練装置、推定モデル訓練方法、認識装置、認識方法、及び非一時的なコンピュータ可読媒体 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013142991A (ja) * | 2012-01-10 | 2013-07-22 | Nippon Telegr & Teleph Corp <Ntt> | 物体領域検出装置、方法、及びプログラム |
WO2017057061A1 (ja) * | 2015-09-30 | 2017-04-06 | ソニー株式会社 | 情報処理装置、情報処理方法、及び、プログラム |
JP2019008460A (ja) | 2017-06-22 | 2019-01-17 | 株式会社東芝 | 物体検出装置、物体検出方法およびプログラム |
-
2020
- 2020-06-19 EP EP20843522.2A patent/EP4006829A4/en active Pending
- 2020-06-19 US US17/628,750 patent/US20220270282A1/en active Pending
- 2020-06-19 WO PCT/JP2020/024062 patent/WO2021014846A1/ja unknown
- 2020-06-19 AU AU2020317303A patent/AU2020317303B2/en not_active Expired - Fee Related
- 2020-06-19 JP JP2021533869A patent/JP7226553B2/ja active Active
- 2020-06-19 CA CA3148404A patent/CA3148404A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP4006829A4 |
Also Published As
Publication number | Publication date |
---|---|
AU2020317303B2 (en) | 2023-12-07 |
EP4006829A1 (en) | 2022-06-01 |
AU2020317303A1 (en) | 2022-02-17 |
JP7226553B2 (ja) | 2023-02-21 |
US20220270282A1 (en) | 2022-08-25 |
CA3148404A1 (en) | 2021-01-28 |
JPWO2021014846A1 (ja) | 2021-01-28 |
EP4006829A4 (en) | 2022-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7052663B2 (ja) | 物体検出装置、物体検出方法及び物体検出用コンピュータプログラム | |
US11455565B2 (en) | Augmenting real sensor recordings with simulated sensor data | |
US11487988B2 (en) | Augmenting real sensor recordings with simulated sensor data | |
US10288418B2 (en) | Information processing apparatus, information processing method, and storage medium | |
KR101283262B1 (ko) | 영상 처리 방법 및 장치 | |
JP7204326B2 (ja) | 情報処理装置及びその制御方法及びプログラム、並びに、車両の運転支援システム | |
JP6305171B2 (ja) | シーン内の物体を検出する方法 | |
JP4691701B2 (ja) | 人数検出装置及び方法 | |
Gschwandtner et al. | Infrared camera calibration for dense depth map construction | |
CN107016348A (zh) | 结合深度信息的人脸检测方法、检测装置和电子装置 | |
JP2020061140A (ja) | ブラインドスポットモニタリングのためのcnnの学習方法、テスティング方法、学習装置、及びテスティング装置 | |
Utaminingrum et al. | Fast obstacle distance estimation using laser line imaging technique for smart wheelchair | |
WO2021014846A1 (ja) | 情報処理装置、データ生成方法、及びプログラムが格納された非一時的なコンピュータ可読媒体 | |
JP6351917B2 (ja) | 移動物体検出装置 | |
Wang et al. | Acmarker: Acoustic camera-based fiducial marker system in underwater environment | |
JP2009288917A (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2020149186A (ja) | 位置姿勢推定装置、学習装置、移動ロボット、位置姿勢推定方法、学習方法 | |
WO2018119823A1 (en) | Technologies for lidar based moving object detection | |
JP2020061139A (ja) | ブラインドスポットモニタリングのためのcnnの学習方法、テスティング方法、学習装置、及びテスティング装置 | |
JP2005028903A (ja) | パンタグラフ支障物検出方法及び装置 | |
WO2022214821A2 (en) | Monocular depth estimation | |
JP2023008030A (ja) | 画像処理システム、画像処理方法及び画像処理プログラム | |
Tupper et al. | Pedestrian proximity detection using RGB-D data | |
US12008778B2 (en) | Information processing apparatus, control method for same, non-transitory computer-readable storage medium, and vehicle driving support system | |
US20230351765A1 (en) | Systems and methods for detecting a reflection artifact in a point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20843522 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021533869 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3148404 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2020317303 Country of ref document: AU Date of ref document: 20200619 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2020843522 Country of ref document: EP Effective date: 20220222 |