CN113689393A - Three-dimensional target detection algorithm based on image and point cloud example matching

Three-dimensional target detection algorithm based on image and point cloud example matching

Info

Publication number
CN113689393A
CN113689393A
Authority
CN
China
Prior art keywords
point cloud
image
target
data
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110957304.7A
Other languages
Chinese (zh)
Inventor
耿可可
李尚杰
殷国栋
庄伟超
祝小元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110957304.7A priority Critical patent/CN113689393A/en
Publication of CN113689393A publication Critical patent/CN113689393A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20068Projection on vertical or horizontal image axis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional target detection algorithm based on image and point cloud instance matching, which fuses the rich semantic information in an image with the accurate position information in a point cloud so that the two types of sensor data complement each other. The algorithm specifically comprises the following steps: obtaining the category and instance mask of each target in the image with a real-time instance segmentation network; projecting the point cloud onto the image plane by perspective projection transformation; clustering the points within each instance mask to obtain the target point cloud; and fitting the three-dimensional contour of the target point cloud to obtain the relevant parameters of the target. The output is a three-dimensional detection result for each target of interest. The method improves computational efficiency while preserving accuracy, applies to a wide range of scenes, and has good robustness.

Description

Three-dimensional target detection algorithm based on image and point cloud example matching
Technical Field
The invention relates to the technical field of target detection, in particular to a three-dimensional target detection algorithm based on image and point cloud example matching.
Background
Deep-learning-based methods are the main means of solving the target detection problem. The target detection networks proposed first, such as Faster R-CNN, YOLO and SSD, represent the location of a target in an image with a rectangular box. This representation is not accurate, because the rectangular box contains a large amount of background information in addition to the target. On this basis, instance segmentation networks such as Mask R-CNN and YOLACT were proposed, which represent the location of a target with a pixel-level instance mask. An instance mask contains almost no background information, so the accuracy of the representation is greatly improved. However, whether a rectangular box or an instance mask is used, the result obtained is a two-dimensional, image-based detection result, and it is difficult for it to reflect the position of the target in the actual three-dimensional environment.
The two types of sensor data, image and point cloud, each have their own strengths and weaknesses. Image data can express rich semantic information but cannot accurately give the spatial position of a target: for an input image, the target detection networks mentioned above can predict the targets contained in the image and thus obtain the category of each target and the pixel region it occupies, but they cannot obtain the target's coordinate position or distance in three-dimensional space. Point cloud data contains wide-range, high-precision spatial position information, but its ability to express semantic information is weak: each scan of a multi-line laser radar yields a large number of reflection points from surrounding objects, with a person contributing hundreds of reflection points, a vehicle thousands, and the ground tens of thousands, yet how to extract the point clouds of targets of interest such as pedestrians and vehicles from the background point cloud remains a technical problem.
Fusing the image and the point cloud allows a deeper understanding of the actual environment. A common fusion strategy is to project the point cloud onto the image, thereby unifying the dimensionality of the two kinds of data. Most of the solutions disclosed so far feed the image and the point cloud into a target detection network or an instance segmentation network simultaneously; this approach can achieve accurate prediction, but because the network must act on the image and the point cloud separately, computational efficiency is low and processing is time-consuming.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a three-dimensional target detection algorithm based on image and point cloud example matching. The algorithm takes two kinds of data as input, an image and a point cloud; by fusing the rich semantic information in the image with the accurate position information in the point cloud, the two types of sensor data complement each other, and the output is the three-dimensional detection result of each target of interest, namely its pose information.
The technical scheme adopted by the invention is as follows:
A three-dimensional target detection algorithm based on image and point cloud example matching comprises the following steps:
S0, acquiring image and point cloud data synchronized in time and space;
S1, processing the image with a real-time instance segmentation network:
inputting the image into the real-time instance segmentation network, predicting the category and instance mask of each target in the input image, and giving the corresponding confidence of each target;
S2, processing the point cloud data by perspective projection transformation:
applying the perspective projection transformation to the point cloud data, projecting the input point cloud onto the image plane so that the dimensionality of the point cloud data matches that of the image data;
S3, fusing the results of step S1 and step S2:
for image and point cloud data collected at the same time over the same perception space, using the instance masks in the image as limiting boundaries, clustering the points within each instance mask and extracting the points belonging to each target, i.e. the target point cloud, from the perspective-projected point cloud;
S4, fitting the three-dimensional contour of the target point cloud and obtaining the relevant parameters of the target.
The further technical scheme is as follows:
step S2 specifically includes:
performing region cropping on the point cloud: cropping the point cloud with the field of view of the image as the constraint, retaining the points within the image's field of view so that the effective sensing ranges of the image and the point cloud coincide, and discarding points that are too far away to improve computational efficiency; the remaining points are then projected onto the image plane by perspective projection, so that the dimensionality of the point cloud data matches that of the image data.
In step S3, the nearest-neighbor timestamp method is first used to synchronize the result of step S1, i.e. the target categories and instance masks in the image, with the result of step S2, i.e. the perspective-projected point cloud; the instance masks in the image are then used as limiting boundaries to cluster the points within each instance mask.
In step S3, a density-based clustering algorithm is used to divide the point sets whose density exceeds the allowable density into clusters, and the largest point cloud cluster is selected as the target point cloud, thereby completing the instance matching.
In step S4, after the target point cloud has been extracted, a three-dimensional box model is fitted to it, and the fusion detection result is expressed in parameterized form; the parameterized representation comprises the center position of the target, the three-dimensional size of the target and the orientation angle of the target.
In step S1, all results predicted by the instance segmentation network are filtered according to the following two conditions:
first, the confidence of the detection result should be higher than the confidence threshold to avoid false detection;
secondly, the detection results are sorted in descending order of confidence and their number must not exceed the number threshold, so that the data fusion process remains computationally efficient.
The invention has the following beneficial effects:
The invention processes the image and the point cloud data with a real-time instance segmentation network and a perspective projection transformation, respectively, and uses the instance masks to extract the point cloud of each target: by combining the rich semantic information in the image with the accurate position information in the point cloud, the target point cloud is acquired accurately and the three-dimensional pose of the target is fitted from it. Because the instance segmentation network acts only on the image and not on the point cloud, the algorithm remains efficient and achieves low-latency three-dimensional target detection: computational efficiency is high while accuracy is preserved, the applicable scenes are wide, and robustness is good.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
FIG. 2 is a diagram illustrating the effect of each step of the algorithm according to the embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention with reference to the drawings.
The invention relates to a three-dimensional target detection algorithm based on image and point cloud example matching; referring to fig. 1, it comprises the following steps:
S0, acquiring image and point cloud data synchronized in time and space;
S1, processing the image with a real-time instance segmentation network to obtain the target categories and instance masks in the image:
inputting the image into the real-time instance segmentation network, predicting the category and instance mask of each target in the input image, and giving the corresponding confidence of each target;
S2, processing the point cloud data by perspective projection transformation, projecting the point cloud onto the image plane:
applying the perspective projection transformation to the point cloud data, projecting the input point cloud onto the image plane so that the dimensionality of the point cloud data matches that of the image data;
S3, fusing the processing results of step S1 and step S2:
for image and point cloud data collected at the same time over the same perception space, using the instance masks in the image as limiting boundaries, clustering the points within each instance mask and extracting the points belonging to each target, i.e. the target point cloud, from the perspective-projected point cloud;
S4, fitting the three-dimensional contour of the target point cloud and obtaining the relevant parameters of the target.
The invention does not limit the type of the instance segmentation network, as long as it can run in real time.
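For illustration only, the following is a minimal Python sketch of how steps S1–S4 could be composed. The helper names (run_instance_segmentation, project_point_cloud, extract_target_points, fit_3d_box) are hypothetical labels for the stages described in this embodiment and do not correspond to any specific library; sketches of the individual stages are given later in this description.

```python
def detect_3d_targets(image, point_cloud, calib):
    """Compose steps S1-S4 (sketch). `calib` is assumed to hold R, T, fu, fv, u0, v0."""
    # S1: real-time instance segmentation on the image only (e.g. YOLACT)
    classes, masks, scores = run_instance_segmentation(image)   # hypothetical helper
    # S2: crop the point cloud and project it onto the image plane
    uv, kept_points = project_point_cloud(point_cloud, **calib)
    detections = []
    for cls, mask in zip(classes, masks):
        # S3: instance matching - gather and cluster the points inside each mask
        target_points = extract_target_points(mask, uv, kept_points)
        if len(target_points) == 0:
            continue
        # S4: fit the parameterized three-dimensional box of formula (5)
        detections.append((cls, fit_3d_box(target_points)))
    return detections
```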
The algorithm of the present invention is further described below with reference to specific embodiments.
Referring to fig. 2, in step S1 a YOLACT real-time instance segmentation network is used to predict the target categories, instance masks and corresponding confidences in the image. ResNet50 serves as YOLACT's backbone network for target detection, and the processed image size is set to 550 × 550. The prediction result of the YOLACT network comprises the category, instance mask and confidence of each target. To retain usable predictions and discard erroneous ones, all YOLACT results are filtered according to the following two conditions:
firstly, the confidence of a detection result should be no lower than the confidence threshold, i.e. score_threshold = 0.5, so as to avoid false detections; secondly, the detection results are sorted in descending order of confidence and their number does not exceed the number threshold, i.e. top_k = 20, so that the data fusion process remains computationally efficient.
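A minimal sketch of this filtering step, assuming the network's predictions are available as NumPy arrays of scores, classes and masks (the array layout is an assumption, not part of the patent):

```python
import numpy as np

def filter_predictions(scores, classes, masks, score_threshold=0.5, top_k=20):
    """Keep high-confidence detections and cap their number (illustrative sketch)."""
    keep = scores >= score_threshold                  # condition 1: confidence filter
    scores, classes, masks = scores[keep], classes[keep], masks[keep]
    order = np.argsort(-scores)[:top_k]               # condition 2: top-k by confidence
    return scores[order], classes[order], masks[order]
```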
Referring to fig. 2, in step S2 the point cloud is taken as input and region cropping is performed: the point cloud is cropped with the image's field of view as the constraint, so that only points within the image's field of view are retained and the effective sensing ranges of the image and the point cloud coincide; points that are too far away are discarded to improve computational efficiency; the remaining points are then projected onto the image plane by perspective projection, so that the dimensionality of the point cloud data matches that of the image data.
Specifically, with the point cloud data as input, region cropping is performed on the point cloud: the point cloud is cropped with the image's angle of view as the constraint, retaining the points within the image's field of view, so that any retained point p(x, y, z) satisfies:
[Formula (1): the field-of-view constraint on p(x, y, z)]
meanwhile, points that are too far away are discarded to improve computational efficiency, so that any retained point p(x, y, z) also satisfies:
[Formula (2): the maximum-range constraint on p(x, y, z)]
The retained points are then projected onto the image plane by perspective projection according to formulas (3) and (4), so that the dimensionality of the point cloud data matches that of the image data.
$$\begin{bmatrix} x_c & y_c & z_c \end{bmatrix}^{T} = M \begin{bmatrix} x & y & z & 1 \end{bmatrix}^{T}, \qquad M = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \tag{3}$$
$$u = f_u \frac{x_c}{z_c} + u_0, \qquad v = f_v \frac{y_c}{z_c} + v_0 \tag{4}$$
In formulas (3) and (4), $(x_c, y_c, z_c)$ denotes the coordinates of point p after the rigid-body transformation; M is the rigid transformation matrix, composed of the rotation matrix $R_{3\times 3}$ and the translation matrix $T_{3\times 1}$; (u, v) denotes the coordinates of point p in the image after perspective projection; and $f_u$, $f_v$, $u_0$, $v_0$ denote the intrinsic parameters involved in the perspective projection.
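The following sketch illustrates formulas (1)–(4) with NumPy, assuming the point cloud is an N×3 array in the sensor frame. Since the formula images are not reproduced in the text, the field-of-view crop of formula (1) is realized here by keeping only points whose projections fall inside the image bounds, which is one possible reading; the 550 × 550 image size and the 50 m maximum range are illustrative values only.

```python
import numpy as np

def project_point_cloud(points, R, T, fu, fv, u0, v0,
                        image_size=(550, 550), max_range=50.0):
    """Crop the point cloud and project it onto the image plane (sketch).

    points: (N, 3) array in the LiDAR frame. R, T, fu, fv, u0, v0 are the
    extrinsic and intrinsic parameters of formulas (3) and (4).
    """
    # formula (2): drop points beyond the maximum effective range
    points = points[np.linalg.norm(points, axis=1) <= max_range]
    # formula (3): rigid-body transform into the camera frame
    pts_cam = points @ R.T + T.reshape(1, 3)
    in_front = pts_cam[:, 2] > 0                      # keep points in front of the camera
    points, pts_cam = points[in_front], pts_cam[in_front]
    # formula (4): perspective (pinhole) projection onto the image plane
    u = fu * pts_cam[:, 0] / pts_cam[:, 2] + u0
    v = fv * pts_cam[:, 1] / pts_cam[:, 2] + v0
    # formula (1): keep only points whose projection lies inside the image field of view
    w, h = image_size
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return np.stack([u[inside], v[inside]], axis=1), points[inside]
```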
A prerequisite for effective projection is that the image and point cloud data are acquired synchronously in time and space: the smaller the spatio-temporal synchronization error, the better the projection result and the more accurate the extracted target point cloud.
In step S3, the result of step S1, i.e. the target categories and instance masks in the image, and the result of step S2, i.e. the perspective-projected point cloud, are first synchronized using the nearest-neighbor timestamp method.
The synchronization method is as follows: taking the point cloud timestamp as the reference, the image timestamp with the smallest interval to it is found, and the data at these two timestamps are accepted as a synchronized pair.
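A minimal sketch of this nearest-neighbor timestamp matching (timestamps are assumed to be numeric seconds; the function name is illustrative):

```python
def nearest_image_frame(cloud_stamp, image_stamps):
    """Return the index of the image whose timestamp is closest to the
    point-cloud timestamp (nearest-neighbor timestamp method, sketch)."""
    return min(range(len(image_stamps)),
               key=lambda i: abs(image_stamps[i] - cloud_stamp))
```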
Referring to fig. 2, after receiving the synchronized results of steps S1 and S2, the points belonging to each target are extracted from the perspective-projected point cloud using the instance masks of the image as limiting boundaries, completing the fusion of the targets' semantic information and position information.
A target's semantic information is embodied in its category and instance mask region in the image, while its position information is hidden in the wide-range point cloud data; extracting the points belonging to the target from the whole point cloud space through the instance mask realizes the instance matching process and fuses the semantic information with the position information.
Because of synchronization errors, the obtained target point cloud will contain some noise. To remove this noise, density-based clustering is performed with the DBSCAN algorithm: the distance threshold eps is set to 0.5 and the minimum-count threshold min_samples to 5, i.e. the allowable density requires at least 5 points within 0.5 m. Points that do not satisfy the allowable density are treated as outliers and removed, and the largest remaining point cloud cluster is selected as the target point cloud.
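As one possible implementation of this instance-matching and denoising step, the following sketch selects the projected points that fall inside an instance mask and then keeps the largest DBSCAN cluster, using scikit-learn's DBSCAN with the eps = 0.5 and min_samples = 5 values given above; the function and argument names are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_target_points(mask, uv, points, eps=0.5, min_samples=5):
    """Select points projecting into one instance mask, then keep the largest
    DBSCAN cluster as the target point cloud (illustrative sketch).

    mask: (H, W) boolean instance mask; uv: (N, 2) projected pixel coordinates
    (u = column, v = row); points: (N, 3) corresponding 3D points.
    """
    cols = np.clip(uv[:, 0].astype(int), 0, mask.shape[1] - 1)
    rows = np.clip(uv[:, 1].astype(int), 0, mask.shape[0] - 1)
    candidates = points[mask[rows, cols]]             # points whose projection lies in the mask
    if len(candidates) < min_samples:
        return candidates
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    valid = labels[labels >= 0]                       # label -1 marks outliers to be discarded
    if valid.size == 0:
        return np.empty((0, 3))
    largest = np.bincount(valid).argmax()             # keep the largest point cloud cluster
    return candidates[labels == largest]
```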
In the above steps S1 and S2, the example segmentation network for processing the image and the perspective projection transformation for processing the point cloud are two branches working in parallel, without interference, and the processing results of the two branches are merged in step S3.
In step S4, fitting the three-dimensional contour of the target point cloud and obtaining the relevant parameters of the target specifically comprises:
after the target point cloud has been extracted, a three-dimensional box model is fitted to it, and the fusion detection result is expressed in parameterized form; the parameterized representation comprises the center position of the target, the three-dimensional size of the target and the orientation angle of the target, as given in formula (5):
$$\left( x_0,\; y_0,\; z_0,\; l,\; w,\; h,\; \theta \right) \tag{5}$$
In formula (5), $(x_0, y_0, z_0)$ denotes the geometric center coordinates of the target; $(l, w, h)$ denotes the length, width and height of the target; and $\theta$ denotes the orientation angle of the target.
The instance segmentation network used in the embodiment acts only on the image, not on the point cloud, which gives it the advantage of low latency. Using the instance masks predicted by the instance segmentation network, the points belonging to each target (the target point cloud) can be extracted accurately. Because only the extracted target point cloud is processed in the subsequent steps, the computational efficiency of the algorithm is improved while accuracy is preserved.

Claims (6)

1. A three-dimensional target detection algorithm based on image and point cloud example matching, characterized by comprising the following steps:
S0, acquiring image and point cloud data synchronized in time and space;
S1, processing the image with a real-time instance segmentation network:
inputting the image into the real-time instance segmentation network, predicting the category and instance mask of each target in the input image, and giving the corresponding confidence of each target;
S2, processing the point cloud data by perspective projection transformation:
applying the perspective projection transformation to the point cloud data, projecting the input point cloud onto the image plane so that the dimensionality of the point cloud data matches that of the image data;
S3, fusing the results of step S1 and step S2:
for image and point cloud data collected at the same time over the same perception space, using the instance masks in the image as limiting boundaries, clustering the points within each instance mask and extracting the points belonging to each target, i.e. the target point cloud, from the perspective-projected point cloud;
S4, fitting the three-dimensional contour of the target point cloud and obtaining the relevant parameters of the target.
2. The algorithm according to claim 1, wherein step S2 specifically comprises:
performing region cropping on the point cloud: cropping the point cloud with the field of view of the image as the constraint, retaining the points within the image's field of view so that the effective sensing ranges of the image and the point cloud coincide, and discarding points that are too far away to improve computational efficiency; the remaining points are then projected onto the image plane by perspective projection, so that the dimensionality of the point cloud data matches that of the image data.
3. The algorithm of claim 1, wherein in step S3, the result of step S1, i.e. the object class and the instance mask in the image, and the result of step S2, i.e. the perspective projected point cloud, are synchronized by using the nearest neighbor timestamp method, and then the point cloud in each instance mask is clustered by using the instance mask in the image as a limiting boundary.
4. The algorithm of claim 1, wherein in step S3, a density-based clustering algorithm is used to divide the point cloud set with higher than allowable density into clusters, and the largest point cloud cluster is selected as the target point cloud, thereby completing the instance matching.
5. The algorithm of claim 1, wherein in step S4, after the target point cloud is extracted, the target point cloud is fitted by using a three-dimensional box model, and the result of the fusion detection is represented in a parameterized form, and the parameterized representation includes a center position of the target, a three-dimensional size of the target, and an orientation angle of the target.
6. The algorithm of claim 1, wherein in step S1, all results predicted by the instance segmentation network are filtered according to the following two conditions:
first, the confidence of a detection result should be higher than the confidence threshold, so as to avoid false detections;
secondly, the detection results are sorted in descending order of confidence and their number must not exceed the number threshold, so that the data fusion process remains computationally efficient.
CN202110957304.7A 2021-08-19 2021-08-19 Three-dimensional target detection algorithm based on image and point cloud example matching Pending CN113689393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110957304.7A CN113689393A (en) 2021-08-19 2021-08-19 Three-dimensional target detection algorithm based on image and point cloud example matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110957304.7A CN113689393A (en) 2021-08-19 2021-08-19 Three-dimensional target detection algorithm based on image and point cloud example matching

Publications (1)

Publication Number Publication Date
CN113689393A true CN113689393A (en) 2021-11-23

Family

ID=78580801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110957304.7A Pending CN113689393A (en) 2021-08-19 2021-08-19 Three-dimensional target detection algorithm based on image and point cloud example matching

Country Status (1)

Country Link
CN (1) CN113689393A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 A kind of Three-dimensional target recognition system and method based on laser radar and monocular vision
CN110264416A (en) * 2019-05-28 2019-09-20 深圳大学 Sparse point cloud segmentation method and device
CN112396650A (en) * 2020-03-30 2021-02-23 青岛慧拓智能机器有限公司 Target ranging system and method based on fusion of image and laser radar
CN111626217A (en) * 2020-05-28 2020-09-04 宁波博登智能科技有限责任公司 Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115520317A (en) * 2022-09-28 2022-12-27 江苏科技大学 Sea ice image acquisition device and sea ice identification method
CN116703952A (en) * 2023-08-09 2023-09-05 深圳魔视智能科技有限公司 Method and device for filtering occlusion point cloud, computer equipment and storage medium
CN116703952B (en) * 2023-08-09 2023-12-08 深圳魔视智能科技有限公司 Method and device for filtering occlusion point cloud, computer equipment and storage medium
CN117315030A (en) * 2023-10-18 2023-12-29 四川大学 Three-dimensional visual positioning method and system based on progressive point cloud-text matching
CN117315030B (en) * 2023-10-18 2024-04-16 四川大学 Three-dimensional visual positioning method and system based on progressive point cloud-text matching

Similar Documents

Publication Publication Date Title
JP7167397B2 (en) Method and apparatus for processing point cloud data
US11971726B2 (en) Method of constructing indoor two-dimensional semantic map with wall corner as critical feature based on robot platform
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
KR102145109B1 (en) Methods and apparatuses for map generation and moving entity localization
CN113689393A (en) Three-dimensional target detection algorithm based on image and point cloud example matching
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
CN111060924B (en) SLAM and target tracking method
CN110688905B (en) Three-dimensional object detection and tracking method based on key frame
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN110570457B (en) Three-dimensional object detection and tracking method based on stream data
CN111881790A (en) Automatic extraction method and device for road crosswalk in high-precision map making
CN112825192B (en) Object identification system and method based on machine learning
CN115049700A (en) Target detection method and device
CN115240047A (en) Laser SLAM method and system fusing visual loopback detection
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN116229408A (en) Target identification method for fusing image information and laser radar point cloud information
CN116109601A (en) Real-time target detection method based on three-dimensional laser radar point cloud
CN113673400A (en) Real scene three-dimensional semantic reconstruction method and device based on deep learning and storage medium
CN113989744A (en) Pedestrian target detection method and system based on oversized high-resolution image
CN115830265A (en) Automatic driving movement obstacle segmentation method based on laser radar
CN110909656B (en) Pedestrian detection method and system integrating radar and camera
CN116573017A (en) Urban rail train running clearance foreign matter sensing method, system, device and medium
CN115063550A (en) Semantic point cloud map construction method and system and intelligent robot
CN109934096B (en) Automatic driving visual perception optimization method based on characteristic time sequence correlation
CN113536959A (en) Dynamic obstacle detection method based on stereoscopic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination