CN113569652A - Method for detecting short obstacles by automatic parking all-round looking camera - Google Patents


Info

Publication number
CN113569652A
Authority
CN
China
Prior art keywords
point
image
target
automatic parking
short
Prior art date
Legal status
Pending
Application number
CN202110737067.3A
Other languages
Chinese (zh)
Inventor
冉友廷
卢金波
郑敏鹏
唐冰锋
贺武
任淼
Current Assignee
Huizhou Desay SV Intelligent Transport Technology Research Institute Co Ltd
Original Assignee
Huizhou Desay SV Intelligent Transport Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Huizhou Desay SV Intelligent Transport Technology Research Institute Co Ltd filed Critical Huizhou Desay SV Intelligent Transport Technology Research Institute Co Ltd
Priority to CN202110737067.3A priority Critical patent/CN113569652A/en
Publication of CN113569652A publication Critical patent/CN113569652A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting short obstacles with an automatic-parking surround-view camera. Images around the vehicle are collected by the vehicle-mounted surround-view cameras, and short obstacles in the images are labeled as targets. A convolutional neural network training model is constructed and trained separately on the labeled data to obtain the pixel set of each short-obstacle target. That pixel set is discretized, key points are selected, and a feature descriptor is computed for each key point. Key points of the upper and lower frames are matched with the descriptors, and the successfully matched key points are screened out. Finally, combining the vehicle motion information, the position difference of each successfully matched point pair Q between the two images is calculated, and the height of the short obstacle is calculated from that distance difference. The method thereby effectively detects the type, position and height of short obstacles and provides effective information for automatic parking.

Description

Method for detecting short obstacles by automatic parking all-round looking camera
Technical Field
The invention relates to the technical field of automatic-parking environment perception, and in particular to a method for detecting short obstacles by using an automatic parking look-around camera.
Background
Automatic parking uses the vehicle-mounted surround-view perception and the detection information of ultrasonic probes as the basis of parking control. Obstacle detection and ranging rely on ultrasonic sensors, which can generally identify large objects such as vehicles reliably. Short obstacles such as ground locks, wheel chocks, fences and ice cream cones, however, return too little ultrasonic echo to be identified, which seriously reduces the safety of the parking process. In addition, camera imaging and ultrasonic ranging differ in their sensitivity to the environment: ultrasonic ranging is not affected by external conditions such as lighting, whereas camera imaging clearly is, and under low-illumination conditions such as at night the imaging degrades, further making the obstacle detection information inaccurate.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a method for detecting short obstacles by using an automatic parking look-around camera, which detects short obstacles with a deep-learning target detection technique, classifies them with the same deep-learning technique, and effectively calculates their position and height with monocular camera ranging, thereby providing accurate information for automatic parking.
Specifically, the invention provides a method for detecting short obstacles by using an automatic parking all-round looking camera, which comprises the following steps:
s1, acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target marking on short obstacles in the images;
s2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
S3: according to the 2D target information obtained by training, selecting the set of pixels in the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result; this pixel set is the pixel set of one and the same short obstacle; discretizing the identified short-obstacle target pixel set, selecting key points, and calculating the feature descriptors of the key points;
s4: matching key points of an upper frame and a lower frame by using the feature descriptors, and screening out the key points which are successfully matched;
s5: when the vehicle moves, the position difference of the point pair Q which is successfully matched in the upper image and the lower image is calculated by combining the vehicle movement information, and the height of a short obstacle is calculated according to the distance difference.
Wherein, the S1 further includes:
sequentially marking the background, the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles with colors by adopting a semantic segmentation marking method according to the outline of the target;
and marking the types of the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles in sequence according to the 2D frame by adopting a 2D target marking method.
Further, the semantic segmentation training includes: acquiring a semantic segmentation annotation training set, wherein the semantic segmentation annotation training set comprises a sample image and color annotation labels, generating a mask image corresponding to the sample image according to classification information by using the color annotation labels, and keeping the sample image and the mask image in the same size; inputting a sample image and a mask image of a semantic segmentation labeling training set into a pre-constructed semantic segmentation model to obtain classification probability corresponding to each pixel of the sample image; and updating parameters of the semantic segmentation model based on the classification probability and the mask map to obtain the trained semantic segmentation model.
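The mask-generation step of the semantic segmentation training above can be sketched as follows. The specific annotation colors and class indices are illustrative assumptions, not values taken from the patent; only the structure (color label image in, same-size class-index mask out) follows the description.

```python
import numpy as np

# Hypothetical color-to-class mapping for the annotation labels described
# above (background, ground lock, wheel chock, curb, ice cream cone, post).
COLOR_TO_CLASS = {
    (0, 0, 0): 0,        # background
    (255, 0, 0): 1,      # ground lock
    (0, 255, 0): 2,      # wheel chock
    (0, 0, 255): 3,      # road curb
    (255, 255, 0): 4,    # ice cream cone
    (255, 0, 255): 5,    # parking post
}

def color_label_to_mask(label_img: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 color annotation image into an H x W
    class-index mask, keeping the same spatial size as the sample image."""
    h, w, _ = label_img.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for color, cls in COLOR_TO_CLASS.items():
        hit = np.all(label_img == np.array(color, dtype=label_img.dtype), axis=-1)
        mask[hit] = cls
    return mask
```

The mask and sample image then share indexing, so a per-pixel classification loss can be computed directly against the mask.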
The 2D target training comprises: acquiring a 2D target sample training set, and processing the image by adopting a target detection network to obtain a 2D target detection result of the sample image; the target detection network comprises one or more of a backbone network, a characteristic convolution layer, a maximum pooling network, a full connection layer and a regional convolution neural network RCNN; after space search is carried out in the image according to the 2D target detection network, residual calculation is carried out on the 2D target detection network and the labeled 2D frame, and then iterative updating is carried out to obtain an optimal training weight result.
The selecting of key points further comprises: sorting all classified pixel coordinates in ascending order of their Y value; after sorting, the maximum and minimum X coordinates of each row are edge points, and all screened edge points are used as the key points.
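A minimal sketch of this edge-point selection, assuming the obstacle's pixel set is given as (x, y) tuples:

```python
from collections import defaultdict

def select_key_points(pixels):
    """Given the (x, y) pixel set of one obstacle, keep per-row extremes:
    for each Y value the minimum and maximum X are edge points, and the
    union of all edge points is the key-point set."""
    rows = defaultdict(list)
    for x, y in pixels:
        rows[y].append(x)
    key_points = []
    for y in sorted(rows):            # rows in ascending order of Y
        xs = rows[y]
        key_points.append((min(xs), y))
        if max(xs) != min(xs):        # avoid duplicating a single-pixel row
            key_points.append((max(xs), y))
    return key_points
```

Interior pixels are discarded, which is the discretization step: only the contour extremes of each row go on to descriptor computation.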
The calculating of the feature descriptors of the key points further comprises: calculating the local feature of each key point with the local feature descriptor BEBLID to obtain the local texture information of each key point, expressed as a binary numerical vector V over different directions of the image,
V = (v1, v2, …, vn), vi ∈ {0, 1}
where n is the dimension of the binary vector V.
S4 further includes: establishing correspondences between the descriptors of the two frame images and calculating the Hamming distance between the numerical vectors of each pair of descriptors; the Hamming distance is calculated by counting the equal and unequal positions of the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal positions:
w = m / n
When w > 0.8, the two sets of points are considered matched, and the matched points are stored as a point pair Q.
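The match criterion above can be sketched directly, assuming the Hamming weight is the fraction m/n of equal bit positions (the original formula is only an image in the source, so this form is a reconstruction from the surrounding text):

```python
def hamming_weight(v1, v2):
    """Hamming-weight score between two equal-length binary descriptor
    vectors: w = m / n, where m is the number of positions with equal
    bits and n is the descriptor dimension."""
    assert len(v1) == len(v2)
    m = sum(1 for a, b in zip(v1, v2) if a == b)
    return m / len(v1)

def is_match(v1, v2, threshold=0.8):
    """Two key points form a point pair Q when w exceeds the 0.8
    threshold used in the description (strictly greater than)."""
    return hamming_weight(v1, v2) > threshold
```

Note that a pair scoring exactly 0.8 is rejected, matching the strict "w > 0.8" condition.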
The S5 further includes:
the interval time between the upper and lower images is Δt;
the displacements in the lateral and longitudinal directions, Δx and Δy, are calculated from the wheel speed information of the moving vehicle, with the formulas:
Δx = vx · Δt
Δy = vy · Δt
wherein the vehicle has a velocity vx in the lateral direction and vy in the longitudinal direction; the displacement of the two sets of points in the image is Δu and Δv, and the position difference equation is:
ΔD = √((Δu − Δx)² + (Δv − Δy)²)
further, in the moving process of the vehicle, if the point pair successfully matched is a point on the ground, no position difference exists; if the distance difference exists, the point pair Q on the matching is not the point on the ground, the relation between the height and the position difference of the point is a linear relation, and the formula is as follows:
Figure 500520DEST_PATH_IMAGE011
wherein a is a weight coefficient.
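Putting the S5 relations together gives the following sketch. It assumes the position difference is the Euclidean residual between the point's observed top-view displacement and the ground displacement predicted from wheel speed, and that the weight coefficient a is known from calibration; both are reconstructions, since the original equations are images in the source.

```python
import math

def obstacle_height(vx, vy, dt, du, dv, a):
    """Height estimate for one matched point pair Q.

    vx, vy : vehicle lateral / longitudinal speed (from wheel speed)
    dt     : time interval between the two frames
    du, dv : displacement of the matched point in the metric top view
    a      : calibration weight coefficient (assumed known)

    A ground point moves exactly with the vehicle, so its residual
    displacement is zero and the height is zero; otherwise the height
    is taken proportional to the residual, H = a * dD.
    """
    dx = vx * dt                       # expected ground displacement, lateral
    dy = vy * dt                       # expected ground displacement, longitudinal
    dD = math.hypot(du - dx, dv - dy)  # position difference of the point pair
    return a * dD
```

A ground point yields H = 0, which is exactly the screening rule stated above for points on the ground.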
Further, the center position coordinate of the point pair Q in the current surround top view is (u0, v0); combining the height H of the point, the real position of the point is calculated according to the similar-triangle principle. The height information of all matched point pairs is sorted from small to large, and the smallest and largest values are selected, namely the lowest point and the highest point of the short obstacle.
In summary, the invention provides a method for detecting short obstacles with an automatic-parking surround-view camera: images around the vehicle are collected by the vehicle-mounted surround-view cameras, short obstacles in the images are labeled as targets, and a convolutional neural network training model is constructed and trained separately on the labeled data to obtain the short-obstacle target pixel set. The pixel set is discretized, key points are selected and their feature descriptors are calculated, key points of the upper and lower frames are matched with the descriptors, and the successfully matched key points are screened out. Finally, combining the vehicle motion information, the position difference of each successfully matched point pair Q between the two images is calculated, and the height of the short obstacle is calculated from the distance difference. The type, position and height of short obstacles are thereby effectively detected, providing effective information for automatic parking.
Drawings
Fig. 1 is a schematic diagram of a method for detecting short obstacles by using an automatic parking all-round camera according to the invention.
Fig. 2 is a graph of the detection effect obtained in a test of the method of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention provides a method for detecting a short obstacle by an automatic parking looking-around camera, comprising the following steps:
s1, acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target marking on short obstacles in the images;
specifically, a semantic segmentation labeling method is adopted to label the color of a background, a ground lock, a wheel block, a road edge, an ice cream cone, a parking vertical rod and other low obstacles in sequence according to the outline of a target.
And marking the types of the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles in sequence according to the 2D frame by adopting a 2D target marking method.
S2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
the method specifically comprises the following steps: the semantic segmentation training process comprises the following steps: and acquiring a semantic segmentation labeling training set, wherein the training set comprises a sample image and color labeling labels, and generating a mask image corresponding to the sample image by the color labeling labels according to classification information, wherein the sample image and the mask image keep the same size. Inputting the sample images and the mask images of the training set into a pre-constructed semantic segmentation model to obtain the classification probability corresponding to each pixel of the sample images; and updating parameters of the semantic segmentation model based on the classification probability and the mask map to obtain the trained semantic segmentation model.
The 2D target training process is as follows: a 2D target sample training set is acquired, and a target detection network is used to process the images and obtain the 2D target detection results of the sample images. The target detection network comprises a backbone network, feature convolution layers, max-pooling layers, fully connected layers and a region convolutional neural network (RCNN). In training, the 2D target detection network performs a spatial search in the image, the residual with respect to the labeled 2D frame is calculated, and iterative updates are applied to obtain an optimal training weight result.
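The residual between a predicted and a labeled 2D frame can be measured in several ways; the patent does not specify its form, so the intersection-over-union below is only a common illustrative choice for comparing a detector's box against the labeled box.

```python
def box_iou(box_a, box_b):
    """Intersection-over-union between two 2D frames given as
    (x1, y1, x2, y2) corner coordinates. Illustrates comparing a
    predicted box against the labeled 2D frame during training."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty when the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A value of 1.0 means the predicted frame coincides with the label; the training loop would iterate until this residual stops improving.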
S3: according to the 2D target information obtained by training, selecting the set of pixels in the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result; this pixel set is the pixel set of one and the same short obstacle; discretizing the identified short-obstacle target pixel set, selecting key points, and calculating the feature descriptors of the key points;
s4: matching key points of an upper frame and a lower frame by using the feature descriptors, and screening out the key points which are successfully matched;
Specifically, the local feature of each key point is calculated with the local feature descriptor BEBLID (Boosted Efficient Binary Local Image Descriptor) to obtain the local texture information of each key point, expressed as a binary numerical vector V over different directions of the image:
V = (v1, v2, …, vn), vi ∈ {0, 1}
where n is the dimension of the binary vector V.
Correspondences are established between the descriptors of the two frame images. Using a brute-force matching algorithm, every descriptor of the first frame is compared with all descriptors of the second frame, and matching is performed with the Hamming distance, i.e. the Hamming distance is calculated between the numerical vectors of each pair of descriptors.
The Hamming distance is calculated by counting the equal and unequal positions of the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal positions:
w = m / n
When w > 0.8, the two groups of points are considered matched, and the matched points are saved as a point pair Q.
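The brute-force matching of the two frames' binary descriptors can be sketched as follows; the best-match-per-descriptor policy is an illustrative assumption, with the w = m/n weight and the 0.8 threshold taken from the description.

```python
def match_key_points(desc_prev, desc_curr, threshold=0.8):
    """Brute-force matching between the descriptor lists of two frames:
    each descriptor of the previous frame is compared with every
    descriptor of the current frame, and the pair with the largest
    Hamming weight w = m/n is kept when w exceeds the threshold."""
    def weight(v1, v2):
        return sum(a == b for a, b in zip(v1, v2)) / len(v1)

    pairs = []  # saved point pairs Q, as (index_prev, index_curr)
    for i, d1 in enumerate(desc_prev):
        best_j, best_w = -1, threshold
        for j, d2 in enumerate(desc_curr):
            w = weight(d1, d2)
            if w > best_w:           # strictly greater, as in "w > 0.8"
                best_j, best_w = j, w
        if best_j >= 0:
            pairs.append((i, best_j))
    return pairs
```

In practice OpenCV's `BFMatcher` with a Hamming norm serves the same role at full speed; this loop only makes the selection rule explicit.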
S5: when the vehicle moves, the position difference of the point pair Q which is successfully matched in the upper image and the lower image is calculated by combining the vehicle movement information, and the height of a short obstacle is calculated according to the distance difference.
Specifically, the interval time between the upper and lower images is Δt;
the displacements in the lateral and longitudinal directions, Δx and Δy, are calculated from the wheel speed information of the moving vehicle, with the formulas:
Δx = vx · Δt
Δy = vy · Δt
wherein the vehicle has a velocity vx in the lateral direction and vy in the longitudinal direction; the displacement of the two sets of points in the image is Δu and Δv, and the position difference equation is:
ΔD = √((Δu − Δx)² + (Δv − Δy)²)
In the moving process of the vehicle, if a successfully matched point pair is a point on the ground, there is no position difference; if a position difference exists, the matched point pair Q is not on the ground, and the height of the point is linearly related to its position difference, with the formula:
H = a · ΔD
wherein a is a weight coefficient.
The center position coordinate of the point pair Q in the current surround top view is (u0, v0); combining the height H of the point, the real position of the point is calculated according to the similar-triangle principle. The height information of all matched point pairs is sorted from small to large, and the smallest and largest values are selected, namely the lowest point and the highest point of the short obstacle.
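The final selection of the obstacle's lowest and highest points can be sketched as:

```python
def obstacle_extremes(point_heights):
    """Sort the height estimates of all matched point pairs and return
    the smallest and the largest, i.e. the lowest and highest points of
    the short obstacle. `point_heights` maps a point id to its height H."""
    ranked = sorted(point_heights.items(), key=lambda kv: kv[1])
    return ranked[0], ranked[-1]
```

Applied to the values shown in Fig. 2, this would return the 0.0000 mm and 717.6146 mm points as the obstacle's lowest and highest points.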
Preferably, the above scheme describes the calculation of the height and position of a stationary target; it is also applicable to a moving target after motion compensation, in which case the motion of the moving target needs to be estimated in advance.
Fig. 2 shows the effect after testing with the method of the present invention, wherein 0.0000 (mm) and 717.6146 (mm) are the minimum height and the maximum height of the obstacle, together with their positions in the surround-view image.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (10)

1. A method for detecting short obstacles by an automatic parking all-round camera is characterized by comprising the following steps:
s1, acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target marking on short obstacles in the images;
s2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
S3: according to the 2D target information obtained by training, selecting the set of pixels in the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result; this pixel set is the pixel set of one and the same short obstacle; discretizing the identified short-obstacle target pixel set, selecting key points, and calculating the feature descriptors of the key points;
s4: matching key points of an upper frame and a lower frame by using the feature descriptors, and screening out the key points which are successfully matched;
s5: when the vehicle moves, the position difference of the point pair Q which is successfully matched in the upper image and the lower image is calculated by combining the vehicle movement information, and the height of a short obstacle is calculated according to the distance difference.
2. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S1 further includes:
sequentially marking the background, the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles with colors by adopting a semantic segmentation marking method according to the outline of the target;
and marking the types of the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles in sequence according to the 2D frame by adopting a 2D target marking method.
3. The automatic parking surround camera short obstacle detection method according to claim 2, wherein the semantic segmentation training comprises:
acquiring a semantic segmentation annotation training set, wherein the semantic segmentation annotation training set comprises a sample image and color annotation labels, generating a mask image corresponding to the sample image according to classification information by using the color annotation labels, and keeping the sample image and the mask image in the same size; inputting a sample image and a mask image of a semantic segmentation labeling training set into a pre-constructed semantic segmentation model to obtain classification probability corresponding to each pixel of the sample image; and updating parameters of the semantic segmentation model based on the classification probability and the mask map to obtain the trained semantic segmentation model.
4. The automatic parking surround camera short obstacle detection method according to claim 2, wherein the 2D target training comprises:
acquiring a 2D target sample training set, and processing the image by adopting a target detection network to obtain a 2D target detection result of the sample image; the target detection network comprises one or more of a backbone network, a characteristic convolution layer, a maximum pooling network, a full connection layer and a regional convolution neural network RCNN; after space search is carried out in the image according to the 2D target detection network, residual calculation is carried out on the 2D target detection network and the labeled 2D frame, and then iterative updating is carried out to obtain an optimal training weight result.
5. The method for detecting short obstacles by using an automatic parking looking-around camera according to claim 1, wherein the selecting of key points further comprises: sorting all classified pixel coordinates in ascending order of their Y value; after sorting, the maximum and minimum X coordinates of each row are edge points, and all screened edge points are used as the key points.
6. The method for detecting short obstacles by using an automatic parking looking-around camera in accordance with claim 1, wherein said calculating feature descriptors of key points further comprises:
calculating the local feature of each key point by using the local feature descriptor BEBLID to obtain the local texture information of each key point, expressing the local texture information as binary digitalization vectors V in different directions on the image,
V = (v1, v2, …, vn), vi ∈ {0, 1}
where n is the dimension of the binary vector V.
7. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S4 further includes:
establishing correspondences between the descriptors of the two frame images and calculating the Hamming distance between the numerical vectors of each pair of descriptors, wherein the Hamming distance is calculated by counting the equal and unequal positions of the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal positions:
w = m / n
when w > 0.8, the two sets of points are considered matched, and the matched points are stored as a point pair Q.
8. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S5 further includes:
the interval time between the upper and lower images is Δt;
the displacements in the lateral and longitudinal directions, Δx and Δy, are calculated from the wheel speed information of the moving vehicle, with the formulas:
Δx = vx · Δt
Δy = vy · Δt
wherein the vehicle has a velocity vx in the lateral direction and vy in the longitudinal direction; the displacement of the two sets of points in the image is Δu and Δv, and the position difference equation is:
ΔD = √((Δu − Δx)² + (Δv − Δy)²)
9. The method for detecting a short obstacle with an automatic parking surround camera according to claim 8, further comprising: in the moving process of the vehicle, if a successfully matched point pair is a point on the ground, there is no position difference; if a position difference exists, the matched point pair Q is not on the ground, and the height of the point is linearly related to its position difference, with the formula:
H = a · ΔD
wherein a is a weight coefficient.
10. The method for detecting a short obstacle with an automatic parking surround camera according to claim 9, further comprising: the center position coordinate of the point pair Q in the current surround top view is (u0, v0); combining the height H of the point, the real position of the point is calculated according to the similar-triangle principle; and the height information of all matched point pairs is sorted from small to large, and the smallest and largest values are selected, namely the lowest point and the highest point of the short obstacle.
CN202110737067.3A 2021-06-30 2021-06-30 Method for detecting short obstacles by automatic parking all-round looking camera Pending CN113569652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110737067.3A CN113569652A (en) 2021-06-30 2021-06-30 Method for detecting short obstacles by automatic parking all-round looking camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110737067.3A CN113569652A (en) 2021-06-30 2021-06-30 Method for detecting short obstacles by automatic parking all-round looking camera

Publications (1)

Publication Number Publication Date
CN113569652A true CN113569652A (en) 2021-10-29

Family

ID=78163214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110737067.3A Pending CN113569652A (en) 2021-06-30 2021-06-30 Method for detecting short obstacles by automatic parking all-round looking camera

Country Status (1)

Country Link
CN (1) CN113569652A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114407901A (en) * 2022-02-18 2022-04-29 北京小马易行科技有限公司 Control method and device for automatic driving vehicle and automatic driving system
CN114407901B (en) * 2022-02-18 2023-12-19 北京小马易行科技有限公司 Control method and device for automatic driving vehicle and automatic driving system
CN114419604A (en) * 2022-03-28 2022-04-29 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination