CN109784315A - Tracking detection method, device, system, and computer storage medium for 3D obstacles - Google Patents

Tracking detection method, device, system, and computer storage medium for 3D obstacles

Info

Publication number
CN109784315A
Authority
CN
China
Prior art keywords
obstacle
feature vector
image
current obstacle
previous obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910126019.3A
Other languages
Chinese (zh)
Other versions
CN109784315B (en)
Inventor
杜新新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Wind Map Intelligent Technology Co Ltd
Original Assignee
Suzhou Wind Map Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Wind Map Intelligent Technology Co Ltd
Priority to CN201910126019.3A
Publication of CN109784315A
Application granted
Publication of CN109784315B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a tracking detection method, device, system, and computer storage medium for 3D obstacles. The tracking detection method comprises: determining, for each current obstacle detected from the 3D point cloud and 2D image of the current frame, a second 2D feature vector corresponding to the obstacle's region in the 2D image; comparing each second 2D feature vector with each first 2D feature vector in an obstacle feature vector set to obtain multiple difference feature vectors, wherein the obstacle feature vector set stores first 2D feature vectors characterizing previously detected obstacles; performing a deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle; and determining the correspondence between current obstacles and previous obstacles according to the multiple probability values, thereby achieving obstacle tracking.

Description

Tracking detection method, device, system, and computer storage medium for 3D obstacles
Technical field
The present invention relates to obstacle tracking and detection technology, and more particularly to a tracking detection method for 3D obstacles, a tracking detection device for 3D obstacles, a tracking detection system for 3D obstacles, and a computer storage medium.
Background technique
Existing obstacle detection techniques are mainly based either on cameras for 2D obstacle detection, or solely on 3D laser radar for 3D obstacle detection.
In autonomous driving applications, a 2D bounding box provides only limited information to the planning and decision modules. An autonomous vehicle also needs detailed and accurate 3D vehicle information, including vehicle dimensions, driving direction, and the positions of other vehicles relative to the ego vehicle, in order to make driving decisions. In addition, although deep learning techniques based on 2D images have demonstrated high detection accuracy in vehicle obstacle detection, they cannot support velocity estimation, and vehicle velocity estimation is essential for time-based planning algorithms and obstacle tracking.
Cameras and laser radar (Light Detection and Ranging, LiDAR) scanners are the two most common sensors in the perception systems of autonomous vehicles. Because of perspective distortion, an autonomous driving system cannot obtain the accurate 3D information it needs from cameras alone. Even with a stereo camera system, depth estimation from the acquired images still falls short of a satisfactory performance level.
A common 64-beam laser radar can easily generate more than 100,000 points per scan, from which accurate 3D information such as vehicle dimensions, driving direction, and the positions of other vehicles relative to the ego vehicle can be obtained. However, as the detection space expands, the scale and resolution of the required laser radar point cloud grow cubically. Owing to limits on memory and computation time, exhaustively applying search algorithms or convolution operations over the entire point cloud is infeasible, tracking accuracy is severely constrained, and missed and false detections result. The main challenge in processing laser radar point clouds is therefore to reduce the computational burden while keeping the 3D spatial patterns and information accurate.
In summary, the field needs an obstacle tracking detection technique that can efficiently obtain high-quality 3D spatial patterns and information, so as to improve the obstacle tracking efficiency and accuracy of autonomous vehicles.
Summary of the invention
A brief summary of one or more aspects is given below to provide a basic understanding of these aspects. This summary is not an extensive overview of all contemplated aspects; it is neither intended to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description given later.
To efficiently obtain high-quality 3D spatial patterns and information, and thereby improve the obstacle tracking efficiency and accuracy of autonomous vehicles, the present invention provides a tracking detection method for 3D obstacles, a tracking detection device for 3D obstacles, a tracking detection system for 3D obstacles, and a computer storage medium.
The above tracking detection method for 3D obstacles provided by the invention performs tracking on detected obstacles, and comprises:
determining at least one second 2D feature vector corresponding to the 2D image region, in the 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of the current frame;
comparing each of the at least one second 2D feature vector with each first 2D feature vector in an obstacle feature vector set to obtain multiple difference feature vectors, wherein the obstacle feature vector set stores first 2D feature vectors characterizing at least one previously detected obstacle;
performing a deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle; and
determining the correspondence between the at least one current obstacle and the at least one previous obstacle according to the multiple probability values, so as to achieve obstacle tracking.
Preferably, in the above tracking detection method, determining the at least one second 2D feature vector may comprise: performing feature extraction on the 2D image region of each of the at least one current obstacle in the 2D image, to generate at least one corresponding 2D feature vector as the at least one second 2D feature vector.
Preferably, in the above tracking detection method, performing feature extraction may further comprise: performing an ROI pooling operation for each 2D image region in the image global depth feature layer of a 2D obstacle recognition deep learning framework, to generate the at least one second 2D feature vector.
Optionally, the above tracking detection method may further comprise:
inputting the at least one 2D feature vector extracted from the 2D image region of the 2D image into a convolution layer and its associated linear rectification layer, and a fully connected layer and its associated linear rectification layer, to perform computation and generate at least one enhanced 2D feature vector as the at least one second 2D feature vector.
Optionally, in the above tracking detection method, performing the deep learning computation on the multiple difference feature vectors may comprise: inputting each difference feature vector into two fully connected layers to perform computation, so as to obtain the multiple probability values corresponding to the multiple difference feature vectors.
Optionally, in the above tracking detection method, determining the correspondence between the at least one current obstacle and the at least one previous obstacle according to the multiple probability values may comprise:
for each current obstacle, matching it with the previous obstacle that has the highest probability value above a threshold; and
regarding a matched current obstacle and previous obstacle as the same obstacle and, in the obstacle feature vector set, updating the first 2D feature vector of that obstacle to the corresponding second 2D feature vector, while adding the second 2D feature vector of each newly recognized current obstacle to the obstacle feature vector set.
Preferably, in the above tracking detection method, regarding a matched current obstacle and previous obstacle as the same obstacle may specifically comprise:
performing 3D position confirmation on each pair of successfully matched current and previous obstacles; if the confirmation passes, regarding the matched current obstacle and previous obstacle as the same obstacle; otherwise, adding the second 2D feature vector of the current obstacle that failed confirmation to the obstacle feature vector set.
Preferably, in the above tracking detection method, performing 3D position confirmation on each pair of successfully matched current and previous obstacles comprises:
determining a spatial range from the position, movement speed, and potential turning of the previous obstacle, the position, movement speed, and potential turning of the observation point, and the time difference; the confirmation passes if the current obstacle lies within that spatial range, and fails otherwise.
Optionally, the above tracking detection method may further comprise:
determining the speed of an obstacle from the position change and time difference between the current obstacle and the previous obstacle regarded as the same obstacle.
According to another aspect of the present invention, a tracking detection device for 3D obstacles is also provided herein.
The above tracking detection device provided by the invention performs tracking on detected obstacles, and comprises:
a memory, in which first 2D feature vectors characterizing at least one previously detected obstacle are stored; and
a processor coupled to the memory, the processor being configured to:
determine at least one second 2D feature vector corresponding to the 2D image region, in the 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of the current frame;
compare each of the at least one second 2D feature vector with each first 2D feature vector in the obstacle feature vector set to obtain multiple difference feature vectors;
perform a deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle;
and
determine the correspondence between the at least one current obstacle and the at least one previous obstacle according to the multiple probability values, so as to achieve obstacle tracking.
Preferably, in the above tracking detection device, the processor may be further configured to:
perform feature extraction on the 2D image region of each of the at least one current obstacle in the 2D image, to generate at least one corresponding 2D feature vector as the at least one second 2D feature vector.
Preferably, in the above tracking detection device, the processor may be further configured to:
perform an ROI pooling operation for each 2D image region in the image global depth feature layer of a 2D obstacle recognition deep learning framework, to generate the at least one second 2D feature vector.
Optionally, in the above tracking detection device, the processor may be further configured to:
input the at least one 2D feature vector extracted from the 2D image region of the 2D image into a convolution layer and its associated linear rectification layer, and a fully connected layer and its associated linear rectification layer, to perform computation and generate at least one enhanced 2D feature vector as the at least one second 2D feature vector.
Optionally, in the above tracking detection device, the processor may be further configured such that:
performing the deep learning computation on the multiple difference feature vectors comprises inputting each difference feature vector into two fully connected layers to perform computation, so as to obtain the multiple probability values corresponding to the multiple difference feature vectors.
Optionally, in the above tracking detection device, the processor may be further configured to:
for each current obstacle, match it with the previous obstacle that has the highest probability value above a threshold; and
regard a matched current obstacle and previous obstacle as the same obstacle and, in the obstacle feature vector set, update the first 2D feature vector of that obstacle to the corresponding second 2D feature vector, while adding the second 2D feature vector of each newly recognized current obstacle to the obstacle feature vector set.
Preferably, in the above tracking detection device, the processor may be further configured to:
perform 3D position confirmation on each pair of successfully matched current and previous obstacles; if the confirmation passes, regard the matched current obstacle and previous obstacle as the same obstacle; otherwise, add the second 2D feature vector of the current obstacle that failed confirmation to the obstacle feature vector set.
Preferably, in the above tracking detection device, the processor may be further configured to:
determine a spatial range from the position, movement speed, and potential turning of the previous obstacle, the position, movement speed, and potential turning of the observation point, and the time difference; the confirmation passes if the current obstacle lies within that spatial range, and fails otherwise.
Optionally, in the above tracking detection device, the processor may be further configured to:
determine the speed of an obstacle from the position change and time difference between the current obstacle and the previous obstacle regarded as the same obstacle.
According to another aspect of the present invention, a tracking detection system for 3D obstacles is also provided herein.
The above tracking detection system provided by the invention comprises:
an image capture device for obtaining 2D images;
a point cloud capture device for obtaining 3D point clouds; and
any of the above tracking detection devices.
According to another aspect of the present invention, a computer storage medium is also provided herein, on which a computer program is stored; when the computer program is executed by a processor, the steps of any of the above tracking detection methods for 3D obstacles can be implemented.
Brief description of the drawings
The above features and advantages of the present invention will be better understood after reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components with similar related characteristics or features may have the same or similar reference numerals.
Fig. 1 is a schematic flowchart of the tracking detection method for 3D obstacles provided by an embodiment of the present invention.
Fig. 2 is a schematic flowchart of the method for determining the spatial range R provided by an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of the tracking detection device for 3D obstacles provided by an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of the tracking detection system for 3D obstacles provided by an embodiment of the present invention.
Reference numerals
101-104 steps of the tracking detection method for 3D obstacles;
30 tracking detection device for 3D obstacles;
31 memory;
32 processor;
40 tracking detection system for 3D obstacles;
41 image capture device;
42 point cloud capture device.
Specific embodiment
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. Although the description of the invention is presented together with preferred embodiments, this does not mean that the features of the invention are limited to those embodiments. On the contrary, the purpose of presenting the invention in conjunction with embodiments is to cover other options or modifications that may be extended from the claims of the invention. The following description contains many specific details in order to provide a thorough understanding of the invention; the invention may also be practiced without these details. In addition, some details are omitted from the description in order to avoid confusing or obscuring the focus of the invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified and limited, the terms "installed", "connected", and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediary, or internal between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In addition, the terms "upper", "lower", "left", "right", "top", "bottom", "horizontal", and "vertical" used in the following description should be understood with respect to the orientations depicted in this section and the relevant drawings. These relative terms are used merely for convenience of description and do not imply that the described device must be manufactured or operated in a particular orientation, and therefore should not be construed as limiting the invention.
It will be appreciated that, although the terms "first", "second", "third", etc. may be used herein to describe various components, regions, layers, and/or portions, these components, regions, layers, and/or portions should not be limited by those terms, which serve only to distinguish them from one another. Thus, a first component, region, layer, and/or portion discussed below could be termed a second component, region, layer, and/or portion without departing from some embodiments of the invention.
To efficiently obtain high-quality 3D spatial patterns and information, and thereby improve the obstacle tracking efficiency and accuracy of autonomous vehicles, the present invention provides embodiments of a tracking detection method for 3D obstacles, a tracking detection device for 3D obstacles, a tracking detection system for 3D obstacles, and a computer storage medium.
As shown in Fig. 1, the tracking detection method for 3D obstacles provided by this embodiment can be used to track detected obstacles, and may comprise:
101: determining at least one second 2D feature vector corresponding to the 2D image region, in the 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of the current frame.
In autonomous driving applications, the obstacles may include, but are not limited to, other vehicles on the road. While the autonomous vehicle is driving, the distances of these obstacle vehicles relative to the ego vehicle change continuously, so they must be detected accurately and efficiently and tracked in real time to support driving decisions.
The 2D image of the obstacles can be acquired by photographing with a camera. The 3D point cloud can be obtained by scanning the surrounding environment with a laser radar while the camera is photographing. From the relative pose of the laser radar and the camera and the camera's intrinsic parameters (focal length, principal point, etc.), the bounding box of each obstacle scanned by the laser radar can be located as a corresponding image region in the 2D image.
Using existing or future technical means, those skilled in the art can accurately and efficiently detect, from one frame of 3D point cloud and its corresponding 2D image, each obstacle around the ego vehicle at the current moment, together with detailed and accurate 3D information including but not limited to vehicle dimensions, driving direction, and relative position. By mapping the 3D point cloud into the 2D image, the 2D image region of each detected obstacle in the 2D image can be determined. For example, each vertex of an obstacle's bounding box in the 3D point cloud can be projected into the 2D image using a point-cloud-to-image projection method, thereby determining the outline of the 2D image region.
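As an illustration of this projection step, the following is a minimal sketch under a simple pinhole camera model. The intrinsic values (fx, fy, cx, cy), the assumption that the box vertices are already in the camera frame, and the unit-cube obstacle are all assumptions for the example, not values from the patent.

```python
# Hypothetical sketch: map a LiDAR 3D bounding box into the camera image
# to obtain the 2D image region, using a pinhole model. Vertices are
# assumed to be in the camera coordinate frame (Z pointing forward).

def project_point(pt, fx, fy, cx, cy):
    """Project one 3D point (camera frame) to a pixel (u, v)."""
    x, y, z = pt
    return (fx * x / z + cx, fy * y / z + cy)

def box_to_image_region(vertices, fx, fy, cx, cy):
    """Project all box vertices and take their 2D axis-aligned hull."""
    pts = [project_point(v, fx, fy, cx, cy) for v in vertices]
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    return (min(us), min(vs), max(us), max(vs))  # (u_min, v_min, u_max, v_max)

# Example: a unit cube 10 m in front of the camera.
verts = [(x, y, 10.0 + z)
         for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)]
region = box_to_image_region(verts, fx=700.0, fy=700.0, cx=640.0, cy=360.0)
```

The resulting rectangle is the outline of the obstacle's 2D image region; a real implementation would additionally apply the LiDAR-to-camera extrinsic transform before projecting.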
Those skilled in the art will understand that mapping the 3D point cloud into the 2D image is only one specific way of determining the 2D image region. In other embodiments, the 2D image region may be determined in other ways.
For each detected current obstacle, feature extraction can be performed on its 2D image region in the 2D image to generate a corresponding number of 2D feature vectors, thereby determining the at least one second 2D feature vector. Each 2D feature vector can be a multi-dimensional vector characterizing the obstacle vehicle's relevant information in the 2D image.
The feature extraction may comprise: in the image global depth feature layer of a 2D obstacle recognition deep learning framework (for example, the Conv5_3 layer of the Faster R-CNN deep learning framework), performing an ROI (Region of Interest) pooling operation for each 2D image region, to generate the second 2D feature vectors.
As an example, the configuration parameters of the above Faster R-CNN framework are shown in the following table:
Table 1
The image global depth feature layer is a kind of deep learning layer and is a known technique to those skilled in the art; it is documented in detail at http://caffe.berkeleyvision.org/tutorial/layers.html.
Those skilled in the art will understand that the conv5_3 layer is one specific image global depth feature layer. In other embodiments, using other deep learning frameworks such as Fast R-CNN or MS-CNN, ROI pooling may likewise be performed for each 2D image region in other image global depth feature layers, to generate the corresponding 2D feature vector for each 2D image region.
Optionally, the 2D feature vectors extracted from the 2D image regions may further be input into a convolution layer and its associated linear rectification layer (channel: 512, pad: 1, kernel: 3), and a fully connected layer and its associated linear rectification layer (channel: 256), to compute a corresponding number of enhanced 2D feature vectors, which then serve as the second 2D feature vectors. The enhanced 2D feature vectors may further include features useful for object tracking.
The convolution layer may consist of several convolution units, whose parameters can be optimized by the back-propagation algorithm. The purpose of the convolution operation includes, but is not limited to, extracting different features of the input. The first convolution layer may only extract low-level features (such as edges, lines, and corners), while a feature extraction convolutional neural network composed of more convolution layers can iteratively extract more complex features from those low-level features.
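The fully connected and linear rectification (ReLU) stages of the enhancement head described above can be sketched in pure Python as follows. The 512-channel convolution stage is omitted for brevity, and the toy weights and the 2-channel output width are illustrative assumptions (the patent specifies channel: 256), not trained parameters.

```python
# Minimal sketch of a fully connected layer followed by linear
# rectification (ReLU), as used to produce an "enhanced" 2D feature
# vector from a pooled ROI feature. Weights are toy values.

def fully_connected(x, weights, bias):
    """y_j = sum_i x_i * W[j][i] + b_j for each output channel j."""
    return [sum(xi * wji for xi, wji in zip(x, row)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    """Linear rectification: clamp negative activations to zero."""
    return [max(0.0, v) for v in x]

def enhance(feature, weights, bias):
    """Map a pooled ROI feature to an 'enhanced' 2D feature vector."""
    return relu(fully_connected(feature, weights, bias))

feat = [1.0, -2.0, 0.5]
W = [[0.2, 0.1, 0.0],    # toy 2-channel layer (patent uses channel: 256)
     [-0.5, 0.3, 0.4]]
b = [0.1, 0.0]
enhanced = enhance(feat, W, b)
```

In practice this would run inside the deep learning framework; the sketch only shows the arithmetic each layer performs.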
102: comparing each of the at least one second 2D feature vector with each first 2D feature vector in the obstacle feature vector set to obtain multiple difference feature vectors, wherein the obstacle feature vector set stores first 2D feature vectors characterizing at least one previously detected obstacle.
As noted above, while the autonomous vehicle is driving, the distances of obstacle vehicles relative to the ego vehicle change continuously, so these obstacle vehicles must be detected accurately and efficiently and tracked in real time to support driving decisions.
The second 2D feature vectors of the at least one obstacle detected in each previous frame of 3D point cloud and 2D image can be stored in the obstacle feature vector set, to serve as first 2D feature vectors.
By comparing (for example, subtracting) the second 2D feature vector of an obstacle detected in the current frame with the first 2D feature vectors in the obstacle feature vector set, a corresponding number of difference feature vectors can be obtained, characterizing the change in the obstacle's information over the time difference between the two frames of 3D point cloud and 2D image. Each difference feature vector can be a multi-dimensional vector of the same dimensionality as the 2D feature vectors.
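Under the subtraction reading mentioned above, forming a difference feature vector for every (current, previous) pair can be sketched as follows; the feature values are illustrative only.

```python
# Sketch: element-wise difference between a current (second) 2D feature
# vector and a stored (first) 2D feature vector. One difference vector
# is produced per (current obstacle, previous obstacle) pair.

def difference_vector(current, previous):
    """Element-wise difference of two same-length feature vectors."""
    assert len(current) == len(previous)
    return [c - p for c, p in zip(current, previous)]

current_feats = [[0.9, 0.1], [0.2, 0.8]]   # second 2D feature vectors
stored_feats = [[1.0, 0.0], [0.3, 0.7]]    # first 2D feature vectors
diffs = [difference_vector(c, p)
         for c in current_feats for p in stored_feats]
```

With two current and two previous obstacles this yields four difference vectors, matching the all-pairs comparison the method describes.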
It will be understood that the obstacle feature vector set may contain the first 2D feature vectors of multiple different obstacles. When an obstacle detected in a later frame of 3D point cloud and 2D image is judged to be the same obstacle as one in the obstacle feature vector set, the second 2D feature vector of that obstacle from the later frame can be stored in the set, updating the obstacle's first 2D feature vector for comparison with obstacles detected in subsequent frames.
103: performing a deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle.
The deep learning computation may comprise inputting each difference feature vector into two fully connected layers (first layer channel: 256, second layer channel: 2) to perform computation, so as to obtain the multiple probability values corresponding to the multiple difference feature vectors. Through this computation, the probability that each current obstacle and each previous obstacle are the same obstacle can be obtained.
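A toy sketch of this matching head is given below: a difference feature vector passes through two fully connected layers, and a softmax over the two outputs is read as a same-obstacle probability. The hidden width of 1, all weight values, and the softmax output activation are illustrative assumptions (the patent specifies only the channel widths 256 and 2).

```python
# Toy sketch of the matching head: two fully connected layers applied to
# a difference feature vector, followed by a softmax over the 2 outputs.
import math

def fc(x, W, b):
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(W, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def match_probability(diff, W1, b1, W2, b2):
    hidden = relu(fc(diff, W1, b1))      # first FC layer + ReLU
    logits = fc(hidden, W2, b2)          # second FC layer, 2 outputs
    return softmax(logits)[1]            # P(same obstacle)

# A small difference (likely the same obstacle) vs a large one:
W1, b1 = [[-1.0, -1.0]], [1.0]           # toy 1-unit hidden layer
W2, b2 = [[0.0], [2.0]], [0.0, 0.0]
p_small = match_probability([0.05, 0.05], W1, b1, W2, b2)
p_large = match_probability([0.9, 0.9], W1, b1, W2, b2)
```

As expected for a trained head, a small feature difference yields a higher same-obstacle probability than a large one.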
104: determining the correspondence between the at least one current obstacle and the at least one previous obstacle according to the multiple probability values, so as to achieve obstacle tracking.
By presetting a threshold for judging whether a current obstacle and a previous obstacle are the same obstacle, each current obstacle can be matched with the previous obstacle that has the highest probability value above that threshold; and
a matched current obstacle and previous obstacle are regarded as the same obstacle, with the first 2D feature vector of that obstacle in the obstacle feature vector set updated to the corresponding second 2D feature vector; the second 2D feature vector of an unmatched, i.e. newly recognized, current obstacle is added to the obstacle feature vector set.
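The matching rule just described can be sketched as a greedy per-obstacle assignment; the threshold value of 0.7 is an assumed example, not taken from the patent.

```python
# Sketch of the correspondence step: for each current obstacle, take the
# previous obstacle with the highest same-obstacle probability; accept it
# only above a threshold, otherwise register the obstacle as new.

def associate(prob_matrix, threshold=0.7):
    """prob_matrix[i][j]: P(current i and previous j are the same).
    Returns ({current_idx: previous_idx}, [unmatched current indices])."""
    matches, new_obstacles = {}, []
    for i, row in enumerate(prob_matrix):
        j = max(range(len(row)), key=lambda k: row[k])
        if row[j] > threshold:
            matches[i] = j           # update stored feature vector of j
        else:
            new_obstacles.append(i)  # add current feature vector as new entry
    return matches, new_obstacles

probs = [[0.9, 0.2],
         [0.1, 0.3]]
matches, fresh = associate(probs, threshold=0.7)
```

Here current obstacle 0 is matched to previous obstacle 0, while current obstacle 1 has no probability above the threshold and is treated as newly recognized.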
Preferably, 3D position confirmation may further be performed on each pair of successfully matched current and previous obstacles. If the confirmation passes, the matched current obstacle and previous obstacle are regarded as the same obstacle; otherwise, the second 2D feature vector of the current obstacle that failed confirmation is added to the obstacle feature vector set as a new obstacle.
The above 3D position confirmation for each pair of successfully matched current and previous obstacles may proceed as follows:
A spatial range is determined from the position, movement speed, and potential turning of the previous obstacle, the position, movement speed, and potential turning of the observation point, and the time difference. In response to the current obstacle lying within that spatial range, the obstacle is considered able to reach the new position within the time difference, and the confirmation passes; otherwise, the obstacle is considered unable to reach the new position within the time difference, and the confirmation fails.
Specifically, the above spatial range R can be determined by the method shown in Fig. 2.
As illustrated in Fig. 2, assume the observation point (the ego vehicle) is initially at point O, the previous obstacle is at point A, and the obstacle travels along its body (B-C) direction at a maximum speed of 135 km/h.
From the maximum speed and the above time difference, the farthest points B and C that the obstacle may reach can be found. If, within the time difference, the ego vehicle also travels at maximum speed along the positive z-axis, then the region in which the obstacle may lie relative to the observation point at the current time can be represented by the polygon BCDE.
Considering that within the time difference the obstacle may turn left or right rather than travel straight along its body direction, lines BC and DE can each be translated by a distance d (0.5 m) to compensate for turning, yielding B′C′ and D′E′. The minimum rectangle that covers B′C′D′E′ is then B′C″D′E″.
Likewise, the observation vehicle itself may turn. To compensate for this, lines C″B′ and D′E″ can be rotated about point O by an angle θ (0.05) to obtain lines FG and KH, thereby determining the above spatial range R.
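A hedged sketch of the confirmation check follows. The rectangle-only range below is a deliberate simplification of the Fig. 2 construction: it keeps the farthest-reach extent and the 0.5 m lateral margin from the text but omits the θ rotation that compensates for ego turning, and all other numeric values are illustrative:

```python
import math

def reachable_rectangle(ax, ay, heading, v_max, dt, d=0.5):
    """Simplified spatial range: a rectangle aligned with the obstacle's
    body direction, extending v_max*dt forward and backward from (ax, ay)
    and widened by d on each side to compensate for turning."""
    ux, uy = math.cos(heading), math.sin(heading)   # unit vector along body
    nx, ny = -uy, ux                                # lateral unit normal
    r = v_max * dt                                  # farthest reachable distance
    return [(ax + sx * r * ux + sy * d * nx,
             ay + sx * r * uy + sy * d * ny)
            for sx, sy in ((1, 1), (1, -1), (-1, -1), (-1, 1))]

def inside_convex(poly, px, py):
    """Point-in-convex-polygon test via consistent cross-product signs."""
    signs = []
    for k in range(len(poly)):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % len(poly)]
        signs.append((x2 - x1) * (py - y1) - (y2 - y1) * (px - x1))
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# Obstacle at the origin heading along +x at up to 37.5 m/s (135 km/h),
# observed again 0.1 s later: it can have moved at most 3.75 m along its body.
poly = reachable_rectangle(0.0, 0.0, 0.0, 135 / 3.6, 0.1)
assert inside_convex(poly, 3.0, 0.2)        # plausible new position: confirmed
assert not inside_convex(poly, 10.0, 0.0)   # unreachable within 0.1 s: rejected
```

A full implementation would also offset the range by the ego vehicle's own displacement and apply the θ sweep about O, as described for polygon BCDE and lines FG/KH above.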
It will be understood by those skilled in the art that the method of determining the spatial range R shown in Fig. 2 is only one specific embodiment. In other embodiments, those skilled in the art may determine the spatial range R in other ways.
Optionally, those skilled in the art may further determine the speed of an obstacle from the position change and time difference between the current obstacle and previous obstacle regarded as the same obstacle, enabling time-aware planning algorithms and obstacle tracking.
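The optional speed estimate described here is a direct displacement-over-time computation; a minimal sketch, assuming 2D positions:

```python
def estimate_speed(prev_pos, cur_pos, dt):
    """Speed of an obstacle regarded as the same across two frames,
    from its position change and the time difference dt (seconds)."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    return (dx * dx + dy * dy) ** 0.5 / dt

# Obstacle moved 3 m along x and 4 m along y in 0.5 s -> 10 m/s.
v = estimate_speed((0.0, 0.0), (3.0, 4.0), 0.5)
assert abs(v - 10.0) < 1e-9
```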
Although for simplicity of explanation the above methods are illustrated and described as a series of actions, it is to be understood and appreciated that the methods are not limited by the order of the actions, as, in accordance with one or more embodiments, some actions may occur in different orders and/or concurrently with other actions shown and described herein, or not shown and described herein but appreciable by those skilled in the art.
According to another aspect of the present invention, an embodiment of a tracking detection device for 3D obstacles is also provided herein.
As shown in Fig. 3, the tracking detection device 30 provided in this embodiment can be used to track detected obstacles. The detection device 30 may include a memory 31 and a processor 32 coupled to the memory 31.
The memory may store first 2D feature vectors characterizing at least one previously detected previous obstacle, to be compared with second 2D feature vectors of at least one current obstacle in order to judge whether the two are the same obstacle.
The processor 32 may be configured to: determine at least one second 2D feature vector corresponding to a 2D image region, in the 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of the current frame; compare each of the at least one second 2D feature vector with each first 2D feature vector in the obstacle feature vector set to obtain multiple difference feature vectors; perform deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle; and determine, according to the multiple probability values, the correspondence between the at least one current obstacle and the at least one previous obstacle to achieve obstacle tracking.
It will be understood by those skilled in the art that the above configuration of the processor 32 is only one specific scheme for implementing the tracking detection method for 3D obstacles. In other embodiments, the processor 32 may also be configured to implement any of the above tracking detection methods for 3D obstacles.
According to another aspect of the present invention, an embodiment of a tracking detection system 40 for 3D obstacles is also provided herein.
As shown in Fig. 4, the detection system 40 may include an image capture device 41 for obtaining 2D images; a point cloud data capture device 42 for obtaining 3D point clouds; and any of the above tracking detection devices 30. The image capture device 41 may include, but is not limited to, cameras and video cameras. The point cloud data capture device 42 may include, but is not limited to, lidar.
According to another aspect of the present invention, an embodiment of a computer storage medium is also provided herein.
A computer program is stored in the computer storage medium. When the computer program is executed by a processor, the steps of any of the above tracking detection methods for 3D obstacles can be implemented.
Those skilled in the art will further appreciate that the various illustrative logic blocks, modules, circuits, and algorithm steps described in connection with the embodiments herein may be implemented as electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The processors described herein may be implemented with electronic hardware, computer software, or any combination thereof. Whether such processors are implemented as hardware or software depends on the particular application and the overall design constraints imposed on the system. By way of example, a processor, any portion of a processor, or any combination of processors presented in this disclosure may be implemented with a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a state machine, gated logic, discrete hardware circuits, and other suitable processing components configured to perform the various functions described throughout this disclosure. The functionality of a processor, any portion of a processor, or any combination of processors presented in this disclosure may also be implemented with software executed by a microprocessor, microcontroller, DSP, or other suitable platform.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is also properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1. A tracking detection method for 3D obstacles, for tracking detected obstacles, the tracking detection method comprising:
determining at least one second 2D feature vector corresponding to a 2D image region, in a 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of a current frame;
comparing each of the at least one second 2D feature vector with each first 2D feature vector in an obstacle feature vector set to obtain multiple difference feature vectors, wherein the obstacle feature vector set stores first 2D feature vectors characterizing at least one previously detected previous obstacle;
performing deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle; and
determining, according to the multiple probability values, a correspondence between the at least one current obstacle and the at least one previous obstacle to achieve obstacle tracking.
2. The tracking detection method of claim 1, wherein determining the at least one second 2D feature vector comprises performing feature extraction on the 2D image region of each of the at least one current obstacle in the 2D image to generate at least one corresponding 2D feature vector as the at least one second 2D feature vector.
3. The tracking detection method of claim 2, wherein performing feature extraction comprises performing an ROI pooling operation for each 2D image region on the global image depth feature layer of a deep learning framework for 2D obstacle recognition to generate the at least one second 2D feature vector.
4. The tracking detection method of claim 2, further comprising:
inputting the at least one 2D feature vector extracted from the 2D image region in the 2D image into a convolutional layer with its associated rectified linear layer and a fully connected layer with its associated rectified linear layer to perform computation, so as to generate at least one enhanced 2D feature vector as the at least one second 2D feature vector.
5. The tracking detection method of claim 1, wherein performing deep learning computation on the multiple difference feature vectors comprises inputting each difference feature vector into two fully connected layers to obtain the multiple probability values corresponding to the multiple difference feature vectors.
6. The tracking detection method of claim 1, wherein determining the correspondence between the at least one current obstacle and the at least one previous obstacle according to the multiple probability values comprises:
for each current obstacle, matching it with the previous obstacle that has the highest probability value above a threshold; and
regarding the matched current obstacle and previous obstacle as the same obstacle, updating the first 2D feature vector corresponding to that obstacle in the obstacle feature vector set to the corresponding second 2D feature vector, and adding the second 2D feature vector of a newly recognized current obstacle to the obstacle feature vector set.
7. The tracking detection method of claim 6, wherein regarding the matched current obstacle and previous obstacle as the same obstacle specifically comprises:
performing 3D position confirmation on each pair of successfully matched current and previous obstacles; in response to the confirmation passing, regarding the matched current obstacle and previous obstacle as the same obstacle; otherwise adding the second 2D feature vector of the current obstacle that failed confirmation to the obstacle feature vector set.
8. The tracking detection method of claim 7, wherein performing 3D position confirmation on each pair of successfully matched current and previous obstacles comprises:
determining a spatial range according to the position, movement speed, and potential turning of the previous obstacle, the position, movement speed, and potential turning of an observation point, and a time difference; in response to the current obstacle being within the spatial range, the confirmation passes; otherwise, the confirmation fails.
9. The tracking detection method of claim 6, further comprising:
determining the speed of an obstacle according to the position change and time difference between the current obstacle and previous obstacle regarded as the same obstacle.
10. A tracking detection device for 3D obstacles, for tracking detected obstacles, the tracking detection device comprising:
a memory storing first 2D feature vectors characterizing at least one previously detected previous obstacle; and
a processor coupled to the memory, the processor configured to:
determine at least one second 2D feature vector corresponding to a 2D image region, in a 2D image, of at least one current obstacle detected from the 3D point cloud and 2D image of a current frame;
compare each of the at least one second 2D feature vector with each first 2D feature vector in an obstacle feature vector set to obtain multiple difference feature vectors;
perform deep learning computation on the multiple difference feature vectors to generate corresponding multiple probability values, each probability value indicating the probability that a current obstacle and a previous obstacle are the same obstacle; and
determine, according to the multiple probability values, a correspondence between the at least one current obstacle and the at least one previous obstacle to achieve obstacle tracking.
11. The tracking detection device of claim 10, wherein the processor is further configured to:
perform feature extraction on the 2D image region of each of the at least one current obstacle in the 2D image to generate at least one corresponding 2D feature vector as the at least one second 2D feature vector.
12. The tracking detection device of claim 11, wherein the processor is further configured to:
perform an ROI pooling operation for each 2D image region on the global image depth feature layer of a deep learning framework for 2D obstacle recognition to generate the at least one second 2D feature vector.
13. The tracking detection device of claim 11, wherein the processor is further configured to:
input the at least one 2D feature vector extracted from the 2D image region in the 2D image into a convolutional layer with its associated rectified linear layer and a fully connected layer with its associated rectified linear layer to perform computation, so as to generate at least one enhanced 2D feature vector as the at least one second 2D feature vector.
14. The tracking detection device of claim 10, wherein the processor is further configured to:
input each difference feature vector into two fully connected layers to obtain the multiple probability values corresponding to the multiple difference feature vectors.
15. The tracking detection device of claim 10, wherein the processor is further configured to:
for each current obstacle, match it with the previous obstacle that has the highest probability value above a threshold; and
regard the matched current obstacle and previous obstacle as the same obstacle, update the first 2D feature vector corresponding to that obstacle in the obstacle feature vector set to the corresponding second 2D feature vector, and add the second 2D feature vector of a newly recognized current obstacle to the obstacle feature vector set.
16. The tracking detection device of claim 15, wherein the processor is further configured to:
perform 3D position confirmation on each pair of successfully matched current and previous obstacles; in response to the confirmation passing, regard the matched current obstacle and previous obstacle as the same obstacle; otherwise add the second 2D feature vector of the current obstacle that failed confirmation to the obstacle feature vector set.
17. The tracking detection device of claim 16, wherein the processor is further configured to:
determine a spatial range according to the position, movement speed, and potential turning of the previous obstacle, the position, movement speed, and potential turning of an observation point, and a time difference; in response to the current obstacle being within the spatial range, the confirmation passes; otherwise, the confirmation fails.
18. The tracking detection device of claim 15, wherein the processor is further configured to:
determine the speed of an obstacle according to the position change and time difference between the current obstacle and previous obstacle regarded as the same obstacle.
19. A tracking detection system for 3D obstacles, comprising:
an image capture device for obtaining 2D images;
a point cloud data capture device for obtaining 3D point clouds; and
the tracking detection device of any one of claims 10-18.
20. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-9.
CN201910126019.3A 2019-02-20 2019-02-20 Tracking detection method, device and system for 3D obstacle and computer storage medium Active CN109784315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126019.3A CN109784315B (en) 2019-02-20 2019-02-20 Tracking detection method, device and system for 3D obstacle and computer storage medium


Publications (2)

Publication Number Publication Date
CN109784315A true CN109784315A (en) 2019-05-21
CN109784315B CN109784315B (en) 2021-11-09

Family

ID=66504663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126019.3A Active CN109784315B (en) 2019-02-20 2019-02-20 Tracking detection method, device and system for 3D obstacle and computer storage medium

Country Status (1)

Country Link
CN (1) CN109784315B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541416A (en) * 2020-12-02 2021-03-23 深兰科技(上海)有限公司 Cross-radar obstacle tracking method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335984A (en) * 2014-06-19 2016-02-17 株式会社理光 Method and apparatus for tracking object
CN106096516A (en) * 2016-06-01 2016-11-09 常州漫道罗孚特网络科技有限公司 The method and device that a kind of objective is followed the tracks of
CN107330925A (en) * 2017-05-11 2017-11-07 北京交通大学 A kind of multi-obstacle avoidance detect and track method based on laser radar depth image
CN107451602A (en) * 2017-07-06 2017-12-08 浙江工业大学 A kind of fruits and vegetables detection method based on deep learning
CN107748869A (en) * 2017-10-26 2018-03-02 深圳奥比中光科技有限公司 3D face identity authentications and device
CN107944435A (en) * 2017-12-27 2018-04-20 广州图语信息科技有限公司 A kind of three-dimensional face identification method, device and processing terminal
EP3327625A1 (en) * 2016-11-29 2018-05-30 Autoequips Tech Co., Ltd. Vehicle image processing method and system thereof
CN108229548A (en) * 2017-12-27 2018-06-29 华为技术有限公司 A kind of object detecting method and device
CN108537191A (en) * 2018-04-17 2018-09-14 广州云从信息科技有限公司 A kind of three-dimensional face identification method based on structure light video camera head
CN108734654A (en) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 It draws and localization method, system and computer readable storage medium


Also Published As

Publication number Publication date
CN109784315B (en) 2021-11-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant