CN108021891A - Vehicle environment recognition method and system combining deep learning with traditional algorithms - Google Patents
Vehicle environment recognition method and system combining deep learning with traditional algorithms
- Publication number
- CN108021891A CN108021891A CN201711270959.7A CN201711270959A CN108021891A CN 108021891 A CN108021891 A CN 108021891A CN 201711270959 A CN201711270959 A CN 201711270959A CN 108021891 A CN108021891 A CN 108021891A
- Authority
- CN
- China
- Prior art keywords
- information
- confidence level
- algorithm
- deep learning
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a vehicle environment recognition method combining deep learning with traditional algorithms. It mainly includes the steps of point cloud and video information acquisition, video information confidence estimation and point cloud information confidence estimation, algorithm module selection, point cloud and video information fusion, and target recognition and result output. The invention effectively improves the precision and reliability of radar and video information processing, and thereby improves the performance of vehicle environment recognition.
Description
Technical field
The present invention relates to the field of vehicle active safety, and more particularly to a vehicle environment recognition system and method combining deep learning with traditional algorithms.
Background technology
A vehicle environment recognition system usually uses radar and video as its main detection sensors, performing lane line detection, traffic target recognition, and so on. Radar has the advantages of high accuracy of detected target data and the ability to obtain target velocity; video has the advantage of being able to distinguish and "see" target objects. By combining their complementary advantages, the precision and reliability of vehicle environment recognition can be improved.
In traditional algorithms, radar and video information is generally processed through steps such as filtering, segmentation, feature extraction, matching, and target recognition. However, the performance of data processing based on traditional algorithms fluctuates significantly with changes in the driving environment, such as rain, backlight, standing water, roadside shadows, and other changes in weather, illumination, and road conditions. These may cause situations such as false lane line detection, traffic target misjudgment, and drivable road segmentation errors, reducing the reliability and confidence of vehicle environment recognition.
In recent years, deep learning has developed continuously in vehicle environment recognition fields such as target recognition, lane line detection, and drivable road segmentation, achieving high recognition accuracy and reliability. The shortcoming of deep learning, however, is its long computation time: if all point cloud data and image data were processed with deep learning, it would be difficult to meet the requirements of real-time environment recognition.
Therefore, processing radar and video data with deep learning and traditional algorithms in combination, so that the two effectively complement each other, satisfies both the precision and real-time requirements of vehicle environment recognition and plays an important role in improving its performance. How to fuse deep learning and traditional algorithms in processing radar and video data has thus become an important technical problem.
Summary of the invention
The purpose of the present invention is to address the shortcomings of traditional algorithms in processing radar and video information in vehicle environment recognition systems. The invention discloses a vehicle environment recognition method and system combining deep learning with traditional algorithms, which introduces deep learning into vehicle environment recognition and applies different combinations of deep learning and traditional algorithms to radar and video information of different confidence, so as to improve the precision and reliability of information processing and effectively improve the performance of vehicle environment recognition.
The vehicle environment recognition method combining deep learning with traditional algorithms disclosed by the invention includes the following steps:
Point cloud and video information acquisition: collect the point cloud information and the video information of the vehicle environment by radar and camera respectively;
Video information confidence estimation: perform a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast, interference level, sharpness, and brightness, obtaining the confidence of the video information;
Point cloud information confidence estimation: perform a quantitative evaluation with a confidence function of the point cloud information parameterized by the point cloud noise point count, the maximum deviation of noise points, the average deviation of noise points, the point cloud hole count, the maximum hole radius, and the mean hole radius, obtaining the confidence of the point cloud information;
Algorithm module selection: if the confidence of the video/point cloud information is greater than a first/second predetermined threshold, send the video/point cloud information to the traditional algorithm module for processing; otherwise, send it to the deep learning module for processing;
Point cloud and video information fusion: perform information fusion on the processed video information and point cloud information;
Target recognition and result output: perform target recognition using the fused information, and output the recognition result.
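As a sketch only, the steps above can be wired together as a small routing pipeline. All function names, and the idea of passing the processing modules in as callables, are illustrative assumptions; the patent does not prescribe an implementation:

```python
# Minimal sketch of the five-step pipeline (illustrative names and
# thresholds; not the patent's reference implementation).

def recognize_environment(cloud, video,
                          cloud_conf, video_conf,
                          process_traditional, process_deep,
                          fuse, detect,
                          t_video=0.6, t_cloud=0.6):
    """Route each modality to a traditional or deep learning module by
    confidence, fuse the results, and run target recognition."""
    cp = video_conf(video)   # video confidence estimate
    sp = cloud_conf(cloud)   # point cloud confidence estimate

    # High confidence -> cheap traditional processing; low -> deep learning.
    video_out = (process_traditional(video) if cp > t_video
                 else process_deep(video))
    cloud_out = (process_traditional(cloud) if sp > t_cloud
                 else process_deep(cloud))

    return detect(fuse(video_out, cloud_out))
```

The callables stand in for the confidence estimation units, algorithm modules, fusion unit, and recognition unit described later in the text.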
Further, in the algorithm module selection step, the confidence of the video information is normalized as follows:
cp = (wc1*c1 + wc2*c2 + wc3*c3 + wc4*c4 + wc5*c5 + wc6*c6) / (c1 + c2 + c3 + c4 + c5 + c6)
where cp is the normalized confidence estimate of the video information, c1 is the video image mean square error, c2 is the peak signal-to-noise ratio, c3 is the color cast, c4 is the interference level, c5 is the sharpness, c6 is the brightness, and wc1, wc2, wc3, wc4, wc5, wc6 are weight coefficients with wc1 + wc2 + wc3 + wc4 + wc5 + wc6 = 1.
Further, in the algorithm module selection step, the confidence of the point cloud information is normalized as follows:
sp = (ws1*s1 + ws2*s2 + ws3*s3 + ws4*s4 + ws5*s5 + ws6*s6) / (s1 + s2 + s3 + s4 + s5 + s6)
where sp is the normalized confidence estimate of the point cloud information, s1 is the point cloud noise point count, s2 is the maximum deviation of noise points, s3 is the average deviation of noise points, s4 is the point cloud hole count, s5 is the maximum hole radius, s6 is the mean hole radius, and ws1, ws2, ws3, ws4, ws5, ws6 are weight coefficients with ws1 + ws2 + ws3 + ws4 + ws5 + ws6 = 1.
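Both normalizations share one weighted form, sum(w_i * x_i) / sum(x_i) over the six parameters. A minimal helper (the function name is a hypothetical; the patent only gives the formula) could be:

```python
def normalized_confidence(values, weights):
    """Weighted confidence normalization used for both cp and sp:
    sum(w_i * x_i) / sum(x_i), with the six weights summing to 1."""
    assert len(values) == len(weights) == 6, "six parameters expected"
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * x for w, x in zip(weights, values)) / sum(values)
```

Called once with the six video parameters and weights it yields cp, and once with the six point cloud parameters and weights it yields sp.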
Further, the algorithm module selection step includes:
comparing the normalized confidence estimate of the video information sent to the deep learning module with the video information confidence scheduling threshold intervals set in the deep learning algorithm library, and scheduling the corresponding deep learning algorithm as the method for this round of video information processing;
comparing the normalized confidence estimate of the point cloud information sent to the deep learning module with the point cloud information confidence scheduling threshold intervals set in the deep learning algorithm library, and scheduling the corresponding deep learning algorithm as the method for this round of point cloud information processing.
Further, the algorithm module selection step includes:
comparing the normalized confidence estimate of the video information sent to the traditional algorithm module with the video information confidence scheduling threshold intervals set in the traditional algorithm library, and scheduling the corresponding traditional algorithm as the method for this round of video information processing;
comparing the normalized confidence estimate of the point cloud information sent to the traditional algorithm module with the point cloud information confidence scheduling threshold intervals set in the traditional algorithm library, and scheduling the corresponding traditional algorithm as the method for this round of point cloud information processing.
Further, the deep learning algorithms in the deep learning algorithm library include the SSD, VGG-16, Faster-RCNN, YOLO, and Overfeat algorithms.
Further, the traditional algorithms in the traditional algorithm library include the fixed-threshold foreground extraction, adaptive-threshold foreground extraction, Gaussian mixture background modeling, edge detection, corner detection, and feature classification algorithms.
Further, the basis for scheduling the corresponding deep learning algorithm in the algorithm module selection step is as follows:
when 0 ≤ T1 < 0.2, the SSD algorithm is scheduled; when 0.2 ≤ T1 < 0.4, the VGG-16 algorithm is scheduled; when 0.4 ≤ T1 < 0.6, the Faster-RCNN algorithm is scheduled; when 0.6 ≤ T1 < 0.8, the YOLO algorithm is scheduled; when 0.8 ≤ T1 < 1, the Overfeat algorithm is scheduled; where T1 is the normalized confidence estimate cp/sp of the video/point cloud information sent to the deep learning module.
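The interval-to-algorithm mapping above is a simple lookup over fixed threshold edges. A sketch of that lookup (the table contents come from the description; the code itself is an illustrative assumption):

```python
import bisect

# Deep learning scheduling table: interval edges and algorithm names
# as listed in the description.
DL_EDGES = [0.2, 0.4, 0.6, 0.8]
DL_ALGOS = ["SSD", "VGG-16", "Faster-RCNN", "YOLO", "Overfeat"]

def schedule_deep_learning(t1):
    """Map a normalized confidence T1 in [0, 1) to a deep learning
    algorithm by threshold interval."""
    if not 0.0 <= t1 < 1.0:
        raise ValueError("T1 must lie in [0, 1)")
    # bisect_right makes each interval closed on the left, open on the
    # right, matching 0.2 <= T1 < 0.4 etc.
    return DL_ALGOS[bisect.bisect_right(DL_EDGES, t1)]
```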
Further, the basis for scheduling the corresponding traditional algorithm in the algorithm module selection step is as follows:
when 0 ≤ T2 < 0.2, the fixed-threshold foreground extraction algorithm and the edge detection algorithm are scheduled; when 0.2 ≤ T2 < 0.4, the adaptive-threshold foreground extraction algorithm and the edge detection algorithm; when 0.4 ≤ T2 < 0.6, the Gaussian mixture background modeling algorithm and the edge detection algorithm; when 0.6 ≤ T2 < 0.8, the Gaussian mixture background modeling algorithm, the corner detection algorithm, and the edge detection algorithm; when 0.8 ≤ T2 < 1, the Gaussian mixture background modeling algorithm and the feature classification algorithm; where T2 is the normalized confidence estimate cp/sp of the video/point cloud information sent to the traditional algorithm module.
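The traditional algorithm table differs from the deep learning one only in that each interval schedules a combination of algorithms. A hedged sketch (names and structure are illustrative):

```python
# Traditional algorithm scheduling table from the description: each
# confidence interval maps to a list of algorithms to run together.
TRAD_TABLE = [
    (0.2, ["fixed-threshold foreground extraction", "edge detection"]),
    (0.4, ["adaptive-threshold foreground extraction", "edge detection"]),
    (0.6, ["Gaussian mixture background modeling", "edge detection"]),
    (0.8, ["Gaussian mixture background modeling", "corner detection",
           "edge detection"]),
    (1.0, ["Gaussian mixture background modeling", "feature classification"]),
]

def schedule_traditional(t2):
    """Map a normalized confidence T2 in [0, 1) to the list of
    traditional algorithms scheduled for that interval."""
    if not 0.0 <= t2 < 1.0:
        raise ValueError("T2 must lie in [0, 1)")
    for upper, algos in TRAD_TABLE:
        if t2 < upper:
            return algos
```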
The invention correspondingly discloses a vehicle environment recognition system combining deep learning with traditional algorithms, including an environment perception module and an environment recognition module. The environment perception module includes a camera and a radar, used respectively to collect the video information and the point cloud information of the vehicle environment.
The environment recognition module includes a camera confidence estimation unit, a radar confidence estimation unit, an algorithm selection unit, a deep learning processing unit, a traditional algorithm processing unit, an information fusion unit, and a target recognition and result output unit.
Camera confidence estimation unit: performs a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast, interference level, sharpness, and brightness, obtaining the confidence of the video information;
Radar confidence estimation unit: performs a quantitative evaluation with a confidence function of the point cloud information parameterized by the point cloud noise point count, the maximum deviation of noise points, the average deviation of noise points, the point cloud hole count, the maximum hole radius, and the mean hole radius, obtaining the confidence of the point cloud information;
Algorithm selection unit: if the confidence of the video/point cloud information is greater than the first/second predetermined threshold, sends the video/point cloud information to the traditional algorithm processing unit for processing; otherwise, sends it to the deep learning processing unit for processing;
Deep learning processing unit: schedules deep learning algorithms to process the video/point cloud information sent to it;
Traditional algorithm processing unit: schedules traditional algorithms to process the video/point cloud information sent to it;
Information fusion unit: performs information fusion on the processed video information and point cloud information;
Target recognition and result output unit: performs target recognition using the fused information, and outputs the recognition result.
By applying different deep learning or traditional algorithms according to the confidence of the radar and video signals, the present invention effectively improves the precision and reliability of radar and video information processing, and thereby the performance of vehicle environment recognition.
Brief description of the drawings
Fig. 1 is a flow diagram of the vehicle environment recognition method combining deep learning with traditional algorithms disclosed in embodiment one.
Fig. 2 is an example of the deep learning algorithm library and its scheduling thresholds in embodiment one.
Fig. 3 is an example of the traditional algorithm library and its scheduling thresholds in embodiment one.
Fig. 4 is a structural diagram of the vehicle environment recognition system combining deep learning with traditional algorithms disclosed in embodiment two.
Detailed description of the embodiments
To make the purpose, technical solution, and effects of the present invention clearer and more definite, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are used only to explain the present invention, not to limit it.
Embodiment one
Referring to Fig. 1, the vehicle environment recognition method combining deep learning with traditional algorithms disclosed in this embodiment includes steps S101 to S105:
Step S101, point cloud and video information acquisition.
In step S101, the point cloud information and the video information of the vehicle environment are collected by radar and camera respectively.
Step S102, video information confidence estimation and point cloud information confidence estimation.
In step S102, the video information confidence estimation is specifically: performing a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast, interference level, sharpness, and brightness, obtaining the confidence of the video information.
In step S102, the point cloud information confidence estimation is specifically: performing a quantitative evaluation with a confidence function of the point cloud information parameterized by the point cloud noise point count, the maximum deviation of noise points, the average deviation of noise points, the point cloud hole count, the maximum hole radius, and the mean hole radius, obtaining the confidence of the point cloud information.
Step S103, algorithm module selection.
In step S103, if the confidence of the video information is greater than the first predetermined threshold, the video information is sent to the traditional algorithm module for processing; otherwise, it is sent to the deep learning module. Similarly, if the confidence of the point cloud information is greater than the second predetermined threshold, the point cloud information is sent to the traditional algorithm module; otherwise, it is sent to the deep learning module. As a reference, the first and second predetermined thresholds may both be 0.6.
Step S104, point cloud and video information fusion.
In step S104, information fusion is performed on the processed video information and point cloud information.
Step S105, target recognition and result output.
In step S105, target recognition is performed using the fused information, and the recognition result is output.
In a further scheme, in step S103 (the algorithm module selection step), the confidence of the video information is normalized as follows:
cp = (wc1*c1 + wc2*c2 + wc3*c3 + wc4*c4 + wc5*c5 + wc6*c6) / (c1 + c2 + c3 + c4 + c5 + c6)
where cp is the normalized confidence estimate of the video information, c1 is the video image mean square error, c2 is the peak signal-to-noise ratio, c3 is the color cast, c4 is the interference level, c5 is the sharpness, c6 is the brightness, and wc1, wc2, wc3, wc4, wc5, wc6 are weight coefficients with wc1 + wc2 + wc3 + wc4 + wc5 + wc6 = 1.
In a further scheme, in step S103 (the algorithm module selection step), the confidence of the point cloud information is normalized as follows:
sp = (ws1*s1 + ws2*s2 + ws3*s3 + ws4*s4 + ws5*s5 + ws6*s6) / (s1 + s2 + s3 + s4 + s5 + s6)
where sp is the normalized confidence estimate of the point cloud information, s1 is the point cloud noise point count, s2 is the maximum deviation of noise points, s3 is the average deviation of noise points, s4 is the point cloud hole count, s5 is the maximum hole radius, s6 is the mean hole radius, and ws1, ws2, ws3, ws4, ws5, ws6 are weight coefficients with ws1 + ws2 + ws3 + ws4 + ws5 + ws6 = 1.
In a further scheme, a deep learning algorithm library and a traditional algorithm library are provided in this embodiment. The deep learning algorithms in the deep learning algorithm library include the SSD, VGG-16, Faster-RCNN, YOLO, and Overfeat algorithms; the traditional algorithms in the traditional algorithm library include the fixed-threshold foreground extraction, adaptive-threshold foreground extraction, Gaussian mixture background modeling, edge detection, corner detection, and feature classification algorithms. In other embodiments, other deep learning and traditional algorithms may be selected as circumstances require.
Thus, in a further scheme, step S103 (the algorithm module selection step) further includes:
comparing the normalized confidence estimate of the video information sent to the deep learning module with the video information confidence scheduling threshold intervals set in the deep learning algorithm library, and scheduling the corresponding deep learning algorithm as the method for this round of video information processing;
comparing the normalized confidence estimate of the point cloud information sent to the deep learning module with the point cloud information confidence scheduling threshold intervals set in the deep learning algorithm library, and scheduling the corresponding deep learning algorithm as the method for this round of point cloud information processing.
Referring to Fig. 2, the basis for scheduling the corresponding deep learning algorithm is preferably as follows:
when 0 ≤ T1 < 0.2, the SSD algorithm is scheduled; when 0.2 ≤ T1 < 0.4, the VGG-16 algorithm is scheduled; when 0.4 ≤ T1 < 0.6, the Faster-RCNN algorithm is scheduled; when 0.6 ≤ T1 < 0.8, the YOLO algorithm is scheduled; when 0.8 ≤ T1 < 1, the Overfeat algorithm is scheduled; where T1 is the normalized confidence estimate cp/sp of the video/point cloud information sent to the deep learning module.
Correspondingly, in a further scheme, step S103 (the algorithm module selection step) further includes:
comparing the normalized confidence estimate of the video information sent to the traditional algorithm module with the video information confidence scheduling threshold intervals set in the traditional algorithm library, and scheduling the corresponding traditional algorithm as the method for this round of video information processing;
comparing the normalized confidence estimate of the point cloud information sent to the traditional algorithm module with the point cloud information confidence scheduling threshold intervals set in the traditional algorithm library, and scheduling the corresponding traditional algorithm as the method for this round of point cloud information processing.
Referring to Fig. 3, the basis for scheduling the corresponding traditional algorithm is preferably as follows:
when 0 ≤ T2 < 0.2, the fixed-threshold foreground extraction algorithm and the edge detection algorithm are scheduled; when 0.2 ≤ T2 < 0.4, the adaptive-threshold foreground extraction algorithm and the edge detection algorithm; when 0.4 ≤ T2 < 0.6, the Gaussian mixture background modeling algorithm and the edge detection algorithm; when 0.6 ≤ T2 < 0.8, the Gaussian mixture background modeling algorithm, the corner detection algorithm, and the edge detection algorithm; when 0.8 ≤ T2 < 1, the Gaussian mixture background modeling algorithm and the feature classification algorithm; where T2 is the normalized confidence estimate cp/sp of the video/point cloud information sent to the traditional algorithm module.
Embodiment two
Referring to Fig. 4, the vehicle environment recognition system combining deep learning with traditional algorithms disclosed in embodiment two includes an environment perception module 100 and an environment recognition module 200. The environment perception module 100 includes a camera 102 and a radar 104, used respectively to collect the video information and the point cloud information of the vehicle environment.
The environment recognition module 200 includes a camera confidence estimation unit 202, a radar confidence estimation unit 204, an algorithm selection unit 206, a deep learning processing unit 208, a traditional algorithm processing unit 210, an information fusion unit 212, and a target recognition and result output unit 214. Each unit performs the following:
Camera confidence estimation unit 202: performs a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast, interference level, sharpness, and brightness, obtaining the confidence of the video information.
Radar confidence estimation unit 204: performs a quantitative evaluation with a confidence function of the point cloud information parameterized by the point cloud noise point count, the maximum deviation of noise points, the average deviation of noise points, the point cloud hole count, the maximum hole radius, and the mean hole radius, obtaining the confidence of the point cloud information.
Algorithm selection unit 206: if the confidence of the video/point cloud information is greater than the first/second predetermined threshold, the video/point cloud information is sent to the traditional algorithm processing unit 210 for processing; otherwise, it is sent to the deep learning processing unit 208.
Deep learning processing unit 208: schedules deep learning algorithms to process the video/point cloud information sent to it.
Traditional algorithm processing unit 210: schedules traditional algorithms to process the video/point cloud information sent to it.
Information fusion unit 212: performs information fusion on the processed video information and point cloud information.
Target recognition and result output unit 214: performs target recognition using the fused information, and outputs the recognition result.
For the operation principle of embodiment two, refer to embodiment one; it is not repeated here.
By applying different deep learning or traditional algorithms according to the confidence of the radar and video signals, embodiments one and two above effectively improve the precision and reliability of radar and video information processing, and the performance of vehicle environment recognition is effectively improved.
It should be understood that those of ordinary skill in the art can make improvements or transformations according to the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.
Claims (10)
1. A vehicle environment recognition method combining deep learning with traditional algorithms, characterized by including the following steps:
Point cloud and video information acquisition: collecting the point cloud information and the video information of the vehicle environment by radar and camera respectively;
Video information confidence estimation: performing a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast, interference level, sharpness, and brightness, obtaining the confidence of the video information;
Point cloud information confidence estimation: performing a quantitative evaluation with a confidence function of the point cloud information parameterized by the point cloud noise point count, the maximum deviation of noise points, the average deviation of noise points, the point cloud hole count, the maximum hole radius, and the mean hole radius, obtaining the confidence of the point cloud information;
Algorithm module selection: if the confidence of the video/point cloud information is greater than a first/second predetermined threshold, sending the video/point cloud information to the traditional algorithm module for processing; otherwise, sending it to the deep learning module for processing;
Point cloud and video information fusion: performing information fusion on the processed video information and point cloud information;
Target recognition and result output: performing target recognition using the fused information, and outputting the recognition result.
2. The vehicle environment recognition method combining deep learning with traditional algorithms according to claim 1, characterized in that, in the algorithm module selection step, the confidence of the video information is normalized as follows:
cp = (wc1*c1 + wc2*c2 + wc3*c3 + wc4*c4 + wc5*c5 + wc6*c6) / (c1 + c2 + c3 + c4 + c5 + c6)
where cp is the normalized confidence estimate of the video information, c1 is the video image mean square error, c2 is the peak signal-to-noise ratio, c3 is the color cast, c4 is the interference level, c5 is the sharpness, c6 is the brightness, and wc1, wc2, wc3, wc4, wc5, wc6 are weight coefficients with wc1 + wc2 + wc3 + wc4 + wc5 + wc6 = 1.
3. The vehicle environment recognition method combining deep learning with traditional algorithms according to claim 2, characterized in that, in the algorithm module selection step, the confidence of the point cloud information is normalized as follows:
sp = (ws1*s1 + ws2*s2 + ws3*s3 + ws4*s4 + ws5*s5 + ws6*s6) / (s1 + s2 + s3 + s4 + s5 + s6)
where sp is the normalized confidence estimate of the point cloud information, s1 is the point cloud noise point count, s2 is the maximum deviation of noise points, s3 is the average deviation of noise points, s4 is the point cloud hole count, s5 is the maximum hole radius, s6 is the mean hole radius, and ws1, ws2, ws3, ws4, ws5, ws6 are weight coefficients with ws1 + ws2 + ws3 + ws4 + ws5 + ws6 = 1.
4. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 3, characterized in that the algorithm module selection step includes:
comparing the normalized confidence estimate of the video information to be fed into the deep learning module against the video information confidence scheduling threshold intervals set in the deep learning algorithm library, and dispatching the corresponding deep learning algorithm as the method for processing this video information;
comparing the normalized confidence estimate of the point cloud information to be fed into the deep learning module against the point cloud information confidence scheduling threshold intervals set in the deep learning algorithm library, and dispatching the corresponding deep learning algorithm as the method for processing this point cloud information.
5. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 4, characterized in that the algorithm module selection step includes:
comparing the normalized confidence estimate of the video information to be fed into the traditional algorithm module against the video information confidence scheduling threshold intervals set in the traditional algorithm library, and dispatching the corresponding traditional algorithm as the method for processing this video information;
comparing the normalized confidence estimate of the point cloud information to be fed into the traditional algorithm module against the point cloud information confidence scheduling threshold intervals set in the traditional algorithm library, and dispatching the corresponding traditional algorithm as the method for processing this point cloud information.
6. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 5, characterized in that the deep learning algorithms in the deep learning algorithm library include the SSD algorithm, the VGG-16 algorithm, the Faster-RCNN algorithm, the YOLO algorithm and the Overfeat algorithm.
7. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 6, characterized in that the traditional algorithms in the traditional algorithm library include a fixed-threshold foreground extraction algorithm, an adaptive-threshold foreground extraction algorithm, a Gaussian mixture background modeling algorithm, an edge detection algorithm, a corner detection algorithm and a feature classification algorithm.
8. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 7, characterized in that the corresponding deep learning algorithm is dispatched in the algorithm module selection step as follows:
when 0 ≤ T1 < 0.2, the SSD algorithm is dispatched; when 0.2 ≤ T1 < 0.4, the VGG-16 algorithm is dispatched; when 0.4 ≤ T1 < 0.6, the Faster-RCNN algorithm is dispatched; when 0.6 ≤ T1 < 0.8, the YOLO algorithm is dispatched; when 0.8 ≤ T1 < 1, the Overfeat algorithm is dispatched; where T1 is the normalized confidence estimate cp/sp of the video information/point cloud information fed into the deep learning module.
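A minimal sketch of this interval-based dispatch (illustrative only; the interval bounds are taken from claim 8, while the function and variable names are my own):

```python
import bisect

# upper bounds of the half-open T1 intervals of claim 8, in ascending order
_BOUNDS = [0.2, 0.4, 0.6, 0.8]
_ALGORITHMS = ["SSD", "VGG-16", "Faster-RCNN", "YOLO", "Overfeat"]

def dispatch_deep_learning(t1):
    """Return the deep learning algorithm scheduled for confidence T1 in [0, 1)."""
    if not 0.0 <= t1 < 1.0:
        raise ValueError("T1 must lie in [0, 1)")
    # bisect_right maps t1 to the index of its interval [lo, hi)
    return _ALGORITHMS[bisect.bisect_right(_BOUNDS, t1)]
```

For example, `dispatch_deep_learning(0.45)` falls into the interval [0.4, 0.6) and returns `"Faster-RCNN"`; `bisect_right` keeps the boundaries half-open exactly as the claim's "0.4 ≤ T1 < 0.6" notation requires.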
9. The vehicle environment recognition method based on the combination of deep learning and traditional algorithms according to claim 8, characterized in that the corresponding traditional algorithm is dispatched in the algorithm module selection step as follows:
when 0 ≤ T2 < 0.2, the fixed-threshold foreground extraction algorithm and the edge detection algorithm are dispatched; when 0.2 ≤ T2 < 0.4, the adaptive-threshold foreground extraction algorithm and the edge detection algorithm are dispatched; when 0.4 ≤ T2 < 0.6, the Gaussian mixture background modeling algorithm and the edge detection algorithm are dispatched; when 0.6 ≤ T2 < 0.8, the Gaussian mixture background modeling algorithm, the corner detection algorithm and the edge detection algorithm are dispatched; when 0.8 ≤ T2 < 1, the Gaussian mixture background modeling algorithm and the feature classification algorithm are dispatched; where T2 is the normalized confidence estimate cp/sp of the video information/point cloud information fed into the traditional algorithm module.
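The traditional-algorithm dispatch can be sketched the same way; the algorithm combinations per interval follow claim 9, while the lookup-table layout and names are illustrative:

```python
# (interval upper bound, scheduled algorithm combination) per claim 9, ascending T2
_T2_TABLE = [
    (0.2, ("fixed-threshold foreground extraction", "edge detection")),
    (0.4, ("adaptive-threshold foreground extraction", "edge detection")),
    (0.6, ("Gaussian mixture background modeling", "edge detection")),
    (0.8, ("Gaussian mixture background modeling", "corner detection", "edge detection")),
    (1.0, ("Gaussian mixture background modeling", "feature classification")),
]

def dispatch_traditional(t2):
    """Return the traditional algorithms scheduled for confidence T2 in [0, 1)."""
    if t2 < 0.0:
        raise ValueError("T2 must lie in [0, 1)")
    for upper, algorithms in _T2_TABLE:
        if t2 < upper:  # intervals are half-open: [lower, upper)
            return algorithms
    raise ValueError("T2 must lie in [0, 1)")
```

Note that, unlike the deep learning dispatch of claim 8, a single T2 interval here maps to a *combination* of algorithms, hence the tuples.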
10. A vehicle environment recognition system based on the combination of deep learning and traditional algorithms, characterized by comprising an environment perception module and an environment recognition module; wherein the environment perception module includes a camera and a radar, which respectively collect the video information and the point cloud information of the vehicle environment;
the environment recognition module includes a camera confidence estimation unit, a radar confidence estimation unit, an algorithm selection unit, a deep learning processing unit, a traditional algorithm processing unit, an information fusion unit, and a target recognition and result output unit;
Camera confidence estimation unit: performs a quantitative evaluation with a confidence function of the video information parameterized by the video image mean square error, peak signal-to-noise ratio, color cast value, degree of disturbance, sharpness and occlusion value, obtaining the confidence of the video information;
Radar confidence estimation unit: performs a quantitative evaluation with a confidence function of the point cloud information parameterized by the number of point cloud noise points, the maximum deviation of the point cloud noise points, the average deviation of the point cloud noise points, the number of point cloud holes, the maximum point cloud hole radius and the average point cloud hole radius, obtaining the confidence of the point cloud information;
Algorithm selection unit: judges whether the confidence of the video information/point cloud information is greater than a first/second predetermined threshold; if so, the video information/point cloud information is sent to the traditional algorithm processing unit for processing; otherwise, it is sent to the deep learning processing unit for processing;
Deep learning processing unit: dispatches a deep learning algorithm to process the video information/point cloud information fed into it;
Traditional algorithm processing unit: dispatches a traditional algorithm to process the video information/point cloud information fed into it;
Information fusion unit: fuses the processed video information and point cloud information;
Target recognition and result output unit: performs target recognition using the fused information, and outputs the recognition result.
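The routing rule of the algorithm selection unit (high-confidence data to the traditional algorithm unit, low-confidence data to the deep learning unit) reduces to a single comparison; the threshold values below are hypothetical, since the claim only names "first" and "second" predetermined thresholds:

```python
def route(confidence, threshold):
    """Claim 10 routing: data whose confidence exceeds the predetermined
    threshold goes to the traditional algorithm processing unit, otherwise
    to the deep learning processing unit."""
    return "traditional" if confidence > threshold else "deep_learning"

FIRST_THRESHOLD = 0.6   # hypothetical first threshold, for video information
SECOND_THRESHOLD = 0.5  # hypothetical second threshold, for point cloud information

video_unit = route(0.8, FIRST_THRESHOLD)   # high-confidence video → traditional
cloud_unit = route(0.3, SECOND_THRESHOLD)  # low-confidence point cloud → deep learning
```

The design rationale implied by the claims: clean, high-confidence sensor data is cheap to handle with classical methods, while degraded data is handed to deep networks that are more robust to noise.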
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711270959.7A CN108021891B (en) | 2017-12-05 | 2017-12-05 | Vehicle environment identification method and system based on combination of deep learning and traditional algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108021891A true CN108021891A (en) | 2018-05-11 |
CN108021891B CN108021891B (en) | 2020-04-14 |
Family
ID=62078637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711270959.7A Active CN108021891B (en) | 2017-12-05 | 2017-12-05 | Vehicle environment identification method and system based on combination of deep learning and traditional algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021891B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111906782B (en) * | 2020-07-08 | 2021-07-13 | 西安交通大学 | Intelligent robot grabbing method based on three-dimensional vision |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825173A (en) * | 2016-03-11 | 2016-08-03 | 福州华鹰重工机械有限公司 | Universal road and lane detection system and method |
US20170038466A1 (en) * | 2013-09-10 | 2017-02-09 | Scania Cv Ab | Detection of an object by use of a 3d camera and a radar |
CN106650647A (en) * | 2016-12-09 | 2017-05-10 | 开易(深圳)科技有限公司 | Vehicle detection method and system based on cascading of traditional algorithm and deep learning algorithm |
CN106981201A (en) * | 2017-05-11 | 2017-07-25 | 南宁市正祥科技有限公司 | vehicle identification method under complex environment |
Non-Patent Citations (3)
Title |
---|
SINAN HASIRLIOGLU et al.: "Test Methodology for Rain Influence on Automotive Surround Sensors", 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC) * |
XU Wentao: "Vehicle anti-collision and obstacle avoidance technology based on FPGA", China Master's Theses Full-text Database * |
QI Ke et al.: "Steganography of 3D point cloud models based on octree spatial partitioning", Computer Engineering * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109263649A (en) * | 2018-08-21 | 2019-01-25 | 北京汽车股份有限公司 | Object identification method and object identification system under vehicle and its automatic driving mode |
CN109934230A (en) * | 2018-09-05 | 2019-06-25 | 浙江大学 | A kind of radar points cloud dividing method of view-based access control model auxiliary |
CN109151285A (en) * | 2018-10-11 | 2019-01-04 | 西安神洲雷达科技有限公司 | A kind of photoelectricity patrolling control system and method, photoelectricity logging |
CN110008843A (en) * | 2019-03-11 | 2019-07-12 | 武汉环宇智行科技有限公司 | Combine cognitive approach and system based on the vehicle target of cloud and image data |
CN110008843B (en) * | 2019-03-11 | 2021-01-05 | 武汉环宇智行科技有限公司 | Vehicle target joint cognition method and system based on point cloud and image data |
CN111126153A (en) * | 2019-11-25 | 2020-05-08 | 北京锐安科技有限公司 | Safety monitoring method, system, server and storage medium based on deep learning |
CN112926365A (en) * | 2019-12-06 | 2021-06-08 | 广州汽车集团股份有限公司 | Lane line detection method and system |
CN114994046A (en) * | 2022-04-19 | 2022-09-02 | 深圳格芯集成电路装备有限公司 | Defect detection system based on deep learning model |
Also Published As
Publication number | Publication date |
---|---|
CN108021891B (en) | 2020-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108021891A (en) | Vehicle environment identification method and system based on combination of deep learning and traditional algorithm | |
JP6565967B2 (en) | Road obstacle detection device, method, and program | |
CN109087510B (en) | Traffic monitoring method and device | |
US9245188B2 (en) | Lane detection system and method | |
CN102289660B (en) | Method for detecting illegal driving behavior based on hand gesture tracking | |
CN102542289B (en) | Pedestrian volume statistical method based on plurality of Gaussian counting models | |
CN109061600B (en) | Target identification method based on millimeter wave radar data | |
US9429650B2 (en) | Fusion of obstacle detection using radar and camera | |
CN105975913B (en) | Road network extraction method based on adaptive cluster learning | |
CN109460709A (en) | The method of RTG dysopia analyte detection based on the fusion of RGB and D information | |
CN105460009B (en) | Automobile control method and device | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN114299417A (en) | Multi-target tracking method based on radar-vision fusion | |
CN107578012B (en) | Driving assistance system for selecting sensitive area based on clustering algorithm | |
CN103383733A (en) | Lane video detection method based on half-machine study | |
CN104598908A (en) | Method for recognizing diseases of crop leaves | |
CN103455820A (en) | Method and system for detecting and tracking vehicle based on machine vision technology | |
CN110008932A (en) | A kind of vehicle violation crimping detection method based on computer vision | |
CN101701818A (en) | Method for detecting long-distance barrier | |
CN106326822A (en) | Method and device for detecting lane line | |
CN111222441B (en) | Point cloud target detection and blind area target detection method and system based on vehicle-road cooperation | |
CN104183142A (en) | Traffic flow statistics method based on image visual processing technology | |
CN110490150A (en) | A kind of automatic auditing system of picture violating the regulations and method based on vehicle retrieval | |
CN111274886A (en) | Deep learning-based pedestrian red light violation analysis method and system | |
CN111179220B (en) | Lane mark line quality detection method, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||