CN108460323A - Rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information - Google Patents

Rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information

Info

Publication number
CN108460323A
CN108460323A (application CN201711478171.5A)
Authority
CN
China
Prior art keywords
detection
target
rear view
model
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711478171.5A
Other languages
Chinese (zh)
Other versions
CN108460323B (en)
Inventor
王小刚 (Wang Xiaogang)
倪如金 (Ni Rujin)
卢金波 (Lu Jinbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Desay SV Automotive Co Ltd
Original Assignee
Huizhou Desay SV Automotive Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Desay SV Automotive Co Ltd
Priority to CN201711478171.5A (granted as CN108460323B)
Publication of CN108460323A
Application granted
Publication of CN108460323B
Active legal status (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The rear-view blind zone vehicle detection method of the present application fuses vehicle-mounted navigation information, providing a solution that merges navigation data with the vehicle detection algorithm. Based on the scene and weather information supplied by the navigation system, the detection algorithm adaptively selects different model combinations and parameters for different environments, so that it adapts better to complex and changeable conditions, achieves higher detection accuracy and efficiency, and is well suited to the automotive electronics industry.

Description

Rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information
Technical field
This application relates to blind zone vehicle detection methods, and in particular to a rear-view blind zone vehicle detection method that fuses vehicle-mounted navigation information.
Background technology
With the rapid growth of car ownership, driving safety has attracted wide public concern, and people increasingly look to technology for the safety and convenience it brings. Automotive ADAS systems have therefore been studied in depth and widely applied in the automotive electronics industry, developing into a core technology of automotive electronics. Machine vision can clearly capture the information around the vehicle body and has good analytic ability for object color, texture, and similar cues, so it can effectively recognize vehicles, pedestrians, traffic police, and other objects around the vehicle. Applying an intelligent vision module to the automobile is therefore a highly competitive solution for current driver assistance systems and has huge market prospects. However, complex scenes and weather changes increase the difficulty of the vision algorithm and degrade its overall performance.
Invention content
The present invention overcomes at least one defect of the prior art described above and provides a rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information.
The present invention aims to solve the above technical problem at least to some extent.
The primary purpose of the present invention is to improve detection accuracy and efficiency.
To solve the above technical problem, the technical scheme of the present invention is as follows:
A rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information comprises the following steps:
S1. Detection starts.
S2. Receive the navigation information and the preprocessed rear-view image, and initialize the models.
S3. Perform model selection and combination.
S4. Run the primary pixel-level detection with the combined model I and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S5. The model I is trained on pixel-level features.
S5. Run the intermediate edge-level detection with the combined model II and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S6. The model II is trained on edge features.
S6. Run the advanced structured detection with the combined model III and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S7. The model III is trained on combined edge features.
S7. Fuse the detection information with the navigation information.
S8. Detection ends.
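Steps S1-S8 describe a three-stage cascade with early exit. The control flow can be sketched as below; the detector functions, region dictionaries, and thresholds are illustrative assumptions, not the patent's trained models I-III:

```python
def cascade_detect(image, models):
    """Run pixel-, edge-, and structure-level detectors in sequence.

    Each stage narrows the candidate set; an empty result at any
    stage ends the detection early (the "no target" branch)."""
    candidates = models["pixel"](image)          # S4: coarse candidate regions
    if not candidates:
        return []                                # end this detection (no target)
    candidates = models["edge"](candidates)      # S5: edge-gradient refinement
    if not candidates:
        return []
    return models["structure"](candidates)       # S6: structured confirmation


# Toy detectors standing in for the trained models I, II, III.
models = {
    "pixel": lambda img: [r for r in img if r["brightness"] > 100],
    "edge": lambda cands: [r for r in cands if r["gradient"] > 0.5],
    "structure": lambda cands: [r for r in cands if r["score"] > 0.8],
}

regions = [
    {"brightness": 150, "gradient": 0.9, "score": 0.95},  # survives all stages
    {"brightness": 120, "gradient": 0.2, "score": 0.10},  # rejected at S5
    {"brightness": 50,  "gradient": 0.9, "score": 0.90},  # rejected at S4
]
detections = cascade_detect(regions, models)
```

The early exits mirror the "end this detection if no target" branches of S4-S6, so later, more expensive stages only run on data that passed the cheaper ones.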
Further, in step S2 the preprocessing of the rear-view image comprises the following steps:
S21. Input the original rear-view image data, which contains left-side and right-side data.
S22. Generate the region of interest in the image according to the calibration parameters and the actual blind zone requirements, and compute the distance and azimuth between each image data point and the camera.
S23. Execute the adaptive distortion correction algorithm: combine the calibration parameters, the image data points, and the distance and azimuth to the camera to complete the mapping of each point in the region of interest.
S24. Execute the view transformation algorithm: apply the image transformation matrix so that the observed region-of-interest data is in the best state for detection.
S25. Obtain the final image to be detected and feed it into the detection module for detection.
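Step S23's point-wise mapping can be sketched with a simple radial distortion model; the coefficient `k1`, the normalisation, and the nearest-neighbour resampling are illustrative stand-ins for the patent's calibrated adaptive correction:

```python
import numpy as np

def build_undistort_map(h, w, k1, cx, cy):
    """Build the per-pixel mapping of step S23 for a simple radial
    model x_d = x_u * (1 + k1 * r^2). A real system would use the
    camera's calibrated parameters; k1, cx, cy here are assumptions."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = xs - cx, ys - cy
    r2 = (xn ** 2 + yn ** 2) / max(cx, cy) ** 2   # normalised radius squared
    scale = 1.0 + k1 * r2
    return cx + xn * scale, cy + yn * scale        # (map_x, map_y)

def remap_nearest(img, map_x, map_y):
    """Sample the source image at the mapped coordinates
    (nearest-neighbour interpolation, clipped at the borders)."""
    h, w = img.shape
    xi = np.clip(np.round(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, h - 1)
    return img[yi, xi]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mx, my = build_undistort_map(4, 4, k1=0.0, cx=1.5, cy=1.5)
out = remap_nearest(img, mx, my)   # k1 = 0 makes the mapping an identity
```

The view transformation of S24 would be a second mapping of the same kind (a homography instead of a radial model), composed with this one so the region of interest is resampled only once.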
Further, in step S4 the purpose of the primary pixel-level detection is to filter out the candidate regions where vehicles may be; the selected feature is the luminance information of the Y channel. The detection comprises the following steps:
S41. Take the rear-view region-of-interest image block from the preprocessed rear-view image as the model input.
S42. Build the pyramid levels to detect targets at different sizes, traversing from top to bottom and from left to right to search possible target locations.
S43. For each pixel location (x, y, width, height) in the i-th pyramid level, compare it against the model data and obtain the corresponding score.
S44. Combine the scores at every position across all pyramid levels and normalize them to 0-255 to obtain a probability distribution map.
S45. Filter the probability distribution map and divide it into connected regions, in order to smooth the image, remove noise, and obtain the sub-regions.
S46. Expand each candidate sub-region appropriately, rank the candidates by score, and decide the detection priority of the candidate regions.
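The pyramid search and score handling of steps S42, S44, and S46 can be sketched as follows; the decimation factor, the array values, and the ranking key are illustrative assumptions:

```python
import numpy as np

def build_pyramid(img, levels):
    """Step S42: downsample by 2 per level so targets of different
    sizes are searched at a fixed model size (plain decimation is an
    illustrative stand-in for proper smoothing plus resampling)."""
    return [img[::2 ** i, ::2 ** i] for i in range(levels)]

def normalize_scores(score_maps):
    """Step S44: pool the per-level score maps and normalise them to
    the 0-255 range, giving a probability-style distribution map."""
    flat = np.concatenate([s.ravel() for s in score_maps])
    lo, hi = float(flat.min()), float(flat.max())
    if hi == lo:
        return [np.zeros(s.shape, dtype=np.uint8) for s in score_maps]
    return [((s - lo) / (hi - lo) * 255).astype(np.uint8) for s in score_maps]

def rank_candidates(regions):
    """Step S46: order candidate sub-regions by score, highest first,
    to set the detection priority for the next cascade stage."""
    return sorted(regions, key=lambda r: r["score"], reverse=True)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
pyramid = build_pyramid(img, 3)                    # shapes 8x8, 4x4, 2x2
maps = normalize_scores([np.array([[10.0, 20.0], [30.0, 40.0]]),
                         np.array([[15.0, 25.0]])])
order = rank_candidates([{"id": "a", "score": 40}, {"id": "b", "score": 15}])
```

Normalising across all levels jointly (rather than per level) keeps scores comparable when candidates from different pyramid levels are ranked together.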
Further, in step S5 the data processed by the intermediate edge-level detection are the candidate region boxes output by the primary pixel-level detection; the selected feature is edge gradient information, and a decision is made to obtain the sub-block where each target lies.
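A hedged sketch of the edge-gradient feature used at this level, with central differences standing in for whatever gradient operator the trained model II actually applies:

```python
import numpy as np

def edge_gradient(block):
    """Compute a simple edge-gradient magnitude map for a candidate
    region box (step S5's selected feature), using central differences."""
    gx = np.zeros_like(block, dtype=np.float64)
    gy = np.zeros_like(block, dtype=np.float64)
    gx[:, 1:-1] = (block[:, 2:].astype(np.float64) - block[:, :-2]) / 2.0
    gy[1:-1, :] = (block[2:, :].astype(np.float64) - block[:-2, :]) / 2.0
    return np.hypot(gx, gy)

# A vertical step edge: strong horizontal gradient in the middle columns,
# as a vehicle's side boundary would produce in a candidate box.
block = np.array([[0, 0, 255, 255]] * 4, dtype=np.uint8)
mag = edge_gradient(block)
```

The decision of S5 would then threshold or score this magnitude map inside each candidate box to keep only sub-blocks with vehicle-like edge structure.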
Further, in step S6 the advanced structured detection confirms each sub-block on the basis of the intermediate edge-level output; while ensuring that targets are effectively detected, it removes false alarms. The selected feature is a structured feature, specifically a weighted combination of the intermediate edge-level features.
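The weighted combination of edge features can be sketched as below; the feature names, weights, and threshold are illustrative assumptions, not the patent's trained values:

```python
def combine_edge_features(features, weights):
    """Step S6's structured feature: a weighted combination of the
    edge features extracted at the intermediate level."""
    return sum(weights[name] * value for name, value in features.items())

def confirm_target(sub_block, weights, threshold):
    """Keep a sub-block only if its combined structured score clears
    the threshold; this is the false-alarm removal of step S6."""
    return combine_edge_features(sub_block, weights) >= threshold

# Hypothetical feature weights and two sub-blocks from the edge stage.
weights = {"horizontal_edges": 0.5, "vertical_edges": 0.3, "symmetry": 0.2}
vehicle = {"horizontal_edges": 0.9, "vertical_edges": 0.8, "symmetry": 0.7}
shadow  = {"horizontal_edges": 0.8, "vertical_edges": 0.1, "symmetry": 0.1}

is_vehicle = confirm_target(vehicle, weights, 0.6)   # strong on all features
is_shadow = confirm_target(shadow, weights, 0.6)     # edge-like but unstructured
```

A shadow or road marking may score well on a single edge feature, but the weighted combination demands a vehicle-like pattern across several features at once, which is what removes the false alarm.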
Further, in step S3 model selection chooses the optimal model and the matching parameters according to the environmental information in the navigation information.
Further, the navigation information includes road information, scene information, and weather information.
Further, the navigation information includes data concerning highways, urban roads, frontlighting, backlighting, and tunnels.
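Step S3's environment-driven selection amounts to a lookup from navigation context to a model and parameter set. A minimal sketch with a hypothetical table (the patent does not publish its actual models or parameter values):

```python
# Hypothetical model/parameter table keyed by the navigation-provided
# environment; names and thresholds are illustrative assumptions.
MODEL_TABLE = {
    ("highway", "backlight"):  {"model": "I-highway",  "edge_threshold": 0.7},
    ("highway", "frontlight"): {"model": "I-highway",  "edge_threshold": 0.5},
    ("urban",   "frontlight"): {"model": "I-urban",    "edge_threshold": 0.5},
    ("tunnel",  None):         {"model": "I-lowlight", "edge_threshold": 0.8},
}
DEFAULT = {"model": "I-generic", "edge_threshold": 0.6}

def select_model(road, lighting):
    """Step S3: pick the model combination and parameters that best match
    the road type and lighting reported by the navigation system; fall
    back to a generic model for unknown environments."""
    return MODEL_TABLE.get((road, lighting), DEFAULT)

chosen = select_model("tunnel", None)
```

The fallback entry matters in practice: the navigation system may report a scene the table does not cover, and detection must still run with sane defaults.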
Further, the application provides a detection system that uses the aforementioned rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information, comprising a navigation system, a model selection module, a model combination module, a detection module, and a fusion module.
Compared with the prior art, the advantageous effects of the technical scheme of the present invention are: the present invention proposes a fused solution of vehicle-mounted navigation and rear-view blind zone vehicle detection, selecting different models and parameters for different environmental factors and combining models at each level, which is more conducive to target detection, improves recall, and reduces false alarms. Detection proceeds level by level: the primary level provides pixel-level feature detection; its candidate regions feed the intermediate edge-level feature detection; the candidate regions obtained there are applied to the high-level structured-feature detection; and the output information is fused with the navigation information to obtain the final result. This method detects targets more effectively.
Description of the drawings
Fig. 1 shows the structure of the detection system.
Fig. 2 is a flow diagram of the rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information.
Fig. 3 is a schematic diagram of the rear-view image preprocessing process.
Fig. 4 is a schematic diagram of the primary pixel-level detection process.
The accompanying drawings are for illustration only and should not be construed as limiting this patent. To better illustrate the embodiments, some components in the drawings are omitted, enlarged, or reduced and do not represent the size of the actual product; for those skilled in the art, the omission of some known structures and their description in the drawings is understandable. The same or similar labels correspond to the same or similar components; terms describing positional relationships in the drawings are for illustration only and should not be understood as limiting this patent.
Specific implementation mode
The technical scheme of the present invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1
Referring to the attached drawings, the rear-view blind zone vehicle detection method of the present application, fusing vehicle-mounted navigation information, comprises the following steps:
S1. Detection starts.
S2. Receive the navigation information and the preprocessed rear-view image, and initialize the models.
S3. Perform model selection and combination.
S4. Run the primary pixel-level detection with the combined model I and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S5. The model I is trained on pixel-level features.
S5. Run the intermediate edge-level detection with the combined model II and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S6. The model II is trained on edge features.
S6. Run the advanced structured detection with the combined model III and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S7. The model III is trained on combined edge features.
S7. Fuse the detection information with the navigation information.
S8. Detection ends.
Embodiment 2
This embodiment is similar to embodiment 1. Because the lens used to acquire the rear-view image is a fisheye lens, its advantage is a very large field of view, which captures more data; its drawback, however, is obvious: the image shows large distortion. Especially in the distant part of the rear-view blind zone, vehicles appear heavily distorted and twisted in the image, which hinders target detection and recognition, so correction and transformation are needed. Therefore, further, in step S2 the preprocessing of the rear-view image comprises the following steps:
S21. Input the original rear-view image data, which contains left-side and right-side data.
S22. Generate the region of interest in the image according to the calibration parameters and the actual blind zone requirements, and compute the distance and azimuth between each image data point and the camera.
S23. Execute the adaptive distortion correction algorithm: combine the calibration parameters, the image data points, and the distance and azimuth to the camera to complete the mapping of each point in the region of interest.
S24. Execute the view transformation algorithm: apply the image transformation matrix so that the observed region-of-interest data is in the best state for detection.
S25. Obtain the final image to be detected and feed it into the detection module for detection.
Embodiment 3
This embodiment is similar to embodiments 1-2. Further, in step S4 the purpose of the primary pixel-level detection is to filter out the candidate regions where vehicles may be; the selected feature is the luminance information of the Y channel. The detection comprises the following steps:
S41. Take the rear-view region-of-interest image block from the preprocessed rear-view image as the model input.
S42. Build the pyramid levels to detect targets at different sizes, traversing from top to bottom and from left to right to search possible target locations.
S43. For each pixel location (x, y, width, height) in the i-th pyramid level, compare it against the model data and obtain the corresponding score.
S44. Combine the scores at every position across all pyramid levels and normalize them to 0-255 to obtain a probability distribution map.
S45. Filter the probability distribution map and divide it into connected regions, in order to smooth the image, remove noise, and obtain the sub-regions.
S46. Expand each candidate sub-region appropriately, rank the candidates by score, and decide the detection priority of the candidate regions.
In step S5, the data processed by the intermediate edge-level detection are the candidate region boxes output by the primary pixel-level detection; the selected feature is edge gradient information, and a decision is made to obtain the sub-block where each target lies.
In step S6, the advanced structured detection confirms each sub-block on the basis of the intermediate edge-level output; while ensuring that targets are effectively detected, it removes false alarms. The selected feature is a structured feature, specifically a weighted combination of the intermediate edge-level features.
The primary pixel-level detection is designed to acquire candidate regions; the chosen feature is the simple pixel value, which quickly and easily filters out non-target regions while retaining valid targets. The intermediate edge-level detection uses edge gradient features to locate the position of the target. The advanced structured detection is designed to remove false targets, using a weighted combination of the features extracted by the intermediate edge-level detection.
Embodiment 4
This embodiment is similar to embodiments 1-3. Further, in step S3 model selection chooses the optimal model and the matching parameters according to the environmental information in the navigation information.
The navigation information includes road information, scene information, and weather information.
The navigation information includes data concerning highways, urban roads, frontlighting, backlighting, and tunnels.
Embodiment 5
The application provides a detection system that uses the aforementioned rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information, comprising a navigation system, a model selection module, a model combination module, a detection module, and a fusion module.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the invention and are not a limitation on its implementation. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, and improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the present invention.

Claims (8)

1. A rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information, characterized by comprising the following steps:
S1. Detection starts.
S2. Receive the navigation information and the preprocessed rear-view image, and initialize the models.
S3. Perform model selection and combination.
S4. Run the primary pixel-level detection with the combined model I and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S5. The model I is trained on pixel-level features.
S5. Run the intermediate edge-level detection with the combined model II and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S6. The model II is trained on edge features.
S6. Run the advanced structured detection with the combined model III and judge whether a target is present; if there is no target, end this detection; if there is a target, go to step S7. The model III is trained on combined edge features.
S7. Fuse the detection information with the navigation information.
S8. Detection ends.
2. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 1, characterized in that in step S2 the preprocessing of the rear-view image comprises the following steps:
S21. Input the original rear-view image data, which contains left-side and right-side rear-view image data.
S22. Generate the region of interest in the image according to the calibration parameters and the actual blind zone requirements, and compute the distance and azimuth between each image data point and the camera.
S23. Execute the adaptive distortion correction algorithm: combine the calibration parameters, the image data points, and the distance and azimuth to the camera to complete the mapping of each point in the region of interest.
S24. Execute the view transformation algorithm: apply the image transformation matrix so that the observed region-of-interest data is in the best state for detection.
S25. Obtain the final image to be detected and feed it into the detection module for detection.
3. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 1 or 2, characterized in that in step S4 the purpose of the primary pixel-level detection is to filter out the candidate regions where vehicles may be, the selected feature is the luminance information of the Y channel, and the detection comprises the following steps:
S41. Take the rear-view region-of-interest image block from the preprocessed rear-view image as the model input.
S42. Build the pyramid levels to detect targets at different sizes, traversing from top to bottom and from left to right to search possible target locations.
S43. For each pixel location (x, y, width, height) in the i-th pyramid level, compare it against the model data and obtain the corresponding score.
S44. Combine the scores at every position across all pyramid levels and normalize them to 0-255 to obtain a probability distribution map.
S45. Filter the probability distribution map and divide it into connected regions, in order to smooth the image, remove noise, and obtain the sub-regions.
S46. Expand each candidate sub-region appropriately, rank the candidates by score, and decide the detection priority of the candidate regions.
4. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 3, characterized in that in step S5 the data processed by the intermediate edge-level detection are the candidate region boxes output by the primary pixel-level detection, the selected feature is edge gradient information, and the intermediate edge-level detection makes a decision to obtain the sub-block where each target lies.
5. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 4, characterized in that in step S6 the advanced structured detection confirms each sub-block on the basis of the intermediate edge-level output; while ensuring that targets are effectively detected, it removes false alarms; the selected feature is a structured feature, specifically a weighted combination of the intermediate edge-level features.
6. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 1, characterized in that in step S3 model selection chooses the optimal model and the matching parameters according to the environmental information in the navigation information.
7. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 6, characterized in that the navigation information includes road information, scene information, and weather information.
8. The rear-view blind zone vehicle detection method fusing vehicle-mounted navigation information according to claim 7, characterized in that the navigation information includes data concerning highways, urban roads, frontlighting, backlighting, and tunnels.
CN201711478171.5A 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information Active CN108460323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711478171.5A CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711478171.5A CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Publications (2)

Publication Number Publication Date
CN108460323A true CN108460323A (en) 2018-08-28
CN108460323B CN108460323B (en) 2022-05-20

Family

ID=63221219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711478171.5A Active CN108460323B (en) 2017-12-29 2017-12-29 Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Country Status (1)

Country Link
CN (1) CN108460323B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100312386A1 (en) * 2009-06-04 2010-12-09 Microsoft Corporation Topological-based localization and navigation
US20110081087A1 (en) * 2009-10-02 2011-04-07 Moore Darnell J Fast Hysteresis Thresholding in Canny Edge Detection
CN105512115A (en) * 2014-09-22 2016-04-20 惠州市德赛西威汽车电子股份有限公司 Vehicle navigation picture processing method
CN106529530A (en) * 2016-10-28 2017-03-22 上海大学 Monocular vision-based ahead vehicle detection method


Also Published As

Publication number Publication date
CN108460323B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN107392103B (en) Method and device for detecting road lane line and electronic equipment
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN111738314B (en) Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN107590470B (en) Lane line detection method and device
CN109034047A (en) A kind of method for detecting lane lines and device
CN110910378B (en) Bimodal image visibility detection method based on depth fusion network
JP5631581B2 (en) Road recognition device
JP2013225289A (en) Multi-lens camera apparatus and vehicle including the same
CN111209780A (en) Lane line attribute detection method and device, electronic device and readable storage medium
JP5180126B2 (en) Road recognition device
JP2008158958A (en) Road surface determination method and road surface determination device
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
CN112215306A (en) Target detection method based on fusion of monocular vision and millimeter wave radar
CN104902261A (en) Device and method for road surface identification in low-definition video streaming
JP5188429B2 (en) Environment recognition device
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111144301A (en) Road pavement defect quick early warning device based on degree of depth learning
CN107220632B (en) Road surface image segmentation method based on normal characteristic
CN112464914A (en) Guardrail segmentation method based on convolutional neural network
JP2014106739A (en) In-vehicle image processing device
KR20120098292A (en) Method for detecting traffic lane
JP5189556B2 (en) Lane detection device
JP5091897B2 (en) Stop line detector
CN112669615A (en) Parking space detection method and system based on camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant