CN109508637A - Embedded real-time vehicle detection method and system - Google Patents


Info

Publication number
CN109508637A
CN109508637A (application CN201811177185.8A)
Authority
CN
China
Prior art keywords
shadow line
image
vehicle
time
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811177185.8A
Other languages
Chinese (zh)
Inventor
牟华英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eagle Vision Corp Ltd
Original Assignee
Eagle Vision Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eagle Vision Corp Ltd filed Critical Eagle Vision Corp Ltd
Priority to CN201811177185.8A priority Critical patent/CN109508637A/en
Publication of CN109508637A publication Critical patent/CN109508637A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of image processing and provides an embedded real-time vehicle detection method, comprising: acquiring a real-time road-surface image with a monocular camera; binarizing the image according to the road-surface gray value; finding the shadow lines in the image that satisfy preset conditions; obtaining the vertical difference information of the image from the shadow lines finally retained in step 3; obtaining potential vehicle rectangles from the vertical difference information; and classifying the potential vehicle rectangles and retaining the highest-scoring ones with a non-maximum suppression algorithm. The present invention also provides an embedded real-time vehicle detection system. Through image binarization, shadow-line filtering, and difference-information extraction, the invention determines potential vehicle rectangles, then uses classification and non-maximum suppression to obtain the final vehicle rectangles, realizing real-time vehicle detection under the limited resources of an embedded environment.

Description

Embedded real-time vehicle detection method and system
Technical field
The present invention relates to the field of image processing, and in particular to an embedded real-time vehicle detection method and system.
Background technique
With the progress of urbanization and the popularization of the automobile, traffic accidents have increased sharply, causing heavy casualties and economic losses. How to ensure that vehicles travel safely and quickly on the highway, and how to avoid accidents such as rear-end collisions, has become an important topic in the automotive field. Traditional passive safety measures are far from sufficient to prevent accidents, so the concept of automotive active safety has emerged, and the detection of vehicles ahead on the road has become a research hotspot in automotive active safety. Because visual sensing is information-rich and low-cost, it is widely applied in the field of automotive active safety.
Vehicle detection refers to the process of searching for and judging vehicles in an image by means of image sensing, obtaining various attributes of the vehicles in the image (such as position, speed, shape, and appearance).
At present, in the complex, dynamic environment of a vehicle traveling on a road, it is very difficult to accurately extract the moving targets of interest, owing to illumination changes, periodic object motion, and dynamic natural textures. Existing algorithms are often applicable only to particular scenarios, and most are computationally complex and cannot meet the real-time requirements of practical systems. Embedded resources are especially limited, so realizing real-time vehicle detection in an embedded environment is even more difficult.
Summary of the invention
Embodiments of the present invention aim to solve at least one of the technical problems existing in the prior art. To this end, embodiments of the present invention provide an embedded real-time vehicle detection method and system.
An embedded real-time vehicle detection method according to an embodiment of the present invention comprises:
Step 1: acquire a real-time road-surface image with a monocular camera;
Step 2: binarize the image according to the road-surface gray value;
Step 3: find the shadow lines in the image that satisfy preset conditions;
Step 4: obtain the vertical difference information of the image from the shadow lines finally retained in step 3;
Step 5: obtain potential vehicle rectangles from the vertical difference information;
Step 6: classify the potential vehicle rectangles and retain the highest-scoring ones with a non-maximum suppression algorithm.
In one embodiment, step 2 comprises: setting the gray value of pixels below the binarization threshold to 0 and the gray value of pixels at or above the binarization threshold to 255, thereby binarizing the image; the binarization threshold is the road-surface gray value of the image.
In one embodiment, step 3 comprises:
Step 31: label every shadow line on the image as [xl, xr, down], where a shadow line is a horizontal segment, xl is its left x-coordinate, xr its right x-coordinate, and down its y-coordinate;
Step 32: perform the first shadow-line removal under a first condition: any shadow line narrower than a preset width threshold is discarded;
Step 33: compute the minimum vehicle width wt;
Step 34: perform the second shadow-line removal under a second condition: a shadow line whose width xr − xl is less than the minimum vehicle width wt is discarded;
Step 35: perform the third shadow-line removal under a third condition: among shadow lines whose x-ranges intersect with intersection length / union length > 0.5 and which lie within 3 pixels of each other vertically, only the longest is retained.
In one embodiment, step 4 comprises: for each shadow line finally retained in step 3, denoting its midpoint x-coordinate by mid_x = (xl + xr) / 2 and the y-coordinate of the vehicle top by ux = down − (xr − xl) * k1; the column xlmax of the maximum left-side difference is found within [xl, ux, mid_x, down], and the column xrmax of the maximum right-side difference within [mid_x, ux, xr, down]; k1 is a preset coefficient.
In one embodiment, step 5 comprises: obtaining a potential vehicle rectangle from the left maximum-difference column xlmax, the right maximum-difference column xrmax, and the vehicle top u, where u = down − (xr − xl) * k2, k2 being a preset coefficient with 0.9 ≤ k2 ≤ 1.1.
An embodiment of the present invention also provides an embedded real-time vehicle detection system, comprising:
an acquisition module, for acquiring a real-time road-surface image with a monocular camera;
a binarization module, for binarizing the image according to the road-surface gray value;
a shadow-line module, for finding the shadow lines in the image that satisfy preset conditions;
a difference module, for obtaining the vertical difference information of the image from the shadow lines finally retained by the shadow-line module;
a rectangle module, for obtaining potential vehicle rectangles from the vertical difference information;
a classification module, for classifying the potential vehicle rectangles and retaining the highest-scoring ones with a non-maximum suppression algorithm.
In one embodiment, the binarization module is specifically configured to set the gray value of pixels below the binarization threshold to 0 and the gray value of pixels at or above the binarization threshold to 255, thereby binarizing the image; the binarization threshold is the road-surface gray value of the image.
In one embodiment, the shadow-line module comprises:
a marking unit, for labeling every shadow line on the image as [xl, xr, down], where a shadow line is a horizontal segment, xl is its left x-coordinate, xr its right x-coordinate, and down its y-coordinate;
a first removal unit, for performing the first shadow-line removal under a first condition: any shadow line narrower than a preset width threshold is discarded;
a computing unit, for computing the minimum vehicle width wt;
a second removal unit, for performing the second shadow-line removal under a second condition: a shadow line whose width xr − xl is less than the minimum vehicle width wt is discarded;
a third removal unit, for performing the third shadow-line removal under a third condition: among shadow lines whose x-ranges intersect with intersection length / union length > 0.5 and which lie within 3 pixels of each other vertically, only the longest is retained.
In one embodiment, the difference module is specifically configured to, for each shadow line finally retained by the shadow-line module, denote its midpoint x-coordinate by mid_x = (xl + xr) / 2 and the y-coordinate of the vehicle top by ux = down − (xr − xl) * k1, then find the column xlmax of the maximum left-side difference within [xl, ux, mid_x, down] and the column xrmax of the maximum right-side difference within [mid_x, ux, xr, down]; k1 is a preset coefficient.
In one embodiment, the rectangle module is specifically configured to obtain a potential vehicle rectangle from the left maximum-difference column xlmax, the right maximum-difference column xrmax, and the vehicle top u, where u = down − (xr − xl) * k2, k2 being a preset coefficient with 0.9 ≤ k2 ≤ 1.1.
The embedded real-time vehicle detection method and system of the embodiments of the present invention determine potential vehicle rectangles through image binarization, shadow-line filtering, and difference-information extraction, then obtain the final vehicle rectangles through classification and non-maximum suppression, realizing real-time vehicle detection under the limited resources of an embedded environment.
Additional aspects and advantages of the invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the invention.
Detailed description of the invention
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily understood from the following description of embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow diagram of the embedded real-time vehicle detection method of an embodiment of the present invention;
Fig. 2 is a block diagram of the embedded real-time vehicle detection system of an embodiment of the present invention;
Fig. 3 is a schematic diagram of acquiring a real-time highway road-surface image in an embodiment of the present invention;
Fig. 4 is a schematic diagram of binarizing the image in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the first shadow-line removal in an embodiment of the present invention;
Fig. 6 is a schematic diagram of computing the minimum vehicle width in an embodiment of the present invention;
Fig. 7 is a schematic diagram of the second shadow-line removal in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the third shadow-line removal in an embodiment of the present invention;
Fig. 9 is a schematic diagram of potential vehicle rectangles marked on the image in an embodiment of the present invention;
Fig. 10 is a schematic diagram of training-sample pictures used for cascade-network training in an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below, examples of which are shown in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they serve only to explain embodiments of the present invention and should not be construed as limiting them.
To a computer, an image is composed of pixels. For a grayscale image, the pixel data form a matrix: the rows of the matrix correspond to the image height (in pixels), the columns to the image width (in pixels), and the elements to the image pixels, the value of each element being the gray value of the corresponding pixel.
Referring to Fig. 1, the embedded real-time vehicle detection method of an embodiment of the present invention comprises:
Step 1: acquire a real-time road-surface image with a monocular camera.
Step 2: binarize the image according to the road-surface gray value.
Step 3: find the shadow lines in the image that satisfy preset conditions.
Step 4: obtain the vertical difference information of the image from the shadow lines finally retained in step 3.
Step 5: obtain potential vehicle rectangles from the vertical difference information.
Step 6: classify the potential vehicle rectangles and retain the highest-scoring ones with a non-maximum suppression algorithm.
Referring to Fig. 2, the embedded real-time vehicle detection system of an embodiment of the present invention comprises an acquisition module, a binarization module, a shadow-line module, a difference module, a rectangle module, and a classification module, described as follows:
the acquisition module acquires a real-time road-surface image with a monocular camera;
the binarization module binarizes the image according to the road-surface gray value;
the shadow-line module finds the shadow lines in the image that satisfy preset conditions;
the difference module obtains the vertical difference information of the image from the shadow lines finally retained by the shadow-line module;
the rectangle module obtains potential vehicle rectangles from the vertical difference information;
the classification module classifies the potential vehicle rectangles and retains the highest-scoring ones with a non-maximum suppression algorithm.
In this embodiment, the embedded real-time vehicle detection method takes the embedded real-time vehicle detection system, or the individual modules in the system, as the executors of its steps. Specifically, the acquisition module executes step 1, the binarization module step 2, the shadow-line module step 3, the difference module step 4, the rectangle module step 5, and the classification module step 6.
In step 1, the acquisition module acquires a real-time highway road-surface image with a monocular camera, as shown in Fig. 3. In the acquired image, the pixels sharing the most frequent gray value are usually the highway road-surface pixels; that is, the gray value with the highest count is the highway road-surface gray value, denoted pix_most.
The acquired image is a grayscale image with a resolution of 512*360. The vanishing point of the road surface is known and denoted vp (xv, yv), and the minimum vehicle bottom width is denoted min_w. Once the camera is calibrated, each pixel corresponds to a position in real space, so the width of a vehicle in the image will not fall below the width corresponding to the narrowest real-space vehicle at the bottom of the image; this width is denoted min_w.
In step 2, the binarization module binarizes the image, i.e. sets the gray value of every pixel to either 0 or 255, so that the whole image shows only a black-and-white visual effect.
Specifically, step 2 comprises: setting the gray value of pixels below the binarization threshold to 0 and the gray value of pixels at or above the binarization threshold to 255, thereby binarizing the image; the binarization threshold is the road-surface gray value of the image.
Correspondingly, in the embedded real-time vehicle detection system, the binarization module is specifically configured to set the gray value of pixels below the binarization threshold to 0 and the gray value of pixels at or above the binarization threshold to 255, thereby binarizing the image; the binarization threshold is the road-surface gray value of the image.
Taking the gray value pix_most obtained in step 1 as the binarization threshold th_road, pixels with gray values below the threshold are set to 0 and pixels at or above it to 255. The purpose of this arrangement is to obtain the shadow at the vehicle tail; the binarized image is shown in Fig. 4.
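As a rough illustration of this binarization step, the following is a minimal NumPy sketch (the function names and the synthetic image are my own assumptions, not from the patent):

```python
import numpy as np

def dominant_gray(img):
    """Estimate the road-surface gray value as the most frequent pixel value."""
    hist = np.bincount(img.ravel(), minlength=256)
    return int(hist.argmax())

def binarize_by_road_gray(img, th_road):
    """Pixels below th_road (darker than the road, e.g. vehicle shadows)
    become 0; pixels at or above it become 255."""
    return np.where(img < th_road, 0, 255).astype(np.uint8)

# Tiny synthetic example: road gray 120, a darker shadow patch of gray 30.
img = np.full((6, 6), 120, dtype=np.uint8)
img[4:, 1:5] = 30
th = dominant_gray(img)
binary = binarize_by_road_gray(img, th)
```

In a real frame, `dominant_gray` would be computed over the road region of the 512×360 image to obtain pix_most.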
In step 3, specifically:
Step 31: label every shadow line on the image as [xl, xr, down], where a shadow line is a horizontal segment, xl is its left x-coordinate, xr its right x-coordinate, and down its y-coordinate.
Step 32: perform the first shadow-line removal under a first condition: any shadow line narrower than a preset width threshold is discarded.
Step 33: compute the minimum vehicle width wt.
Step 34: perform the second shadow-line removal under a second condition: a shadow line whose width xr − xl is less than the minimum vehicle width wt is discarded.
Step 35: perform the third shadow-line removal under a third condition: among shadow lines whose x-ranges intersect with intersection length / union length > 0.5 and which lie within 3 pixels of each other vertically, only the longest is retained.
Correspondingly, in the embedded real-time vehicle detection system, the shadow-line module comprises:
a marking unit, for labeling every shadow line on the image as [xl, xr, down], where a shadow line is a horizontal segment, xl is its left x-coordinate, xr its right x-coordinate, and down its y-coordinate;
a first removal unit, for performing the first shadow-line removal under a first condition: any shadow line narrower than a preset width threshold is discarded;
a computing unit, for computing the minimum vehicle width wt;
a second removal unit, for performing the second shadow-line removal under a second condition: a shadow line whose width xr − xl is less than the minimum vehicle width wt is discarded;
a third removal unit, for performing the third shadow-line removal under a third condition: among shadow lines whose x-ranges intersect with intersection length / union length > 0.5 and which lie within 3 pixels of each other vertically, only the longest is retained.
Steps 31 to 35 may take the shadow-line module, or the individual units in the module, as the executors of the steps. Specifically, the marking unit executes step 31, the first removal unit step 32, the computing unit step 33, the second removal unit step 34, and the third removal unit step 35.
In step 31, the marking unit finds each continuous, uninterrupted run of pixels with gray value 255 on the same row and treats it as one horizontal segment, where xl is the left x-coordinate of the shadow line, xr its right x-coordinate, and down its y-coordinate. In this way the marking unit can denote any shadow line on the binarized image as [xl, xr, down].
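The marking step amounts to a run-length scan over each row of the binary image. A minimal sketch under the description above (names are my own):

```python
import numpy as np

def mark_shadow_lines(binary):
    """Mark every uninterrupted horizontal run of 255-valued pixels as
    [xl, xr, down], following the description: xl/xr are the run's
    left/right x-coordinates and down is its row (y) coordinate."""
    lines = []
    h, w = binary.shape
    for y in range(h):
        x = 0
        while x < w:
            if binary[y, x] == 255:
                xl = x
                while x < w and binary[y, x] == 255:
                    x += 1
                lines.append([xl, x - 1, y])  # run ended at column x-1
            else:
                x += 1
    return lines

img = np.zeros((3, 8), dtype=np.uint8)
img[1, 2:6] = 255                     # one run on row 1, columns 2..5
lines = mark_shadow_lines(img)
```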
In step 32, after all shadow lines have been found, the first removal unit removes every shadow line narrower than 12 pixels, 12 being the preset width threshold. This operation removes discrete points and noise points from the image. The effect after removal is shown in Fig. 5.
In step 33, the computing unit determines the real-space width corresponding to each shadow line. As shown in Fig. 6, by the properties of similar triangles, with vp (xv, yv) the vanishing point of the highway road surface and h the image height, the minimum vehicle bottom width mapped to a row with ordinate down is wt = min_w * (down − yv) / (h − yv), where h − yv is the height from the bottom of the image to the vanishing point, down − yv is the height from the row to the vanishing point, and min_w is the assumed width of the narrowest vehicle at the bottom row. In this way the minimum vehicle width at each row can be computed.
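The similar-triangles formula can be written directly as a one-line function (an illustrative sketch; the example numbers are assumptions, with h = 360 matching the stated image resolution):

```python
def min_vehicle_width(down, yv, h, min_w):
    """Minimum plausible vehicle width at row `down`, by similar triangles:
    wt = min_w * (down - yv) / (h - yv), where yv is the vanishing-point
    row, h the image height, and min_w the narrowest vehicle width at the
    bottom row of the image."""
    return min_w * (down - yv) / (h - yv)

# At the bottom row the minimum width is min_w itself; halfway between
# the vanishing point and the bottom it shrinks by half.
w_bottom = min_vehicle_width(down=360, yv=120, h=360, min_w=40)
w_half = min_vehicle_width(down=240, yv=120, h=360, min_w=40)
```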
In step 34, the width of each shadow line is xr − xl, where xl is the left x-coordinate of the shadow line and xr its right x-coordinate. As shown in Fig. 7, if xr − xl < wt, i.e. the shadow line is narrower than the minimum vehicle width, the second removal unit concludes that it is not a vehicle and excludes it.
In step 35, to exclude redundant duplicate shadow lines, as shown in Fig. 8, the third removal unit retains, among shadow lines whose x-ranges intersect with intersection length / union length > 0.5 and which lie within 3 pixels of each other vertically, only the longest one, and discards the others.
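One way to realize this third removal is a pairwise comparison: a line survives only if no overlapping neighbor (intersection/union > 0.5, rows within 3 pixels) is strictly longer. This is a naive O(n²) sketch under those assumptions, not the patent's implementation:

```python
def remove_overlapping(lines):
    """Keep only the longest of any group of shadow lines [xl, xr, down]
    whose x-ranges overlap with intersection/union > 0.5 and whose rows
    differ by at most 3 pixels."""
    def overlap_ratio(a, b):
        inter = min(a[1], b[1]) - max(a[0], b[0]) + 1
        union = max(a[1], b[1]) - min(a[0], b[0]) + 1
        return inter / union if inter > 0 else 0.0

    keep = []
    for i, a in enumerate(lines):
        longest = True
        for j, b in enumerate(lines):
            if i != j and abs(a[2] - b[2]) <= 3 and overlap_ratio(a, b) > 0.5:
                if (b[1] - b[0]) > (a[1] - a[0]):  # a strictly longer rival
                    longest = False
                    break
        if longest:
            keep.append(a)
    return keep

lines = [[10, 50, 100], [12, 48, 101], [200, 240, 100]]
kept = remove_overlapping(lines)
```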
In step 4, for each shadow line in the image finally retained by the shadow-line module, the difference module denotes the midpoint x-coordinate by mid_x = (xl + xr) / 2 and the upper bound for computing the differences by ux = down − (xr − xl) * k1, so that both lower side edges of the vehicle stand out more clearly. Here k1 is a preset coefficient; in this embodiment, many experiments show that k1 = 0.3 works well. The column xlmax of the maximum left-side difference is found within [xl, ux, mid_x, down], and the column xrmax of the maximum right-side difference within [mid_x, ux, xr, down].
That is, the difference module subtracts each column of the image matrix from the next, obtaining the gray-level change of the image. Because the gray level changes sharply at image edges, edges can be extracted from this change: the vehicle side edges are the columns where the gray value changes most, i.e. the columns with the largest absolute difference.
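The column-difference idea can be sketched in a few lines of NumPy (an illustrative version; summing the per-row differences into one score per column boundary is my own simplification):

```python
import numpy as np

def column_difference(img):
    """Absolute difference between adjacent columns of a grayscale image.
    Vertical edges (such as vehicle sides) show up as column boundaries
    with a large summed difference."""
    d = np.abs(np.diff(img.astype(np.int32), axis=1))
    return d.sum(axis=0)              # one score per column boundary

img = np.zeros((4, 6), dtype=np.uint8)
img[:, 3:] = 255                      # a vertical edge between columns 2 and 3
scores = column_difference(img)
```

In the method described above, this search would be restricted to the sub-regions [xl, ux, mid_x, down] and [mid_x, ux, xr, down] of each shadow line.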
In step 5, as shown in Fig. 9, the rectangle module obtains a potential vehicle rectangle from the left maximum-difference column xlmax, the right maximum-difference column xrmax, and the vehicle top ux, where ux = down − (xr − xl) * k2 and k2 is a preset coefficient.
In this embodiment, k2 ranges from 0.9 to 1.1, so ux may be down − (xr − xl) * 0.9 or down − (xr − xl) * 1.1. Through many experiments, k2 = 0.92 gives the best results, i.e. ux = down − (xr − xl) * 0.92.
In step 6, the potential vehicle rectangles are classified and each rectangle receives a score; the NMS (non-maximum suppression) algorithm is then used to retain the highest-scoring potential vehicle rectangles.
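The patent does not spell out its NMS variant; the following is the standard greedy formulation over [x1, y1, x2, y2] boxes, shown as a sketch:

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, then
    discard any remaining box whose IoU with a kept box exceeds iou_thresh."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) <= iou_thresh for k in kept):
            kept.append(i)
    return kept                       # indices of the retained boxes

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```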
To maximize real-time performance, two cascaded CNNs are trained, with input resolutions A (18×18) and B (40×40) respectively. The main purpose of using two networks is to reduce computation: because its input is small, network A has low computational complexity and runs quickly.
As shown in Fig. 10, 40,000 vehicle-tail pictures are used as training samples and 10,000 pictures as the test set. For each sample, 20 crops with IoU (Intersection over Union) greater than 0.8 are randomly chosen. Network A reaches an accuracy of 99.53% and network B 99.95%. Candidates classified as positive by A are further classified by B. The platform is a quad-core ARM9 with a single-core frequency of 1 GHz. With all timings measured single-threaded and convolutions accelerated with OpenBLAS, each A classification takes 2 ms and each B classification 8 ms; a full single-frame detection takes 50–80 ms, and with multithreading the real-time requirement can be met.
In network A, the 18 × 18 samples are convolved to extract features automatically. The convolution kernels are 3 × 3 in width and height; the numbers of kernels are 1 in the first layer, 3 in the second, and 200 in the third. The ReLU rectified linear function f(x) = max(0, x) is used; its main role is to add non-linearity between the layers of the neural network. Max pooling is used to reduce the output size and mitigate over-fitting.
In network B, the 40 × 40 samples are convolved to extract features automatically. The convolution kernels are 3 × 3 in width and height; the numbers of kernels are 16 in the first layer, 32 in the second, 64 in the third, and 240 in the fourth. The ReLU rectified linear function f(x) = max(0, x) is used; its main role is to add non-linearity between the layers of the neural network. Max pooling is used to reduce the output size and mitigate over-fitting.
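The two per-layer operations named above, ReLU and max pooling, can be demonstrated on a tiny array (a NumPy sketch of just these building blocks, not of the full networks):

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2: halves each spatial dimension,
    keeping the maximum of each non-overlapping 2x2 block."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]     # drop odd trailing row/column
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[-1.0, 2.0, 0.5, -3.0],
              [ 4.0, 0.0, 1.0,  2.0]])
a = relu(x)                           # negatives clipped to 0
p = max_pool_2x2(a)                   # one max per 2x2 block
```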
The embedded real-time vehicle detection method and system of the embodiments of the present invention determine potential vehicle rectangles through image binarization, shadow-line filtering, and difference-information extraction, then obtain the final vehicle rectangles through classification and non-maximum suppression, realizing real-time vehicle detection under the limited resources of an embedded environment.
In the description of embodiments of the present invention, it is to be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise", are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the embodiments of the present invention. In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; a feature defined as "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of embodiments of the present invention, "plurality" means two or more, unless otherwise specifically defined.
In the description of embodiments of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" are to be understood broadly: a connection may, for example, be fixed, detachable, or integral; mechanical, electrical, or a communication link; direct, or indirect through an intermediary; internal to two elements, or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of these terms in embodiments of the present invention can be understood according to the circumstances.
In embodiments of the present invention, unless otherwise expressly specified and limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are not in direct contact but contact each other through another feature between them. Moreover, a first feature being "on", "above", or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or merely means that the first feature is at a higher level than the second; a first feature being "under", "below", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely means that the first feature is at a lower level than the second.
The following disclosure provides many different embodiments or examples for realizing different structures of embodiments of the present invention. To simplify the disclosure, the components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the present invention. In addition, embodiments of the present invention may repeat reference numerals and/or letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. Furthermore, embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "a schematic embodiment", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processing module, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transfer a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, otherwise suitably processing it, and then stored in a computer memory.
It should be understood that each part of the embodiments of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiment or a combination thereof. In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although the embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art can make changes, modifications, replacements and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. An embedded real-time vehicle detection method, characterized by comprising:
Step 1, acquiring a real-time road surface image using a monocular camera;
Step 2, binarizing the image according to the road surface gray value;
Step 3, finding the shadow lines in the image that satisfy preset conditions;
Step 4, obtaining vertical difference information of the image according to the shadow lines finally obtained in step 3;
Step 5, obtaining potential vehicle rectangular frames according to the vertical difference information;
Step 6, classifying the potential vehicle rectangular frames and keeping the highest-scoring potential vehicle rectangular frame according to a non-maximum suppression algorithm.
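Step 6 relies on the standard non-maximum suppression algorithm to keep only the highest-scoring rectangle among overlapping candidates. The following illustrative sketch is not part of the claims; the IoU threshold of 0.5 is an assumption, since the claims do not fix one:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest remaining score
        keep.append(best)
        # discard every remaining box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For two heavily overlapping candidates and one disjoint one, `nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], [0.9, 0.8, 0.7])` keeps the indices of the first and third boxes.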
2. The embedded real-time vehicle detection method according to claim 1, wherein step 2 comprises: setting the gray value of pixels in the image below a binarization threshold to 0 and the gray value of pixels greater than or equal to the binarization threshold to 255, thereby binarizing the image; wherein the binarization threshold is the road surface gray value in the image.
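The thresholding of claim 2 maps each pixel to 0 or 255 around the road-surface gray value. An illustrative NumPy sketch (not part of the claims; how the road gray value is estimated, e.g. from a reference road region, is left open here):

```python
import numpy as np

def binarize(gray_img, road_gray):
    """Claim-2-style binarization: pixels below the road-surface gray
    value become 0, all other pixels become 255."""
    out = np.where(gray_img < road_gray, 0, 255)
    return out.astype(np.uint8)
```

With `road_gray = 100`, a pixel of value 90 maps to 0 and a pixel of value 120 maps to 255.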
3. The embedded real-time vehicle detection method according to claim 2, wherein step 3 comprises:
Step 31, labeling all shadow lines in the image as [xl, xr, down]; wherein a shadow line is a horizontal line segment, xl denotes the left-end abscissa of the shadow line, xr denotes the right-end abscissa of the shadow line, and down denotes the ordinate of the shadow line;
Step 32, performing a first shadow-line removal according to a first condition; wherein the first condition is: no shadow line whose width is less than a preset width threshold is retained;
Step 33, calculating a minimum vehicle width wt;
Step 34, performing a second shadow-line removal according to a second condition; wherein the second condition is: if the width xr - xl of a shadow line is less than the minimum vehicle width wt, the shadow line is not retained;
Step 35, performing a third shadow-line removal according to a third condition; wherein the third condition is: among shadow lines within 3 pixels of each other vertically whose abscissa ranges intersect with intersection length / union length > 0.5, only the longest shadow line is retained.
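The three removal passes of claim 3 amount to plain list filtering. An illustrative sketch (not part of the claims; the width threshold and minimum vehicle width are inputs here, and how shadow lines are extracted from the binary image is not shown):

```python
def filter_shadow_lines(lines, width_thresh, wt):
    """lines: list of [xl, xr, down] shadow lines, as in claim 3.

    Pass 1: drop lines narrower than the preset width threshold.
    Pass 2: drop lines narrower than the minimum vehicle width wt.
    Pass 3: among vertically close (within 3 px) lines whose abscissa
    ranges overlap with intersection/union > 0.5, keep only the longest.
    """
    # Passes 1 and 2: width-based removal.
    lines = [l for l in lines if l[1] - l[0] >= width_thresh]
    lines = [l for l in lines if l[1] - l[0] >= wt]

    # Pass 3: scan longest-first so the longest line in each cluster wins.
    lines = sorted(lines, key=lambda l: l[1] - l[0], reverse=True)
    kept = []
    for l in lines:
        duplicate = False
        for k in kept:
            if abs(l[2] - k[2]) <= 3:                     # vertically close
                inter = min(l[1], k[1]) - max(l[0], k[0])  # x-overlap length
                union = max(l[1], k[1]) - min(l[0], k[0])  # x-union length
                if inter > 0 and inter / union > 0.5:
                    duplicate = True
                    break
        if not duplicate:
            kept.append(l)
    return kept
```

For example, with a width threshold of 5 and wt = 40, the line [5, 95, 52] is suppressed by the nearby, longer line [0, 100, 50] (overlap ratio 0.9), while a disjoint line such as [200, 260, 50] survives.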
4. The embedded real-time vehicle detection method according to claim 3, wherein step 4 comprises: according to the shadow lines finally obtained in step 3, denoting the midpoint abscissa of a shadow line by mid_x = (xl + xr)/2 and the ordinate of the vehicle top by ux = down - (xr - xl) * k1, finding the left-side difference-maximum abscissa xlmax in the region [xl, ux, mid_x, down], and finding the right-side difference-maximum abscissa xrmax in the region [mid_x, ux, xr, down], where k1 denotes a preset calculation coefficient.
5. The embedded real-time vehicle detection method according to claim 4, wherein step 5 comprises: obtaining a potential vehicle rectangular frame according to the left-side difference-maximum abscissa xlmax, the right-side difference-maximum abscissa xrmax and the vehicle top ordinate ux;
wherein ux = down - (xr - xl) * k2, k2 denotes a preset calculation coefficient, and 0.9 ≤ k2 ≤ 1.1.
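Claims 4 and 5 turn each surviving shadow line into a candidate box: a window above the line is scanned for the strongest difference response on each half, giving the left and right edges, while ux gives the top and down the bottom. The claims do not specify the difference operator, so the following illustrative sketch (not part of the claims) assumes a column-wise sum of absolute horizontal differences, which responds to vertical edges; k1 = 1.0 is likewise a placeholder:

```python
import numpy as np

def candidate_box(gray_img, xl, xr, down, k1=1.0):
    """Build a potential vehicle box from one shadow line [xl, xr, down].

    mid_x and ux follow claim 4; the per-column edge-strength measure
    (sum of absolute horizontal differences) is an assumption.
    """
    mid_x = (xl + xr) // 2
    ux = max(0, int(down - (xr - xl) * k1))       # vehicle top ordinate
    window = gray_img[ux:down, xl:xr].astype(np.int32)
    # Edge strength between adjacent columns inside the search window.
    diff = np.abs(np.diff(window, axis=1)).sum(axis=0)
    mid_rel = mid_x - xl
    xlmax = xl + int(np.argmax(diff[:mid_rel]))     # left-side maximum
    xrmax = mid_x + int(np.argmax(diff[mid_rel:]))  # right-side maximum
    return (xlmax, ux, xrmax, down)                 # (left, top, right, bottom)
```

For a synthetic 20x20 image with a bright 255-valued block spanning columns 6-13 and rows 10-17 above a shadow line [4, 16, 18], the strongest left and right responses fall at the block's side edges, yielding the box (5, 6, 13, 18).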
6. An embedded real-time vehicle detection system, characterized by comprising:
an acquisition module for acquiring a real-time road surface image using a monocular camera;
a binarization module for binarizing the image according to the road surface gray value;
a shadow-line module for finding the shadow lines in the image that satisfy preset conditions;
a difference module for obtaining vertical difference information of the image according to the shadow lines finally obtained by the shadow-line module;
a rectangular-frame module for obtaining potential vehicle rectangular frames according to the vertical difference information;
a classification module for classifying the potential vehicle rectangular frames and keeping the highest-scoring potential vehicle rectangular frame according to a non-maximum suppression algorithm.
7. The embedded real-time vehicle detection system according to claim 6, wherein the binarization module is specifically configured to set the gray value of pixels in the image below a binarization threshold to 0 and the gray value of pixels greater than or equal to the binarization threshold to 255, thereby binarizing the image; wherein the binarization threshold is the road surface gray value in the image.
8. The embedded real-time vehicle detection system according to claim 7, wherein the shadow-line module comprises:
a labeling unit for labeling all shadow lines in the image as [xl, xr, down]; wherein a shadow line is a horizontal line segment, xl denotes the left-end abscissa of the shadow line, xr denotes the right-end abscissa of the shadow line, and down denotes the ordinate of the shadow line;
a first removal unit for performing a first shadow-line removal according to a first condition; wherein the first condition is: no shadow line whose width is less than a preset width threshold is retained;
a calculation unit for calculating a minimum vehicle width wt;
a second removal unit for performing a second shadow-line removal according to a second condition; wherein the second condition is: if the width xr - xl of a shadow line is less than the minimum vehicle width wt, the shadow line is not retained;
a third removal unit for performing a third shadow-line removal according to a third condition; wherein the third condition is: among shadow lines within 3 pixels of each other vertically whose abscissa ranges intersect with intersection length / union length > 0.5, only the longest shadow line is retained.
9. The embedded real-time vehicle detection system according to claim 8, wherein the difference module is specifically configured to: according to the shadow lines finally obtained by the shadow-line module, denote the midpoint abscissa of a shadow line by mid_x = (xl + xr)/2 and the ordinate of the vehicle top by ux = down - (xr - xl) * k1, find the left-side difference-maximum abscissa xlmax in the region [xl, ux, mid_x, down], and find the right-side difference-maximum abscissa xrmax in the region [mid_x, ux, xr, down], where k1 denotes a preset calculation coefficient.
10. The embedded real-time vehicle detection system according to claim 9, wherein the rectangular-frame module is specifically configured to obtain a potential vehicle rectangular frame according to the left-side difference-maximum abscissa xlmax, the right-side difference-maximum abscissa xrmax and the vehicle top ordinate ux; wherein ux = down - (xr - xl) * k2, k2 denotes a preset calculation coefficient, and 0.9 ≤ k2 ≤ 1.1.
CN201811177185.8A 2018-10-10 2018-10-10 Embedded real-time vehicle detection method and system Withdrawn CN109508637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811177185.8A CN109508637A (en) 2018-10-10 2018-10-10 Embedded real-time vehicle detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811177185.8A CN109508637A (en) 2018-10-10 2018-10-10 Embedded real-time vehicle detection method and system

Publications (1)

Publication Number Publication Date
CN109508637A true CN109508637A (en) 2019-03-22

Family

ID=65746419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811177185.8A Withdrawn CN109508637A (en) 2018-10-10 2018-10-10 Embedded real-time vehicle detection method and system

Country Status (1)

Country Link
CN (1) CN109508637A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682455A (en) * 2012-05-10 2012-09-19 天津工业大学 Front vehicle detection method based on monocular vision
CN105956632A (en) * 2016-05-20 2016-09-21 浙江宇视科技有限公司 Target detection method and device
CN106096531A (en) * 2016-05-31 2016-11-09 安徽省云力信息技术有限公司 A kind of traffic image polymorphic type vehicle checking method based on degree of depth study
CN106548135A (en) * 2016-10-17 2017-03-29 北海益生源农贸有限责任公司 A kind of road barrier detection method
CN107085696A (en) * 2016-10-15 2017-08-22 安徽百诚慧通科技有限公司 A kind of vehicle location and type identifier method based on bayonet socket image
CN107122734A (en) * 2017-04-25 2017-09-01 武汉理工大学 A kind of moving vehicle detection algorithm based on machine vision and machine learning
CN108229248A (en) * 2016-12-14 2018-06-29 贵港市瑞成科技有限公司 Vehicle checking method based on underbody shade
CN108256385A (en) * 2016-12-28 2018-07-06 南宁市浩发科技有限公司 The front vehicles detection method of view-based access control model


Similar Documents

Publication Publication Date Title
CN112949633B (en) An infrared target detection method based on improved YOLOv3
CN104778453B (en) A kind of night pedestrian detection method based on infrared pedestrian&#39;s brightness statistics feature
Pamula Road traffic conditions classification based on multilevel filtering of image content using convolutional neural networks
CN109508710A (en) Based on the unmanned vehicle night-environment cognitive method for improving YOLOv3 network
CN108399362A (en) A kind of rapid pedestrian detection method and device
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
CN104915642B (en) Front vehicles distance measuring method and device
CN112633149B (en) Domain-adaptive foggy-day image target detection method and device
CN105206109A (en) Infrared CCD based foggy day identifying early-warning system and method for vehicle
CN105354985A (en) Fatigue driving monitoring device and method
CN104700072A (en) Lane line historical frame recognition method
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
DE102018008442A1 (en) Method for weather and / or visibility detection
CN108830319B (en) Image classification method and device
CN112131981A (en) Driver fatigue detection method based on skeleton data behavior recognition
CN114639115B (en) Human body key point and laser radar fused 3D pedestrian detection method
CN111553214A (en) Method and system for detecting smoking behavior of drivers
Rathore Lane detection for autonomous vehicles using OpenCV library
Mingzhou et al. Detection of highway lane lines and drivable regions based on dynamic image enhancement algorithm under unfavorable vision
CN108830131A (en) Traffic target detection and distance measuring method based on deep learning
CN110276318A (en) Nighttime highway rain recognition method, device, computer equipment and storage medium
CN113963272A (en) A UAV image target detection method based on improved yolov3
CN108256378A (en) Driver Fatigue Detection based on eyeball action recognition
Chen RETRACTED ARTICLE: Road vehicle recognition algorithm in safety assistant driving based on artificial intelligence
Cheng et al. A multi-feature fusion algorithm for driver fatigue detection based on a lightweight convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190322