CN108154129A - Target area determination method and system based on a vehicle vision system - Google Patents

Target area determination method and system based on a vehicle vision system

Info

Publication number
CN108154129A
CN108154129A (application number CN201711473798.1A)
Authority
CN
China
Prior art keywords
super pixel; pixel block; boundary; candidate frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711473798.1A
Other languages
Chinese (zh)
Inventor
苏帅
李寒松
乐国庆
张令川
许静
张立平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huahang Radio Measurement Research Institute
Original Assignee
Beijing Huahang Radio Measurement Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huahang Radio Measurement Research Institute filed Critical Beijing Huahang Radio Measurement Research Institute
Priority to CN201711473798.1A priority Critical patent/CN108154129A/en
Publication of CN108154129A publication Critical patent/CN108154129A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a target area determination method and system based on a vehicle vision system. The method includes: acquiring an image of the road ahead of the vehicle; segmenting the acquired image to obtain a superpixel map; merging related segments in the superpixel map; selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range; setting candidate bounding boxes for the selected superpixel blocks; selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range; and determining the target area from the selected candidate bounding boxes. The invention first segments the acquired image into superpixel blocks and then merges and screens them, which greatly reduces the number of superpixel blocks to be processed; the candidate bounding boxes formed from them are screened again. By screening both the superpixel blocks and the candidate bounding boxes, the invention greatly reduces the amount of computation in subsequent processing and improves the real-time performance of target area determination.

Description

Target area determination method and system based on a vehicle vision system
Technical field
The present invention relates to the technical field of vehicle vision systems, and in particular to a target area determination method and system based on a vehicle vision system.
Background technology
With the growth of car ownership in China, problems such as traffic congestion, frequent traffic accidents, and congestion-related pollution have become increasingly prominent, and intelligent transportation systems have emerged to alleviate them. Vehicle detection systems based on image detection and processing technology, as an important subsystem of intelligent transportation systems, have become a research hotspot at home and abroad. Vehicle detection is an important component of autonomous driving systems and driver assistance systems. By collecting information about the current road, an autonomous driving system obtains information about the other vehicles around it and can then perform operations such as speed adjustment, headway keeping, and merging. The application of driver assistance systems can greatly reduce the traffic accident rate and make road transport more efficient.
A vehicle detection system generally captures images of vehicles on urban and rural roads through a camera mounted on the vehicle and completes the processing on a computer. Because the outdoor environment is affected by many complex factors, the task is more challenging than target recognition in ordinary non-natural scenes. The main difficulties of vehicle detection are its real-time requirement and reducing the influence of illumination changes, scale changes, and partial occlusion on detection.
Scholars at home and abroad have carried out extensive research in the field of vehicle detection. Non-patent literature 1 (Dollar P, Appel R, Belongie S, et al. Fast Feature Pyramids for Object Detection [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2014, 36(8): 1532-45.) proposed a fast pyramid-feature target detection system. The authors extract image gradient and color features based on a fast pyramid model and then feed the whole-image features into an AdaBoost classifier for detection. Because the features of the entire image pyramid must be extracted and matched, the amount of computation increases, which not only fails to guarantee the real-time requirement of the system but also leads to more false detections in complex environments, posing serious safety risks.
Summary of the invention
In view of the above analysis, the present invention aims to provide a target area determination method and system based on a vehicle vision system, so as to solve the problem that the heavy computation of existing methods leads to poor real-time performance.
The objects of the present invention are mainly achieved through the following technical solutions:
In one aspect, the present invention provides a target area determination method based on a vehicle vision system, which includes the following steps: acquiring an image of the road ahead of the vehicle; segmenting the acquired image to obtain a superpixel map; merging related segments in the superpixel map; selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range; setting candidate bounding boxes for the selected superpixel blocks; selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range; and determining the target area from the selected candidate bounding boxes.
Further, in the above target area determination method based on a vehicle vision system, the preset range in the step of selecting the superpixel blocks whose area falls within a preset range is: more than 1000 pixels and less than 150000 pixels.
Further, in the above target area determination method based on a vehicle vision system, the step of setting candidate bounding boxes for the selected superpixel blocks further includes: arbitrarily selecting one of the selected superpixel blocks, and setting the vertical line through the pixel with the minimum horizontal coordinate in the selected superpixel block as the left boundary line; determining the adjacent superpixel blocks that share a common edge with the selected superpixel block and the related superpixel blocks that share a common edge with each adjacent superpixel block; setting the horizontal lines through the pixels with the minimum vertical coordinate in the selected superpixel block and in each adjacent superpixel block as upper boundary lines; setting the vertical lines through the pixels with the maximum horizontal coordinate in the selected superpixel block and in each adjacent superpixel block as right boundary lines, and also setting the vertical lines through the pixels with the maximum horizontal coordinate in each related superpixel block as right boundary lines; setting the horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block as lower boundary lines, and also setting the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block as lower boundary lines; and enclosing the determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines into multiple candidate bounding boxes.
Further, in the above target area determination method based on a vehicle vision system, the preset range in the step of selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio t falls within a preset range is: 0.8 ≤ t ≤ 1.5.
Further, in the above target area determination method based on a vehicle vision system, the step of determining the target area from the selected candidate bounding boxes further includes: calculating the weight ω(i, j) between every pair of adjacent superpixel blocks in each candidate bounding box; calculating the similarity ζ of the corresponding candidate bounding box from the weights ω(i, j); calculating the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box; determining the score VPS of the corresponding candidate bounding box from its similarity ζ, overflow rate Ψ, and missing rate Ξ; and determining the target area from the scores of the candidate bounding boxes.
Further, in the above target area determination method based on a vehicle vision system, determining the score VPS of the corresponding candidate bounding box from the similarity ζ, overflow rate Ψ, and missing rate Ξ of each selected candidate bounding box uses a scoring formula in which b_w is the width of the candidate bounding box, b_h is its height, Ψ_b is its overflow rate, Ξ_b is its missing rate, ζ_b is its similarity, and t and k are constants.
Further, in the above target area determination method based on a vehicle vision system, the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box are calculated by formulas in which:
S_i^O is the overflow area of the i-th superpixel block in the candidate bounding box, S^M is the missing area of the candidate bounding box, S_box is the area of the candidate bounding box, and N is the number of superpixel blocks in the candidate bounding box.
The present invention first segments the acquired image into superpixel blocks and then merges and screens the segmented superpixel blocks, which greatly reduces the number of superpixel blocks to be processed; in addition, the candidate bounding boxes that are formed are screened again. As can be seen, by screening both the superpixel blocks and the candidate bounding boxes, the present invention greatly reduces the amount of computation in subsequent processing, improves the real-time performance of target area determination, and helps ensure traffic safety.
In another aspect, the present invention also provides a target area determination system based on a vehicle vision system. The system includes: an acquisition module for acquiring an image of the road ahead of the vehicle; a segmentation module for segmenting the acquired image to obtain a superpixel map; a merging module for merging related segments in the superpixel map; a screening module for selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range; a bounding box setting module for setting candidate bounding boxes for the selected superpixel blocks; a bounding box screening module for selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range; and a determination module for determining the target area from the selected candidate bounding boxes.
Further, in the above target area determination system based on a vehicle vision system, the bounding box setting module further includes: a left boundary line setting submodule for arbitrarily selecting one of the selected superpixel blocks and setting the vertical line through the pixel with the minimum horizontal coordinate in the selected superpixel block as the left boundary line; a selection submodule for determining the adjacent superpixel blocks that share a common edge with the selected superpixel block and the related superpixel blocks that share a common edge with each adjacent superpixel block; an upper boundary line setting submodule for setting the horizontal lines through the pixels with the minimum vertical coordinate in the selected superpixel block and in each adjacent superpixel block as upper boundary lines; a right boundary line setting submodule for setting the vertical lines through the pixels with the maximum horizontal coordinate in the selected superpixel block and in each adjacent superpixel block as right boundary lines, and also setting the vertical lines through the pixels with the maximum horizontal coordinate in each related superpixel block as right boundary lines; a lower boundary line setting submodule for setting the horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block as lower boundary lines, and also setting the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block as lower boundary lines; and a candidate bounding box determination submodule for enclosing the determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines into multiple candidate bounding boxes.
Further, in the above target area determination system based on a vehicle vision system, the determination module further includes: a weight calculation submodule for calculating the weight ω(i, j) between every pair of adjacent superpixel blocks in each candidate bounding box; a similarity calculation submodule for calculating the similarity ζ of the corresponding candidate bounding box from the weights ω(i, j); an overflow rate and missing rate calculation submodule for calculating the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box; a score calculation submodule for determining the score of the corresponding candidate bounding box from its similarity ζ, overflow rate Ψ, and missing rate Ξ; and a target area determination submodule for determining the target area from the scores of the candidate bounding boxes.
Since this target area determination system corresponds to the above target area determination method, it has the same technical effects as described above.
In the present invention, the above technical solutions can also be combined with one another to realize more preferred combined solutions. Other features and advantages of the invention will be set forth in the following description, and some advantages will become apparent from the description or be understood by implementing the invention. The objectives and other advantages of the invention can be realized and obtained through the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
Description of the drawings
The drawings are only for the purpose of showing specific embodiments and are not to be considered as limiting the invention; throughout the drawings, the same reference signs denote the same components.
Fig. 1 is a flowchart of the target area determination method based on a vehicle vision system provided by an embodiment of the present invention;
Fig. 2 is an acquired image of the road ahead of the vehicle in an embodiment of the present invention;
Fig. 3 is the superpixel map obtained after segmenting the image of the road ahead in Fig. 2;
Fig. 4 is the superpixel map obtained after merging the superpixel blocks in Fig. 3;
Fig. 5 is a schematic diagram of setting candidate bounding boxes for the superpixel map in Fig. 4;
Fig. 6 is a schematic diagram of the overflow area and missing area of a candidate bounding box;
Fig. 7 is a structural block diagram of the target area determination system based on a vehicle vision system provided by an embodiment of the present invention.
Specific embodiment
The preferred embodiments of the present invention are described below with reference to the accompanying drawings, which form part of the application and, together with the embodiments of the invention, serve to explain the principles of the invention; they are not intended to limit the scope of the invention.
Method embodiment:
Referring to Fig. 1, Fig. 1 is a flowchart of the target area determination method based on a vehicle vision system provided by an embodiment of the present invention. The method can detect the region in front of the vehicle that contains a vehicle, i.e. the target area. As shown in the figure, the method includes the following steps:
Step S101: acquire an image of the road ahead of the vehicle. Fig. 2 shows an image of the road ahead captured by the on-board camera mounted on the vehicle.
Step S102: segment the acquired image to obtain a superpixel map.
The image shown in Fig. 2 is segmented to obtain the superpixel map shown in Fig. 3. In a specific implementation, a graph-based algorithm can be used, for example normalized cuts (N-Cuts); a gradient-based segmentation algorithm can also be used, for example Quick Shift or Mean Shift; simple linear iterative clustering (SLIC) may also be used.
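As an illustration of this step, the following is a minimal sketch of the segmentation using SLIC, one of the algorithms named above; the file name, n_segments, and compactness values are assumed for illustration and are not specified in the patent.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

# Read the frame captured by the on-board camera (file name is a placeholder).
image = io.imread("front_view.jpg")

# SLIC superpixel segmentation; the parameter values are illustrative only.
labels = slic(image, n_segments=600, compactness=10)

# labels[y, x] is the superpixel index of pixel (x, y), i.e. the superpixel map.
```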
Step S103: merge related segments in the superpixel map.
The superpixel blocks in the superpixel map of Fig. 3 are merged to obtain the merged superpixel map shown in Fig. 4. In a specific implementation, this can be achieved with a density-based clustering algorithm (density-based spatial clustering of applications with noise, DBSCAN). As can be seen from Fig. 4, this step merges the segments in Fig. 3 that belong to the same object; for example, all the segments belonging to the sky are merged into one large superpixel block.
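A minimal sketch of the merging step is given below. The patent names DBSCAN; here the clustering is run on the mean RGB color of each superpixel and, for brevity, ignores spatial adjacency, so it is only an approximation of the described merging. The eps and min_samples values are assumed.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def merge_superpixels(image, labels, eps=8.0, min_samples=1):
    """Merge superpixels whose mean colors cluster together (DBSCAN-based sketch)."""
    ids = np.unique(labels)
    # Mean color of every superpixel block.
    means = np.array([image[labels == i].mean(axis=0) for i in ids])
    clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(means)
    merged = labels.copy()
    for sp_id, c in zip(ids, clusters):
        # Blocks in the same color cluster get the same label; noise blocks keep a unique label.
        merged[labels == sp_id] = c if c >= 0 else sp_id + len(ids)
    return merged
```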
Step S104: from all superpixel blocks obtained after merging, select the superpixel blocks whose area falls within a preset range.
Since the pixel area of a vehicle generally lies within a preset range, regions whose area is obviously too large or too small to be a vehicle can be removed from the image. Larger regions may be the road surface, the sky, or buildings; smaller regions may be milestones and the like. Removing the overly large and overly small regions also cuts the connections between these regions and the surrounding superpixel blocks. In a specific implementation, for a 1920×1080-pixel image, for example, superpixel blocks with an area larger than 150000 pixels or smaller than 1000 pixels can be removed.
It should be noted that, in a specific implementation, the preset range can be determined according to the actual situation; the present embodiment does not limit it.
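A sketch of the area screening described above; the thresholds follow the 1000/150000-pixel example given for a 1920×1080 image and would be adjusted to the actual preset range.

```python
import numpy as np

def filter_by_area(labels, min_area=1000, max_area=150000):
    """Return the labels of superpixel blocks whose pixel count lies within the preset range."""
    ids, counts = np.unique(labels, return_counts=True)
    return [int(i) for i, n in zip(ids, counts) if min_area < n < max_area]
```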
Step S105: set candidate bounding boxes for the selected superpixel blocks.
First, a coordinate system is established for the superpixel map obtained in step S104; see Fig. 4 and Fig. 5 for how the coordinate system is set up.
Then, one of the superpixel blocks selected in step S104 is chosen arbitrarily, and the vertical line through the pixel with the minimum horizontal coordinate in the chosen superpixel block is set as the left boundary line. Specifically, superpixel block 1 shown in Fig. 5 can be chosen, and the vertical line aa' through pixel A, the pixel with the minimum horizontal coordinate in superpixel block 1, is set as the left boundary line.
To the right of the left boundary line aa', the adjacent superpixel blocks that share a common edge with the chosen superpixel block, and the related superpixel blocks that share a common edge with each adjacent superpixel block, are determined. Specifically, the superpixel blocks adjacent to superpixel block 1 include superpixel block 2 and superpixel block 3. The superpixel blocks adjacent to superpixel block 2 include superpixel blocks 4, 5, 6, and 7; the superpixel block adjacent to superpixel block 3 is superpixel block 6. Superpixel blocks 4, 5, 6, and 7 are therefore the related superpixel blocks of superpixel block 1.
The horizontal lines through the pixels with the minimum vertical coordinate in the chosen superpixel block and in each adjacent superpixel block are set as upper boundary lines. Specifically, for superpixel block 2 the point with the minimum vertical coordinate is point B, so the horizontal line bb' through B is set as an upper boundary line; for superpixel block 3 the point with the minimum vertical coordinate is point C, so the horizontal line cc' through C is also set as an upper boundary line; similarly, the horizontal line through the point with the minimum vertical coordinate of superpixel block 1 is also set as an upper boundary line (not shown). Thus three upper boundary lines are set in total in this embodiment.
The vertical lines through the pixels with the maximum horizontal coordinate in the chosen superpixel block and in each adjacent superpixel block are set as right boundary lines; the vertical lines through the pixels with the maximum horizontal coordinate in each superpixel block related to the chosen superpixel block are also set as right boundary lines. Specifically, the point with the maximum horizontal coordinate of superpixel block 2 is point D, so the vertical line dd' through D is set as a right boundary line; the point with the maximum horizontal coordinate of superpixel block 3 is point E, so the vertical line ee' through E is also set as a right boundary line. For related superpixel block 4 the point with the maximum horizontal coordinate is point F, so the vertical line ff' through F is also set as a right boundary line; similarly, the vertical lines through the points with the maximum horizontal coordinate of superpixel blocks 1, 5, 6, and 7 are also set as right boundary lines (not shown). Thus seven right boundary lines are set in total.
The horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block are set as lower boundary lines; the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block are also set as lower boundary lines. Specifically, the point with the maximum vertical coordinate of superpixel block 1 is point G, so the horizontal line gg' through G is set as a lower boundary line; similarly, the horizontal lines through the points with the maximum vertical coordinate of superpixel blocks 2 to 7 are also set as lower boundary lines (not shown). Thus seven lower boundary lines are set in total.
The determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines are enclosed into multiple candidate bounding boxes. Specifically, the above steps yield one left boundary line, three upper boundary lines, seven right boundary lines, and seven lower boundary lines; the left boundary line together with any upper boundary line, any right boundary line, and any lower boundary line encloses one candidate bounding box, so the boundary lines set above enclose multiple candidate bounding boxes.
Following the above method, all the selected superpixel blocks are traversed as the chosen superpixel block, and all candidate bounding boxes are set.
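The candidate-box construction can be sketched as follows. The helper functions (block_extent, neighbours) are illustrative; adjacency is taken as 4-connectivity on the label map, and the "related" blocks are the neighbours of the neighbours, as described above.

```python
import numpy as np
from itertools import product

def block_extent(labels, i):
    """Return (x_min, x_max, y_min, y_max) of superpixel block i."""
    ys, xs = np.nonzero(labels == i)
    return xs.min(), xs.max(), ys.min(), ys.max()

def neighbours(labels, i):
    """Labels of blocks sharing an edge (4-connectivity) with block i."""
    mask = labels == i
    grown = np.zeros_like(mask)
    grown[:-1] |= mask[1:];  grown[1:] |= mask[:-1]
    grown[:, :-1] |= mask[:, 1:];  grown[:, 1:] |= mask[:, :-1]
    return set(np.unique(labels[grown & ~mask])) - {i}

def candidate_boxes(labels, seed):
    """Enclose boundary lines derived from the seed block, its neighbours and related blocks."""
    adj = neighbours(labels, seed)
    rel = set().union(*(neighbours(labels, a) for a in adj)) - adj - {seed} if adj else set()
    x0, x1, y0, y1 = block_extent(labels, seed)
    lefts   = [x0]                                                      # one left boundary line
    tops    = [block_extent(labels, b)[2] for b in [seed, *adj]]        # upper boundary lines
    rights  = [block_extent(labels, b)[1] for b in [seed, *adj, *rel]]  # right boundary lines
    bottoms = [block_extent(labels, b)[3] for b in [*adj, *rel]]        # lower boundary lines
    return [(l, t, r, b) for l, t, r, b in product(lefts, tops, rights, bottoms) if r > l and b > t]
```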
Step S106: from all candidate bounding boxes, select the candidate bounding boxes whose aspect ratio falls within a preset range.
Extensive research shows that, whatever the pose in which a vehicle travels, its width-to-height ratio t lies within a certain range; with this range, candidate bounding boxes that fall outside it can largely be removed. In a specific implementation, the range can be set to 0.8 ≤ t ≤ 1.5. It should be noted that in this embodiment the width direction is along the X axis and the height direction is along the Y axis.
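The aspect-ratio screening is then a simple filter over the boxes; the 0.8 to 1.5 range follows the values given above.

```python
def filter_by_aspect_ratio(boxes, t_min=0.8, t_max=1.5):
    """Keep candidate boxes (left, top, right, bottom) whose width/height ratio is in range."""
    keep = []
    for l, t, r, b in boxes:
        ratio = (r - l) / max(b - t, 1)   # width along X, height along Y
        if t_min <= ratio <= t_max:
            keep.append((l, t, r, b))
    return keep
```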
Step S107: determine the target area from the selected candidate bounding boxes.
The score of each candidate bounding box is calculated separately; the specific calculation method is described in detail below.
First, one of the candidate bounding boxes selected in step S106 is chosen, and the weight ω(i, j) between every pair of adjacent superpixel blocks in that candidate bounding box is calculated.
Specifically, a graph model is established: each superpixel block in the candidate bounding box is a node, and the relationships between superpixel blocks are undirected edges. Since every node is a superpixel block produced by the segmentation algorithm, each node is connected not only to the adjacent nodes with which it shares a common edge, but also, indirectly, to the nodes that share a common edge with those adjacent nodes, which are here called related nodes. The weight between every pair of connected nodes i and j is defined by formula (1).
In formula (1), the quantities being compared are the mean values of node i and node j in a certain color space, and σ is a constant that controls the weight strength. Nodes i and j include both the directly connected adjacent nodes and the indirectly connected related nodes. The color space in the formula can be RGB, LUV, or the like.
Then, the similarity ζ of the candidate bounding box is calculated from the weights ω(i, j) according to formula (2).
In formula (2), N is the number of superpixel blocks in the candidate bounding box and ω(i, j) is the weight between superpixel blocks i and j. For superpixel blocks with no connection relation (neither direct nor indirect), the weight ω(i, j) is 0.
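The weight and similarity formulas appear only as images in the original publication, so the sketch below assumes a Gaussian kernel on the mean colors for ω(i, j) and a simple average of the pairwise weights for ζ; both forms are illustrative readings, not the patented expressions.

```python
import numpy as np

def pair_weight(mean_i, mean_j, sigma=20.0):
    """Assumed form exp(-||c_i - c_j||^2 / sigma^2); the text only states that the weight
    depends on the mean colors of the two blocks and a strength constant sigma."""
    diff = np.asarray(mean_i, dtype=float) - np.asarray(mean_j, dtype=float)
    return float(np.exp(-np.sum(diff ** 2) / sigma ** 2))

def box_similarity(means, connected_pairs, sigma=20.0):
    """means: block id -> mean color; connected_pairs: directly or indirectly connected pairs.
    Unconnected pairs contribute weight 0 and are simply omitted."""
    if not connected_pairs:
        return 0.0
    total = sum(pair_weight(means[i], means[j], sigma) for i, j in connected_pairs)
    return total / len(connected_pairs)
```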
Next, the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box are calculated.
Referring to Fig. 6, for the candidate bounding box shown in the figure (bounding box), there are six superpixel blocks whose centroids lie inside the candidate bounding box. Among them, superpixel blocks 2, 4, 5, and 6 spill over the bounding box, and the area of the part spilling over the box is called the overflow area. That is, the part of a superpixel block whose centroid lies inside the candidate bounding box but which itself lies outside the box is its overflow area. In addition, the area inside the candidate bounding box that does not belong to these six superpixel blocks is called the missing area.
The overflow rate Ψ and missing rate Ξ of the candidate bounding box are calculated by formulas (3) and (4).
In formulas (3) and (4), S_i^O is the overflow area of the i-th superpixel block in the candidate bounding box, S^M is the missing area of the candidate bounding box, S_box is the area of the candidate bounding box, and N is the number of superpixel blocks in the candidate bounding box.
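Formulas (3) and (4) are likewise images in the original publication; the sketch below uses one plausible reading, normalising the total overflow area and the missing area by the box area, and should be checked against the published formulas.

```python
import numpy as np

def overflow_and_missing_rate(labels, box):
    """box = (left, top, right, bottom) in pixel coordinates, inclusive."""
    l, t, r, b = box
    inside = np.zeros(labels.shape, dtype=bool)
    inside[t:b + 1, l:r + 1] = True
    s_box = inside.sum()
    overflow, covered = 0, 0
    for i in np.unique(labels[inside]):
        mask = labels == i
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()            # centroid of the superpixel block
        if t <= cy <= b and l <= cx <= r:        # centroid lies inside the candidate box
            overflow += (mask & ~inside).sum()   # part spilling outside the box
            covered += (mask & inside).sum()     # part covering the box interior
    missing = s_box - covered                    # in-box area not covered by those blocks
    return overflow / s_box, missing / s_box
```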
Finally, the score VPS of the candidate bounding box is determined from its similarity ζ, overflow rate Ψ, and missing rate Ξ. This score is the vehicle-likelihood score of the candidate bounding box and reflects the confidence VPS that a vehicle is present within it. In a specific implementation, the score VPS can be calculated according to formula (5).
In formula (5), b_w is the width of the candidate bounding box, b_h is its height, Ψ is its overflow rate, Ξ is its missing rate, ζ is its similarity, and t and k are constants. In a specific implementation, t can be set to 2 and k to 0.5.
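The score formula itself is not reproduced in this text version, so the function below is only an assumed combination of the quantities it names (higher similarity and lower overflow and missing rates raise the score, larger boxes are mildly favoured); it is not the patented expression. t = 2 and k = 0.5 follow the constants mentioned above.

```python
def vps_score(width, height, similarity, overflow_rate, missing_rate, t=2.0, k=0.5):
    """Assumed vehicle-likelihood score; illustrative only."""
    size_term = (width * height) ** k                    # mildly favour larger boxes
    penalty = (1.0 + overflow_rate + missing_rate) ** t  # penalise overflow and missing area
    return similarity * size_term / penalty
```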
In this way, the vehicle-likelihood score of every selected candidate bounding box is calculated, and the non-maximum suppression (NMS) algorithm is then used to select the desired target area.
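The final selection uses standard non-maximum suppression; the IoU threshold below is an assumed value, since the text only names the NMS algorithm.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over scored candidate boxes (left, top, right, bottom)."""
    def iou(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        ix0, iy0 = max(ax0, bx0), max(ay0, by0)
        ix1, iy1 = min(ax1, bx1), min(ay1, by1)
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]
```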
In the embodiment of the present invention, the acquired image is first segmented into superpixel blocks, and the segmented superpixel blocks are merged and screened, which greatly reduces the number of superpixel blocks to be processed; in addition, the candidate bounding boxes that are formed are screened further. As can be seen, by screening both the superpixel blocks and the candidate bounding boxes, the embodiment greatly reduces the amount of computation in subsequent processing, improves the real-time performance of target area determination, and helps ensure traffic safety.
System embodiment:
Referring to Fig. 7, Fig. 7 is a structural block diagram of the target area determination system based on a vehicle vision system provided by an embodiment of the invention. As shown in the figure, the system includes: an acquisition module 701 for acquiring an image of the road ahead of the vehicle; a segmentation module 702 for segmenting the acquired image to obtain a superpixel map; a merging module 703 for merging related segments in the superpixel map; a screening module 704 for selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range; a bounding box setting module 705 for setting candidate bounding boxes for the selected superpixel blocks; a bounding box screening module 706 for selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range; and a determination module 707 for determining the target area from the selected candidate bounding boxes.
In the above embodiment, the bounding box setting module 705 further includes: a left boundary line setting submodule for arbitrarily selecting one of the selected superpixel blocks and setting the vertical line through the pixel with the minimum horizontal coordinate in the selected superpixel block as the left boundary line; a selection submodule for determining the adjacent superpixel blocks that share a common edge with the selected superpixel block and the related superpixel blocks that share a common edge with each adjacent superpixel block; an upper boundary line setting submodule for setting the horizontal lines through the pixels with the minimum vertical coordinate in the selected superpixel block and in each adjacent superpixel block as upper boundary lines; a right boundary line setting submodule for setting the vertical lines through the pixels with the maximum horizontal coordinate in the selected superpixel block and in each adjacent superpixel block as right boundary lines, and also setting the vertical lines through the pixels with the maximum horizontal coordinate in each related superpixel block as right boundary lines; a lower boundary line setting submodule for setting the horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block as lower boundary lines, and also setting the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block as lower boundary lines; and a candidate bounding box determination submodule for enclosing the determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines into multiple candidate bounding boxes.
In the above embodiment, the determination module 707 further includes: a weight calculation submodule for calculating the weight ω(i, j) between every pair of adjacent superpixel blocks in each candidate bounding box; a similarity calculation submodule for calculating the similarity ζ of the corresponding candidate bounding box from the weights ω(i, j); an overflow rate and missing rate calculation submodule for calculating the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box; a score calculation submodule for determining the score of the corresponding candidate bounding box from its similarity ζ, overflow rate Ψ, and missing rate Ξ; and a target area determination submodule for determining the target area from the scores of the candidate bounding boxes.
For the specific implementation process of this system, refer to the above method embodiment; the details are not repeated here.
Since this target area determination system corresponds to the above target area determination method, it has the same technical effects as described above.
Those skilled in the art will understand that all or part of the flow of the above embodiment method can be implemented by a computer program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium such as a magnetic disk, an optical disc, a read-only memory, or a random access memory.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A target area determination method based on a vehicle vision system, characterized by including the following steps:
acquiring an image of the road ahead of the vehicle;
segmenting the acquired image to obtain a superpixel map;
merging related segments in the superpixel map;
selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range;
setting candidate bounding boxes for the selected superpixel blocks;
selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range;
determining the target area from the selected candidate bounding boxes.
2. The target area determination method based on a vehicle vision system according to claim 1, characterized in that the preset range in the step of selecting the superpixel blocks whose area falls within a preset range is: more than 1000 pixels and less than 150000 pixels.
3. The target area determination method based on a vehicle vision system according to claim 1, characterized in that the step of setting candidate bounding boxes for the selected superpixel blocks further includes:
arbitrarily selecting one of the selected superpixel blocks, and setting the vertical line through the pixel with the minimum horizontal coordinate in the selected superpixel block as the left boundary line;
determining the adjacent superpixel blocks that share a common edge with the selected superpixel block and the related superpixel blocks that share a common edge with each adjacent superpixel block;
setting the horizontal lines through the pixels with the minimum vertical coordinate in the selected superpixel block and in each adjacent superpixel block as upper boundary lines;
setting the vertical lines through the pixels with the maximum horizontal coordinate in the selected superpixel block and in each adjacent superpixel block as right boundary lines; and also setting the vertical lines through the pixels with the maximum horizontal coordinate in each related superpixel block as right boundary lines;
setting the horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block as lower boundary lines; and also setting the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block as lower boundary lines;
enclosing the determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines into multiple candidate bounding boxes.
4. The target area determination method based on a vehicle vision system according to claim 3, characterized in that the preset range in the step of selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio t falls within a preset range is: 0.8 ≤ t ≤ 1.5.
5. The target area determination method based on a vehicle vision system according to claim 4, characterized in that the step of determining the target area from the selected candidate bounding boxes further includes:
calculating the weight ω(i, j) between every pair of adjacent superpixel blocks in each candidate bounding box;
calculating the similarity ζ of the corresponding candidate bounding box from the weights ω(i, j);
calculating the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box;
determining the score VPS of the corresponding candidate bounding box from the similarity ζ, overflow rate Ψ, and missing rate Ξ of each selected candidate bounding box;
determining the target area from the scores of the candidate bounding boxes.
6. The target area determination method based on a vehicle vision system according to claim 5, characterized in that the score VPS of the corresponding candidate bounding box is determined from the similarity ζ, overflow rate Ψ, and missing rate Ξ of each selected candidate bounding box by a formula in which:
b_w is the width of the candidate bounding box, b_h is its height, Ψ_b is its overflow rate, Ξ_b is its missing rate, ζ_b is its similarity, and t and k are constants.
7. The target area determination method based on a vehicle vision system according to claim 5, characterized in that the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box are calculated by formulas in which:
S_i^O is the overflow area of the i-th superpixel block in the candidate bounding box, S^M is the missing area of the candidate bounding box, S_box is the area of the candidate bounding box, and N is the number of superpixel blocks in the candidate bounding box.
8. A target area determination system based on a vehicle vision system, characterized by including:
an acquisition module for acquiring an image of the road ahead of the vehicle;
a segmentation module for segmenting the acquired image to obtain a superpixel map;
a merging module for merging related segments in the superpixel map;
a screening module for selecting, from all superpixel blocks obtained after merging, the superpixel blocks whose area falls within a preset range;
a bounding box setting module for setting candidate bounding boxes for the selected superpixel blocks;
a bounding box screening module for selecting, from all candidate bounding boxes, the candidate bounding boxes whose aspect ratio falls within a preset range;
a determination module for determining the target area from the selected candidate bounding boxes.
9. The target area determination system based on a vehicle vision system according to claim 8, characterized in that the bounding box setting module further includes:
a left boundary line setting submodule for arbitrarily selecting one of the selected superpixel blocks and setting the vertical line through the pixel with the minimum horizontal coordinate in the selected superpixel block as the left boundary line;
a selection submodule for determining the adjacent superpixel blocks that share a common edge with the selected superpixel block and the related superpixel blocks that share a common edge with each adjacent superpixel block;
an upper boundary line setting submodule for setting the horizontal lines through the pixels with the minimum vertical coordinate in the selected superpixel block and in each adjacent superpixel block as upper boundary lines;
a right boundary line setting submodule for setting the vertical lines through the pixels with the maximum horizontal coordinate in the selected superpixel block and in each adjacent superpixel block as right boundary lines, and also setting the vertical lines through the pixels with the maximum horizontal coordinate in each related superpixel block as right boundary lines;
a lower boundary line setting submodule for setting the horizontal lines through the pixels with the maximum vertical coordinate in each adjacent superpixel block as lower boundary lines, and also setting the horizontal lines through the pixels with the maximum vertical coordinate in each related superpixel block as lower boundary lines;
a candidate bounding box determination submodule for enclosing the determined left boundary line, upper boundary lines, right boundary lines, and lower boundary lines into multiple candidate bounding boxes.
10. The target area determination system based on a vehicle vision system according to claim 8, characterized in that the determination module further includes:
a weight calculation submodule for calculating the weight ω(i, j) between every pair of adjacent superpixel blocks in each candidate bounding box;
a similarity calculation submodule for calculating the similarity ζ of the corresponding candidate bounding box from the weights ω(i, j);
an overflow rate and missing rate calculation submodule for calculating the overflow rate Ψ and missing rate Ξ of each selected candidate bounding box;
a score calculation submodule for determining the score of the corresponding candidate bounding box from the similarity ζ, overflow rate Ψ, and missing rate Ξ of each selected candidate bounding box;
a target area determination submodule for determining the target area from the scores of the candidate bounding boxes.
CN201711473798.1A 2017-12-29 2017-12-29 Target area determination method and system based on a vehicle vision system Pending CN108154129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711473798.1A CN108154129A (en) Target area determination method and system based on a vehicle vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711473798.1A CN108154129A (en) Target area determination method and system based on a vehicle vision system

Publications (1)

Publication Number Publication Date
CN108154129A true CN108154129A (en) 2018-06-12

Family

ID=62462379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711473798.1A Pending CN108154129A (en) Target area determination method and system based on a vehicle vision system

Country Status (1)

Country Link
CN (1) CN108154129A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829517A (en) * 2019-03-07 2019-05-31 成都医云科技有限公司 Target detection De-weight method and device
CN112989872A (en) * 2019-12-12 2021-06-18 华为技术有限公司 Target detection method and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989334A (en) * 2015-02-12 2016-10-05 中国科学院西安光学精密机械研究所 Road detection method based on monocular vision
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989334A (en) * 2015-02-12 2016-10-05 中国科学院西安光学精密机械研究所 Road detection method based on monocular vision
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Su Shuai et al., "Vehicle detection method in complex traffic environments based on graph theory", Journal of Beijing Jiaotong University *
Su Shuai, "Research on vehicle detection based on multi-view classifier fusion", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829517A (en) * 2019-03-07 2019-05-31 成都医云科技有限公司 Target detection De-weight method and device
CN112989872A (en) * 2019-12-12 2021-06-18 华为技术有限公司 Target detection method and related device
CN112989872B (en) * 2019-12-12 2024-05-07 华为技术有限公司 Target detection method and related device

Similar Documents

Publication Publication Date Title
CN104008645B Lane line prediction and early warning method applicable to urban roads
CN105260713B Lane line detection method and device
CN101334837B (en) Multi-method integrated license plate image positioning method
CN103824070B Rapid pedestrian detection method based on computer vision
CN105206109B Vehicle foggy-weather recognition and early warning system and method based on infrared CCD
CN103942560B High-resolution video vehicle detection method for intelligent traffic monitoring systems
CN105718872B (en) Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle
CN103065138A (en) Recognition method of license plate number of motor vehicle
CN103679205B Preceding vehicle detection method based on shadow hypothesis and hierarchical HOG symmetric feature verification
CN113902729A (en) Road surface pothole detection method based on YOLO v5 model
CN102968646A (en) Plate number detecting method based on machine learning
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN109190483B (en) Lane line detection method based on vision
CN110491132A Illegal vehicle parking detection method and device based on video frame image analysis
EP2813973B1 (en) Method and system for processing video image
CN110443142B (en) Deep learning vehicle counting method based on road surface extraction and segmentation
CN101369312B (en) Method and equipment for detecting intersection in image
CN103021179A (en) Real-time monitoring video based safety belt detection method
CN108154129A Target area determination method and system based on a vehicle vision system
CN107944392A Effective method for labeling targets in dense-crowd surveillance video at residential checkpoints
CN107464245A Method and device for locating image structure edges
CN102169583B Vehicle occlusion detection and segmentation method based on vehicle window positioning
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation
CN108460348A Road target detection method based on a three-dimensional model
CN104252707B Object detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180612