CN106951898B - Vehicle candidate area recommendation method and system and electronic equipment - Google Patents


Info

Publication number
CN106951898B
CN106951898B (application CN201710153509.3A)
Authority
CN
China
Prior art keywords
image
pixel
sum
pixel point
value
Prior art date
Legal status
Active
Application number
CN201710153509.3A
Other languages
Chinese (zh)
Other versions
CN106951898A (en)
Inventor
吴子章
王凡
唐锐
Current Assignee
Zongmu Technology Shanghai Co Ltd
Original Assignee
Zongmu Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Zongmu Technology Shanghai Co Ltd filed Critical Zongmu Technology Shanghai Co Ltd
Priority to CN201710153509.3A
Publication of CN106951898A
Application granted
Publication of CN106951898B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The invention provides a vehicle candidate region recommendation method and system and electronic equipment, wherein the vehicle candidate region recommendation method comprises the steps of preprocessing an input image based on significance analysis to obtain a target image containing a vehicle image; carrying out binarization processing on the target image to obtain a binarized image; and carrying out gradient-based boundary merging on the binary image to obtain a recommended vehicle candidate region. The vehicle candidate region recommendation method and system and the electronic device can more accurately position the vehicle candidate region.

Description

Vehicle candidate area recommendation method and system and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a vehicle candidate area recommendation method and system and electronic equipment.
Background
Vehicle detection technology is an important component of intelligent transportation systems. Intelligent traffic management needs to acquire objective and effective road traffic information through vehicle detection technology to obtain basic data such as traffic flow, vehicle speed, road occupancy, vehicle spacing, and vehicle type, so that intelligent means such as monitoring, control, analysis, decision-making, scheduling, and dispersion can be applied purposefully.
Specifically, vehicle detection technology refers to the process of searching for and determining a vehicle in an image by image-sensing means, so as to obtain various attributes of the vehicle in the image (such as position, speed, shape, and appearance). It belongs to the field of automotive active safety and in particular is one of the key technologies for realizing rear-end collision early warning and automatic emergency braking functions. At present, vehicle detection technology is widely applied in intelligent transportation systems and Advanced Driver Assistance Systems (ADAS). In a city's intelligent traffic system, video sensors installed at numerous traffic checkpoints can generate enormous amounts of video data every day. Urban traffic has high density and serious congestion, and road users are diverse, so detecting vehicles against the complex background of urban traffic is vital to urban traffic management and urban public safety. In advanced driver assistance systems, vehicle detection technology is mainly used in the Forward Collision Warning system (FCW): the distance, direction, and relative speed between the host vehicle and the preceding vehicle are determined by vehicle detection, and the driver is warned when there is a potential collision risk.
At present, the mainstream vehicle detection approach is to first screen candidate regions and then perform further accurate confirmation by machine learning or deep learning means to obtain the position information of the vehicle in the image. Generally, candidate regions are screened by a candidate region recommendation method. Thus, the more accurately the candidate region recommendation method locates its regions, the more it aids the classifier: more accurate positioning leads to higher scores returned by the classifier, so false alarms can be suppressed to some extent.
In the prior art, methods for recommending candidate regions can be roughly divided into the following categories:
(1) Grouping methods: methods that break an image up into fragments and then aggregate them, such as the selective search algorithm.
(2) Window scoring methods: methods that generate a large number of windows, score them, and then filter out those with low scores, such as the objectness algorithm.
(3) Methods between the two classes above, such as MultiBox.
However, existing candidate region recommendation methods place high demands on weather conditions, road conditions, and background environment; accurate positioning is difficult to obtain; the methods are difficult to apply uniformly to different types of vehicles; and they are prone to false alarms. Therefore, how to locate the vehicle candidate region more accurately has become a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention aims to provide a vehicle candidate region recommendation method, system, and electronic device that preprocess an input image based on saliency analysis to obtain, after screening, a candidate region containing the target vehicle, and then perform gradient-based boundary merging on that candidate region to obtain the recommended candidate region, thereby accurately locating the vehicle candidate region.
To achieve the above and other related objects, the present invention provides a vehicle candidate region recommendation method, including the steps of: preprocessing an input image based on significance analysis to obtain a target image containing a vehicle image; carrying out binarization processing on the target image to obtain a binarized image; and carrying out gradient-based boundary merging on the binary image to obtain a recommended vehicle candidate region.
In an embodiment of the invention, the input image is sampled Y-channel image information.
In an embodiment of the present invention, the preprocessing the input image based on the saliency analysis to obtain the target image including the vehicle image includes the following steps:
traversing the input image to obtain the pixel value of each pixel in the input image;
calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum and minimum distance sums;
calculating the difference between the maximum and minimum distance sums, and normalizing the distance sum corresponding to each pixel point according to this difference;
normalizing the input image to obtain a stretched image, where the range of pixel values after normalizing the input image is the same as the range of the normalized distance sums;
calculating the significance characteristic value of each pixel point based on the sum of the pixel value and the distance of each pixel point in the stretched image to obtain a significance analysis image;
and subtracting the stretched image from the significance analysis image to obtain a target image.
In an embodiment of the present invention, the binarization processing is performed on the target image by the following steps:
subtracting the pixel sum of each pixel point and the adjacent pixel point on its left in the horizontal direction from the pixel sum of that pixel point and the adjacent pixel point on its right, to obtain a first difference value;
subtracting the pixel sum of each pixel point and the two pixel points above it in the vertical direction from the pixel sum of that pixel point and the two pixel points below it, to obtain a second difference value;
when at least one of the first difference value and the second difference value is larger than a preset threshold value, setting the pixel value of the corresponding pixel point to be 1; otherwise, setting the pixel value of the corresponding pixel point to be 0.
In an embodiment of the present invention, the gradient-based boundary merging on the binarized image to obtain the recommended vehicle candidate region includes the following steps:
recording each horizontal line segment in the binarized image;
merging adjacent or one-row-apart horizontal line segments until they are combined into a single line segment, where merging means moving the upper horizontal segment down onto the lower one and superimposing the lengths of the two segments;
and selecting a square region with the base candidate line as its bottom edge and the length of that line as its side length, as the recommended vehicle candidate region.
Meanwhile, the invention provides a vehicle candidate region recommendation system which comprises a significance analysis module, a binarization module and a boundary merging module;
the saliency analysis module is used for preprocessing an input image based on saliency analysis to obtain a target image containing a vehicle image;
the binarization processing module is used for carrying out binarization processing on the target image to obtain a binarization image;
and the boundary merging module is used for performing gradient-based boundary merging on the binary image to obtain a recommended vehicle candidate region.
In an embodiment of the invention, the input image is sampled Y-channel image information.
In an embodiment of the present invention, the significance analysis module performs the following operations:
traversing the input image to obtain the pixel value of each pixel in the input image;
calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum and minimum distance sums;
calculating the difference between the maximum and minimum distance sums, and normalizing the distance sum corresponding to each pixel point according to this difference;
normalizing the input image to obtain a stretched image, where the range of pixel values after normalizing the input image is the same as the range of the normalized distance sums;
calculating the significance characteristic value of each pixel point based on the sum of the pixel value and the distance of each pixel point in the stretched image to obtain a significance analysis image;
and subtracting the stretched image from the significance analysis image to obtain a target image.
In an embodiment of the present invention, the binarization processing module performs the following operations:
subtracting the pixel sum of each pixel point and the adjacent pixel point on its left in the horizontal direction from the pixel sum of that pixel point and the adjacent pixel point on its right, to obtain a first difference value;
subtracting the pixel sum of each pixel point and the two pixel points above it in the vertical direction from the pixel sum of that pixel point and the two pixel points below it, to obtain a second difference value;
when at least one of the first difference value and the second difference value is larger than a preset threshold value, setting the pixel value of the corresponding pixel point to be 1; otherwise, setting the pixel value of the corresponding pixel point to be 0.
In an embodiment of the present invention, the boundary merging module performs the following operations:
recording each horizontal line segment in the binarized image;
merging adjacent or one-row-apart horizontal line segments until they are combined into a single line segment, where merging means moving the upper horizontal segment down onto the lower one and superimposing the lengths of the two segments;
and selecting a square region with the base candidate line as its bottom edge and the length of that line as its side length, as the recommended vehicle candidate region.
In addition, the invention also provides electronic equipment comprising any one of the vehicle candidate region recommendation systems.
As described above, the vehicle candidate region recommendation method, system, and electronic device according to the present invention have the following advantageous effects:
(1) A target image with relatively rich information is obtained through saliency analysis, and directionally biased boundary enhancement is performed by computing unequal gradients in the horizontal and vertical directions, providing directional edge material for the subsequent determination of candidate base edges;
(2) candidate base edges are screened using the edge-enhanced binarized image, and a line-dropping mechanism further enlarges the feature range of the base-edge candidate region, so that the vehicle candidate region can be located more accurately.
Drawings
FIG. 1 is a flow chart of a vehicle candidate area recommendation method of the present invention;
FIG. 2 is a flow chart illustrating the preprocessing of an input image based on saliency analysis according to the present invention;
FIG. 3 is a flow chart of the image binarization process of the present invention;
FIG. 4 is a flow chart illustrating gradient-based boundary merging for a binarized target image according to the present invention;
FIG. 5 is a schematic diagram of segment merging according to the present invention;
FIG. 6 is a schematic diagram of a vehicle candidate area recommendation system according to the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to the present invention.
Description of the element reference numerals
1 significance analysis module
2 binarization module
3 boundary merging module
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention.
It should be noted that the drawings provided in this embodiment only illustrate the basic idea of the invention. The drawings show only the components related to the invention rather than the number, shape, and size of components in an actual implementation; in practice the type, quantity, and proportion of the components may vary freely, and the component layout may be more complicated.
The vehicle candidate region recommendation method, system, and electronic device of the present invention can determine the vehicle candidate region in an image more accurately, based on a method combining saliency analysis and gradient-based boundary merging, so that the candidate region can be sent to a classifier for more accurate judgment, thereby identifying the target vehicle image and improving the accuracy of vehicle detection.
Referring to fig. 1, a vehicle candidate region recommendation method of the present invention includes the steps of:
step S1, preprocessing the input image based on the saliency analysis to obtain a target image including a vehicle image.
In the invention, the input image is acquired by an image acquisition device and uses the YUV format. YUV is a color encoding method adopted by European television systems, in which "Y" represents luminance (Luma), i.e., the grayscale value, while "U" and "V" represent chrominance (Chroma), describing the color and saturation of the image and specifying the color of each pixel. That is, the Y channel carries the luminance information of the image, the U and V channels carry its color information, and a color picture is formed by combining the Y-, U-, and V-channel information. In the present invention, the input image is Y-channel image information that has undergone sampling processing.
As shown in fig. 2, preprocessing the input image based on the saliency analysis to obtain the target image including the vehicle image includes the following steps:
step S11, traversing the input image, acquiring the pixel value of each pixel in the input image, and recording the maximum pixel value and the minimum pixel value.
Step S12, calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum and minimum distance sums.
Preferably, the distance from each pixel point to each other pixel point in the input image is a euclidean distance, but is not limited to the euclidean distance. Euclidean distance is a commonly used definition of distance, referring to the true distance between two points in m-dimensional space, or the natural length of a vector (i.e., the distance of the point from the origin). The euclidean distance in two and three dimensions is the actual distance between two points.
In the invention, the sum of the distances from a pixel point to all other pixel points is used as a measure of that pixel point's contrast.
Step S13, calculating the difference between the maximum distance sum and the minimum distance sum, and normalizing the distance sum corresponding to each pixel point according to the difference.
Specifically, the distance sum corresponding to each pixel point is linearly mapped to the range of 0-255 according to the difference value between the maximum distance sum and the minimum distance sum.
After normalizing the distance sum of each pixel point according to the difference between the maximum and minimum distance sums, the normalized data are expanded row by row to obtain an array expressing the efficiency of each pixel value. That is, the normalized row data are spliced together in top-to-bottom order to obtain an array of size 256, indexed from 0 to 255.
Step S14, normalizing the input image to obtain a stretched image; the range of pixel values after normalizing the input image is the same as the range of the normalized distance sums.
Specifically, when the input image is subjected to normalization processing, the pixel values of the pixel points are linearly mapped to a range of 0-255, so that a stretched image is obtained.
Step S15, calculating the saliency characteristic value of each pixel point based on the pixel value and distance sum of each pixel point in the stretched image, to obtain a saliency analysis image.
Specifically, the pixel value of each pixel point in the stretched image is used as an index into the array expressing the efficiency of each pixel value, and the corresponding value looked up from that array is taken as the saliency characteristic value of the pixel point. The saliency characteristic values of all pixel points form the saliency analysis image.
Step S16, subtracting the stretched image from the saliency analysis image to obtain the target image.
The saliency analysis image and the stretched image lie in the same value range and have a mapping relationship. Moreover, the saliency analysis image enhances the regions of the input image where the contrast is relatively strong, whereas the stretched image is approximately balanced. Taking the difference between the saliency analysis image and the stretched image yields the target image: the relatively un-enhanced regions of the input image are eliminated, the contrast-enhanced region, i.e. the region containing the vehicle image, is retained, and the other background regions are removed, so that a target image in which the object of interest, the vehicle image, stands out is obtained.
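The flow of steps S11 to S16 can be sketched as follows. This is a minimal illustrative implementation under stated assumptions: the "distance" between pixel points is taken in pixel-value space (which is what makes a single 256-entry array indexed by grey level possible), the function name `saliency_target` is ours, and negative values after the final subtraction are clipped to zero; the patent does not fix these details.

```python
import numpy as np

def saliency_target(img):
    """Steps S11-S16: saliency preprocessing of a sampled Y-channel image."""
    flat = img.astype(np.int64).ravel()
    hist = np.bincount(flat, minlength=256)          # S11: pixel statistics
    levels = np.arange(256, dtype=np.int64)
    # S12: for each grey level, sum of value-distances to every other pixel
    dist_sum = np.array([(np.abs(levels - v) * hist).sum() for v in range(256)])
    # S13: normalize the distance sums to 0-255 (the per-value lookup array)
    d_min, d_max = dist_sum.min(), dist_sum.max()
    lut = ((dist_sum - d_min) * 255 // max(d_max - d_min, 1)).astype(np.uint8)
    # S14: linearly stretch the input image itself to 0-255
    i_min, i_max = int(img.min()), int(img.max())
    stretched = ((img.astype(np.int64) - i_min) * 255
                 // max(i_max - i_min, 1)).astype(np.uint8)
    # S15: look each stretched pixel value up in the array -> saliency image
    saliency = lut[stretched]
    # S16: subtract the stretched image from the saliency image (clipped at 0)
    return np.clip(saliency.astype(np.int64) - stretched, 0, 255).astype(np.uint8)
```

The `max(..., 1)` guards keep the divisions well defined on a constant image, where both ranges collapse to zero.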
And step S2, performing binarization processing on the target image to obtain a binarized image.
In order to make the contrast of the target image stronger, binarization processing is performed on the target image. The binary image has obvious black and white effect, and is more convenient for positioning the vehicle candidate area.
Specifically, as shown in fig. 3, the target image is subjected to binarization processing by:
step S21, the sum of the pixel sum of each pixel point on the target image and the left pixel point in the horizontal direction is subtracted from the sum of the pixel sum of the right pixel point in the horizontal direction, so as to obtain a first difference.
It should be noted that, when the sum of the pixels of the two pixels exceeds 255, the sum of the pixels is corrected to 255.
Step S22, for each pixel point on the target image, the pixel sum of that pixel point and the two pixel points above it in the vertical direction is subtracted from the pixel sum of that pixel point and the two pixel points below it, to obtain a second difference value.
It should be noted that, when the sum of the pixels of the three pixels exceeds 255, the sum of the pixels is corrected to 255.
Step S23, when at least one of the first difference value and the second difference value is larger than a preset threshold value, setting the pixel value of the corresponding pixel point to be 1; otherwise, setting the pixel value of the corresponding pixel point to be 0.
Thus, a binary image can be obtained.
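Steps S21 to S23 admit a direct sketch. The threshold value below is an assumption (the patent leaves it unspecified), and the absolute difference is used so that the left/right and up/down ordering does not matter; pixel sums are capped at 255 as required (two-pixel sums in S21, three-pixel sums in S22).

```python
import numpy as np

def binarize(target, threshold=40):
    """Steps S21-S23: edge-oriented binarization of the target image.

    threshold=40 is an assumed value, not taken from the patent."""
    t = target.astype(np.int64)
    h, w = t.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(2, h - 2):
        for x in range(1, w - 1):
            left = min(t[y, x] + t[y, x - 1], 255)    # pixel + left neighbour
            right = min(t[y, x] + t[y, x + 1], 255)   # pixel + right neighbour
            d1 = abs(right - left)                    # first difference (S21)
            up = min(t[y, x] + t[y - 1, x] + t[y - 2, x], 255)
            down = min(t[y, x] + t[y + 1, x] + t[y + 2, x], 255)
            d2 = abs(down - up)                       # second difference (S22)
            # S23: 1 if either difference exceeds the threshold, else 0
            out[y, x] = 1 if (d1 > threshold or d2 > threshold) else 0
    return out
```

On a test image with a strong vertical edge, pixels straddling the edge are set to 1 while flat regions stay 0, which is the black-and-white contrast effect the text describes.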
And step S3, carrying out boundary merging based on gradient on the binary image to obtain a recommended vehicle candidate region.
As shown in fig. 4, step S3 includes the steps of:
and step S31, recording the line segment of each horizontal direction in the binary image.
Specifically, horizontal line segments are acquired from the binarized image, and the number of lines of each horizontal line segment in the binarized image in the vertical direction, and a start point and an end point in the horizontal direction are recorded.
Step S32, merging the adjacent or alternate horizontal line segments until the line segments are merged into one line segment; merging refers to moving the upper horizontal line segment downward to the lower horizontal line segment, and superimposing the lengths of the two line segments.
Specifically, in the vertical direction, the relationship between adjacent straight-line segments is searched from bottom to top. A straight-line segment adjacent above is moved down by one or more rows onto the segment below it, so that the two segments may coincide. When the upper and lower segments are merged, the resulting length is the superposition of the lengths of the two segments after the move. The number of rows allowed between the upper and lower segments is determined according to the specific situation.
For example, as shown in fig. 5, segment 2 and segment 3 are first combined to obtain segment length L1, and the combined segment is then combined with segment 1 to obtain a final segment with length L2.
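The recording and merging of steps S31 and S32, including the FIG. 5 behaviour (segment 2 merges with segment 3, then the combined segment merges with segment 1), can be sketched as follows. The one-row "alternate" gap tolerance, the requirement that segments overlap horizontally before merging, and the fact that gaps are measured from the base row are assumptions of this sketch.

```python
def horizontal_runs(binary):
    """Step S31: record each horizontal run of 1s as (row, start, end)."""
    runs = []
    for y, row in enumerate(binary):
        x, w = 0, len(row)
        while x < w:
            if row[x] == 1:
                start = x
                while x < w and row[x] == 1:
                    x += 1
                runs.append((y, start, x - 1))
            else:
                x += 1
    return runs

def merge_runs(runs, max_gap=1):
    """Step S32: move upper segments down onto lower ones and superimpose
    their horizontal extents; adjacent or one-row-apart segments merge."""
    pending = sorted(runs, key=lambda r: -r[0])  # lowest segments first
    merged = []
    while pending:
        y, s, e = pending.pop(0)                 # current base candidate line
        changed = True
        while changed:
            changed = False
            for i, (y2, s2, e2) in enumerate(pending):
                overlaps = not (e2 < s or e < s2)
                if 0 < y - y2 <= max_gap + 1 and overlaps:
                    s, e = min(s, s2), max(e, e2)   # superimpose the lengths
                    pending.pop(i)
                    changed = True
                    break
        merged.append((y, s, e))
    return merged
```

With three stacked, partially overlapping runs like those of FIG. 5, the two upper segments fold into the lowest one, whose extent becomes the union of all three.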
Step S33, selecting a square region with the base candidate line as its bottom edge and the length of that line as its side length, as the recommended vehicle candidate region.
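Step S33 then turns each merged base candidate line into a square region whose side equals the line's length. A sketch, assuming image coordinates (x1, y1, x2, y2) with y increasing downwards (a convention the patent does not state):

```python
def candidate_squares(base_lines):
    """Step S33: for each (row, start col, end col) base candidate line,
    build the square with that line as its bottom edge and the line's
    length as its side length."""
    boxes = []
    for y, s, e in base_lines:
        side = e - s + 1
        boxes.append((s, y - side + 1, e, y))  # (x1, y1, x2, y2)
    return boxes
```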
After the vehicle candidate region is obtained, the vehicle candidate region is output to a classifier module so as to accurately determine the position information of the vehicle in the image.
Referring to fig. 6, the vehicle candidate region recommendation system of the present invention includes a saliency analysis module 1, a binarization module 2, and a boundary merging module 3.
The saliency analysis module 1 is configured to pre-process the input image based on saliency analysis to obtain a target image including a vehicle image.
In the invention, the input image is acquired by an image acquisition device and uses the YUV format. YUV is a color encoding method adopted by European television systems, in which "Y" represents luminance (Luma), i.e., the grayscale value, while "U" and "V" represent chrominance (Chroma), describing the color and saturation of the image and specifying the color of each pixel. That is, the Y channel carries the luminance information of the image, the U and V channels carry its color information, and a color picture is formed by combining the Y-, U-, and V-channel information. In the present invention, the input image is Y-channel image information that has undergone sampling processing.
As shown in fig. 2, the significance analysis module 1 performs the following operations:
step S11, traversing the input image, acquiring the pixel value of each pixel in the input image, and recording the maximum pixel value and the minimum pixel value.
Step S12, calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum and minimum distance sums.
Preferably, the distance from each pixel point to each other pixel point in the input image is a euclidean distance, but is not limited to the euclidean distance. Euclidean distance is a commonly used definition of distance, referring to the true distance between two points in m-dimensional space, or the natural length of a vector (i.e., the distance of the point from the origin). The euclidean distance in two and three dimensions is the actual distance between two points.
In the invention, the sum of the distances from a pixel point to all other pixel points is used as a measure of that pixel point's contrast.
Step S13, calculating the difference between the maximum distance sum and the minimum distance sum, and normalizing the distance sum corresponding to each pixel point according to the difference.
Specifically, the distance sum corresponding to each pixel point is linearly mapped to the range of 0-255 according to the difference value between the maximum distance sum and the minimum distance sum.
After normalizing the distance sum of each pixel point according to the difference between the maximum and minimum distance sums, the normalized data are expanded row by row to obtain an array expressing the efficiency of each pixel value. That is, the normalized row data are spliced together in top-to-bottom order to obtain an array of size 256, indexed from 0 to 255.
Step S14, normalizing the input image to obtain a stretched image; the range of pixel values after normalizing the input image is the same as the range of the normalized distance sums.
Specifically, when the input image is subjected to normalization processing, the pixel values of the pixel points are linearly mapped to a range of 0-255, so that a stretched image is obtained.
Step S15: calculate the saliency feature value of each pixel point from its pixel value in the stretched image and its distance sum, obtaining a saliency analysis image.
Specifically, the pixel value of each pixel point in the stretched image is used as an index into the array representing the weight of each pixel value, and the value looked up there serves as the saliency feature value of that pixel point. The saliency feature values of all pixel points form the saliency analysis image.
Step S16: subtract the stretched image from the saliency analysis image to obtain the target image.
The saliency analysis image and the stretched image lie in the same value range and are related by the mapping above. Moreover, the saliency analysis image enhances the regions of the input image with relatively strong contrast, whereas the stretched image is approximately uniform. Taking their difference therefore yields the target image: the regions of the input image that were not enhanced are suppressed, while the contrast-enhanced region, which contains the vehicle image, is retained. Background regions are thus eliminated, and the object of interest, the vehicle image, stands out in the target image.
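Steps S15-S16 then reduce to a table lookup followed by a clipped subtraction. In this sketch, `lut` is assumed to be a 256-entry `uint8` weight array indexed by pixel value and `stretched` a `uint8` stretched image; both names are illustrative, and clipping negative differences to zero is an added assumption to keep the result a valid image.

```python
import numpy as np

def target_image(stretched, lut):
    """Steps S15-S16: build the saliency analysis image by table lookup,
    then subtract the stretched image from it."""
    saliency = lut[stretched]          # per-pixel lookup by pixel value
    # Subtract in a signed type and clip so values cannot wrap around.
    diff = saliency.astype(np.int16) - stretched.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```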
The binarization processing module 2 is connected to the saliency analysis module 1 and binarizes the target image to obtain a binarized image.
Binarization is applied to strengthen the contrast of the target image further: the resulting image has a clear black-and-white appearance, which makes it easier to locate the vehicle candidate region.
Specifically, as shown in fig. 3, the binarization processing module 2 binarizes the target image through the following steps:
Step S21: for each pixel point of the target image, the sum of the pixel values of that pixel point and its right neighbor in the horizontal direction is subtracted from the sum of the pixel values of that pixel point and its left neighbor, giving a first difference.
Note that whenever the sum of the two pixel values exceeds 255, it is clipped to 255.
Step S22: for each pixel point of the target image, the sum of the pixel values of that pixel point and its two lower neighbors in the vertical direction is subtracted from the sum of the pixel values of that pixel point and its two upper neighbors, giving a second difference.
Note that whenever the sum of the three pixel values exceeds 255, it is clipped to 255.
Step S23: when at least one of the first difference and the second difference exceeds a preset threshold, the pixel value of the corresponding pixel point is set to 1; otherwise it is set to 0.
Thus, a binary image can be obtained.
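Under one plausible reading of steps S21-S23 (the first difference compares the saturated pixel-plus-left-neighbor sum against the pixel-plus-right-neighbor sum, the second difference compares the saturated three-pixel sums formed with the two neighbors above and the two below, and the threshold value is left open by the patent), the binarization can be sketched as:

```python
import numpy as np

def binarize(img, threshold=30):
    """Steps S21-S23 under one reading of the text; `threshold` is an
    illustrative choice for the patent's unspecified preset value."""
    h, w = img.shape
    p = img.astype(np.int32)
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(2, h - 2):
        for x in range(1, w - 1):
            # Two-pixel horizontal sums, saturated at 255 (step S21).
            left = min(p[y, x] + p[y, x - 1], 255)
            right = min(p[y, x] + p[y, x + 1], 255)
            # Three-pixel vertical sums, saturated at 255 (step S22).
            up = min(p[y, x] + p[y - 1, x] + p[y - 2, x], 255)
            down = min(p[y, x] + p[y + 1, x] + p[y + 2, x], 255)
            # Step S23: mark the pixel if either difference is large.
            if left - right > threshold or up - down > threshold:
                out[y, x] = 1
    return out
```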
The boundary merging module 3 is connected to the binarization processing module 2 and performs gradient-based boundary merging on the binarized image to obtain the recommended vehicle candidate region.
As shown in fig. 4, the boundary merging module 3 performs the following operations:
Step S31: record every horizontal line segment in the binarized image.
Specifically, horizontal line segments are extracted from the binarized image, and for each segment its row number in the vertical direction and its start and end points in the horizontal direction are recorded.
Step S32: merge horizontal line segments that are adjacent or one row apart until they form a single segment. Merging means moving the upper horizontal segment down onto the lower one and superimposing the lengths of the two segments.
Specifically, the relationships between neighboring segments are examined from bottom to top in the vertical direction. An upper segment one or more rows above is moved down onto the segment below it, so that the two segments may coincide. When an upper and a lower segment are merged, the length of the result is the superposition of the lengths of the two segments after the move. The permissible number of rows between the upper and lower segments is determined by the specific application.
For example, as shown in fig. 5, segment 2 and segment 3 are merged first, giving a segment of length L1; the merged segment is then merged with segment 1, giving a final segment of length L2.
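The recording and merging of steps S31-S32 can be sketched with segments stored as `(row, start, end)` tuples (rows growing downward). The representation, the `max_gap` parameter standing in for "adjacent or one row apart", and the choice of taking the superimposed length as the union of the two x-extents are all illustrative assumptions.

```python
def merge_segments(segments, max_gap=1):
    """Steps S31-S32 (sketch): drop upper horizontal segments down onto
    lower ones and superimpose their lengths."""
    # Work from the lowest segment upward (largest row number first).
    segs = sorted(segments, key=lambda s: s[0], reverse=True)
    clusters = []                    # [bottom_row, top_row, start, end]
    for row, start, end in segs:
        for c in clusters:
            near = 0 <= c[1] - row <= max_gap        # close to cluster top
            overlap = start <= c[3] and c[2] <= end  # x-extents intersect
            if near and overlap:
                c[1] = row                           # cluster grows upward
                c[2], c[3] = min(c[2], start), max(c[3], end)
                break
        else:
            clusters.append([row, row, start, end])
    # Each cluster yields one bottom-edge candidate line at its bottom row.
    return [(c[0], c[2], c[3]) for c in clusters]
```

Each returned `(row, start, end)` would then anchor, per step S33, a square candidate region whose bottom edge is the segment and whose side length is `end - start`.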
Step S33: select, as the recommended vehicle candidate region, the square region whose bottom edge is the bottom-edge candidate line and whose side length equals the length of that line.
After the vehicle candidate region has been obtained, it is output to a classifier module, which accurately determines the position of the vehicle in the image.
As shown in fig. 7, the present invention further provides a server comprising the vehicle candidate region recommendation system; its structure is as described above and is not repeated here.
In summary, the vehicle candidate region recommendation method, system, and electronic device of the invention use saliency analysis to obtain an information-rich target image; by computing unequal gradients in the horizontal and vertical directions, they perform direction-biased boundary enhancement, providing biased edge material for the subsequent determination of candidate bottom edges; candidate bottom edges are then screened on the edge-enhanced binarized image, and a line-dropping mechanism further enlarges the feature range of the bottom-edge candidate region, so that the vehicle candidate region can be located more accurately. The invention thus effectively overcomes various shortcomings of the prior art and has high industrial value.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (9)

1. A vehicle candidate region recommendation method, characterized in that the method comprises the following steps:
preprocessing an input image based on saliency analysis to obtain a target image containing a vehicle image;
binarizing the target image to obtain a binarized image;
performing gradient-based boundary merging on the binarized image to obtain a recommended vehicle candidate region;
wherein preprocessing the input image based on saliency analysis to obtain the target image containing the vehicle image comprises the following steps:
traversing the input image to obtain the pixel value of each pixel point in the input image;
calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum distance sum and the minimum distance sum;
calculating the difference between the maximum distance sum and the minimum distance sum, and normalizing the distance sum of each pixel point according to the difference;
normalizing the input image to obtain a stretched image, wherein the range of the pixel values after normalization of the input image is the same as the range of the distance sums after their normalization;
calculating the saliency feature value of each pixel point from its pixel value in the stretched image and its distance sum to obtain a saliency analysis image, wherein the pixel value of each pixel point in the stretched image is used as an index into an array representing the weight of each pixel value, the value looked up in that array serves as the saliency feature value of the pixel point, and the saliency feature values of all pixel points form the saliency analysis image;
and subtracting the stretched image from the saliency analysis image to obtain the target image.
2. The vehicle candidate region recommendation method according to claim 1, characterized in that the input image is sampled Y-channel image information.
3. The vehicle candidate region recommendation method according to claim 1, characterized in that the target image is binarized through the following steps:
subtracting, for each pixel point of the target image, the sum of the pixel values of that pixel point and its right neighbor in the horizontal direction from the sum of the pixel values of that pixel point and its left neighbor, to obtain a first difference;
subtracting, for each pixel point of the target image, the sum of the pixel values of that pixel point and its two lower neighbors in the vertical direction from the sum of the pixel values of that pixel point and its two upper neighbors, to obtain a second difference;
setting the pixel value of the corresponding pixel point to 1 when at least one of the first difference and the second difference exceeds a preset threshold, and to 0 otherwise.
4. The vehicle candidate region recommendation method according to claim 1, characterized in that performing gradient-based boundary merging on the binarized image to obtain the recommended vehicle candidate region comprises the following steps:
recording every horizontal line segment in the binarized image;
merging horizontal line segments that are adjacent or one row apart until they form a single segment, wherein merging means moving the upper horizontal segment down onto the lower one and superimposing the lengths of the two segments;
and selecting, as the recommended vehicle candidate region, the square region whose bottom edge is the bottom-edge candidate line and whose side length equals the length of that line.
5. A vehicle candidate region recommendation system, characterized in that it comprises a saliency analysis module, a binarization processing module, and a boundary merging module;
the saliency analysis module preprocesses an input image based on saliency analysis to obtain a target image containing a vehicle image;
the binarization processing module binarizes the target image to obtain a binarized image;
the boundary merging module performs gradient-based boundary merging on the binarized image to obtain a recommended vehicle candidate region;
wherein the saliency analysis module performs the following operations:
traversing the input image to obtain the pixel value of each pixel point in the input image;
calculating the sum of the distances from each pixel point to all other pixel points, and recording the maximum distance sum and the minimum distance sum;
calculating the difference between the maximum distance sum and the minimum distance sum, and normalizing the distance sum of each pixel point according to the difference;
normalizing the input image to obtain a stretched image, wherein the range of the pixel values after normalization of the input image is the same as the range of the distance sums after their normalization;
calculating the saliency feature value of each pixel point from its pixel value in the stretched image and its distance sum to obtain a saliency analysis image, wherein the pixel value of each pixel point in the stretched image is used as an index into an array representing the weight of each pixel value, the value looked up in that array serves as the saliency feature value of the pixel point, and the saliency feature values of all pixel points form the saliency analysis image;
and subtracting the stretched image from the saliency analysis image to obtain the target image.
6. The vehicle candidate region recommendation system according to claim 5, characterized in that the input image is sampled Y-channel image information.
7. The vehicle candidate region recommendation system according to claim 5, characterized in that the binarization processing module performs the following operations:
subtracting, for each pixel point of the target image, the sum of the pixel values of that pixel point and its right neighbor in the horizontal direction from the sum of the pixel values of that pixel point and its left neighbor, to obtain a first difference;
subtracting, for each pixel point of the target image, the sum of the pixel values of that pixel point and its two lower neighbors in the vertical direction from the sum of the pixel values of that pixel point and its two upper neighbors, to obtain a second difference;
setting the pixel value of the corresponding pixel point to 1 when at least one of the first difference and the second difference exceeds a preset threshold, and to 0 otherwise.
8. The vehicle candidate region recommendation system according to claim 5, characterized in that the boundary merging module performs the following operations:
recording every horizontal line segment in the binarized image;
merging horizontal line segments that are adjacent or one row apart until they form a single segment, wherein merging means moving the upper horizontal segment down onto the lower one and superimposing the lengths of the two segments;
and selecting, as the recommended vehicle candidate region, the square region whose bottom edge is the bottom-edge candidate line and whose side length equals the length of that line.
9. An electronic device, characterized in that it comprises the vehicle candidate region recommendation system according to any one of claims 5-8.
CN201710153509.3A 2017-03-15 2017-03-15 Vehicle candidate area recommendation method and system and electronic equipment Active CN106951898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710153509.3A CN106951898B (en) 2017-03-15 2017-03-15 Vehicle candidate area recommendation method and system and electronic equipment


Publications (2)

Publication Number Publication Date
CN106951898A CN106951898A (en) 2017-07-14
CN106951898B true CN106951898B (en) 2021-06-04

Family

ID=59471978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710153509.3A Active CN106951898B (en) 2017-03-15 2017-03-15 Vehicle candidate area recommendation method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN106951898B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960982A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle detecting system and device based on contrast and significance analysis
CN109961420A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle checking method based on more subgraphs fusion and significance analysis
CN109960978A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle detecting system and device based on image layered technology
CN109960977B (en) * 2017-12-25 2023-11-17 大连楼兰科技股份有限公司 Saliency preprocessing method based on image layering
CN109960984A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle checking method based on contrast and significance analysis
CN109960981A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Left and right vehicle wheel boundary alignment system and device based on gradient and picture contrast
CN109960979A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle checking method based on image layered technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156882A (en) * 2011-04-14 2011-08-17 西北工业大学 Method for detecting airport target based on high-resolution remote sensing image
CN104778713A (en) * 2015-04-27 2015-07-15 清华大学深圳研究生院 Image processing method
CN105118084A (en) * 2015-09-10 2015-12-02 天津大学 Depth perception enhancement method based on significance
CN106295636A (en) * 2016-07-21 2017-01-04 重庆大学 Passageway for fire apparatus based on multiple features fusion cascade classifier vehicle checking method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102087652A (en) * 2009-12-08 2011-06-08 百度在线网络技术(北京)有限公司 Method for screening images and system thereof
US9013430B2 (en) * 2010-08-20 2015-04-21 University Of Massachusetts Hand and finger registration for control applications
JP5170226B2 (en) * 2010-12-10 2013-03-27 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN104504684B (en) * 2014-12-03 2017-05-24 小米科技有限责任公司 Edge extraction method and device
CN106203267A (en) * 2016-06-28 2016-12-07 成都之达科技有限公司 Vehicle collision avoidance method based on machine vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vijay Gaikwad et al.; "Lane Departure Identification for Advanced Driver Assistance"; IEEE Transactions on Intelligent Transportation Systems; Dec. 8, 2014; vol. 16, no. 2; pp. 910-918 *

Also Published As

Publication number Publication date
CN106951898A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
CN109034047B (en) Lane line detection method and device
CN107729818B (en) Multi-feature fusion vehicle re-identification method based on deep learning
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
Siriborvornratanakul An automatic road distress visual inspection system using an onboard in-car camera
US9269001B2 (en) Illumination invariant and robust apparatus and method for detecting and recognizing various traffic signs
US8902053B2 (en) Method and system for lane departure warning
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
CN111626277B (en) Vehicle tracking method and device based on over-station inter-modulation index analysis
JP2008046903A (en) Apparatus and method for detecting number of objects
Yang et al. A vehicle license plate recognition system based on fixed color collocation
CN105825495A (en) Object detection apparatus and object detection method
CN103093198A (en) Crowd density monitoring method and device
CN102194102A (en) Method and device for classifying a traffic sign
CN109460787A (en) IDS Framework method for building up, device and data processing equipment
CN101383005A (en) Method for separating passenger target image and background by auxiliary regular veins
CN110751012A (en) Target detection evaluation method and device, electronic equipment and storage medium
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
Bi et al. A new method of target detection based on autonomous radar and camera data fusion
Bulugu Algorithm for license plate localization and recognition for tanzania car plate numbers
CN105160324B (en) A kind of vehicle checking method based on space of components relationship
CN111009136A (en) Method, device and system for detecting vehicles with abnormal running speed on highway
CN108268866B (en) Vehicle detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant