Summary of the invention
To solve the problems of the prior art, the invention provides a vehicle detection method based on the spatial relation between vehicle components. The vehicle detection method based on component spatial relation comprises:
Obtaining a video image, determining the distance values and angle values between the license plate components and the vehicle light components of vehicle samples in the video image, and building, from the distance values and angle values in combination with an algorithm, a Gaussian mixture model representing the spatial relation between license plate components and vehicle light components;
Calibrating the video image by direct linear transformation to obtain a perspective transformation matrix;
Building an inverse projection plane and inversely projecting the information in the video image according to the perspective transformation matrix to obtain an inverse perspective image;
Determining the license plate components and the vehicle light components from the inverse perspective image according to a color conversion model combined with a gradient algorithm;
If the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than a preset value, obtaining the likelihood of the license plate components and the vehicle light components under the Gaussian mixture model; if the likelihood exceeds a likelihood threshold, determining that the license plate component and the vehicle light components belong to one vehicle, and thereby determining the number of vehicles.
Optionally, calibrating the video image by direct linear transformation to obtain the perspective transformation matrix comprises:
Selecting six calibration points of known coordinates in the world coordinate system, four of which lie in the horizontal plane while the height values of the other two are non-zero;
Obtaining the image coordinates of the six calibration points in the image coordinate system, and solving for the perspective transformation matrix between the world coordinate system and the image coordinate system by the least squares method.
Optionally, building the inverse projection plane and inversely projecting the information in the video image according to the perspective transformation matrix to obtain the inverse perspective image comprises:
Selecting grid cells on the inverse projection plane and determining the sampling pixel corresponding to each grid cell in the video image;
Converting the video image to a gray-scale image and obtaining the gray values of the sampling pixels;
Inversely projecting the gray values of the sampling pixels onto the inverse projection plane according to the perspective transformation matrix to obtain the inverse perspective image.
Optionally, determining the license plate components and the vehicle light components from the inverse perspective image according to the color conversion model combined with the gradient algorithm comprises:
Obtaining a license plate gray-scale map of the video image through the license plate color model in the color conversion model, and then obtaining the gradient map of the license plate gray-scale map;
Selecting from the gradient map sample regions equal in size to a license plate, computing the average gradient of all pixels in each sample region, determining a sample region to be a license plate region if its average gradient exceeds a preset gradient threshold, and locating the license plate components within the license plate regions by connected component labeling;
Obtaining a vehicle light gray-scale map of the video image through the vehicle light color model in the color conversion model, and then obtaining a binary image of the vehicle light gray-scale map with a binarization threshold;
Extracting the white regions of the binary image and locating the vehicle light components within them by connected component labeling.
Optionally, if the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than the preset value, obtaining the likelihood of the license plate components and the vehicle light components under the Gaussian mixture model, and, if the likelihood exceeds the likelihood threshold, determining that the license plate component and the vehicle light components belong to one vehicle and thereby determining the number of vehicles, comprises:
Obtaining the sum of the numbers of the license plate components and the vehicle light components in the inverse perspective image;
If the sum is not less than the preset value, obtaining the distance values and angle values between the license plate components and the vehicle light components in the inverse perspective image, and then determining the likelihood of those distance and angle values under the Gaussian mixture model;
If the likelihood exceeds the likelihood threshold, determining that the license plate component and the vehicle light components belong to one vehicle, and thereby determining the number of vehicles.
Optionally, the vehicle detection method based on component spatial relation further comprises:
If the likelihood is less than the likelihood threshold, the numbers of vehicle light components and license plate components are too few, and no vehicle detection result can be determined.
The technical solution provided by the invention brings the following beneficial effect:
By introducing a Gaussian mixture model into the vehicle detection process, the influence of occlusion and varying illumination conditions on the target detection algorithm can be overcome, improving the accuracy of vehicle recognition compared with the prior art.
Embodiment
To make the structure and advantages of the invention clearer, the invention is further described below with reference to the accompanying drawings.
Embodiment One
The invention provides a vehicle detection method based on component spatial relation. As shown in Figure 1, the method comprises:
11. Obtain a video image, determine the distance values and angle values between the license plate components and the vehicle light components of vehicle samples in the video image, and build, from those distance and angle values in combination with an algorithm, a Gaussian mixture model representing the spatial relation between license plate components and vehicle light components;
12. Calibrate the video image by direct linear transformation to obtain a perspective transformation matrix;
13. Build an inverse projection plane and inversely project the information in the video image according to the perspective transformation matrix to obtain an inverse perspective image;
14. Determine the license plate components and the vehicle light components from the inverse perspective image according to a color conversion model combined with a gradient algorithm;
15. If the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than a preset value, obtain the likelihood of the license plate components and the vehicle light components under the Gaussian mixture model; if the likelihood exceeds a likelihood threshold, determine that the license plate component and the vehicle light components belong to one vehicle, and thereby determine the number of vehicles.
In implementation, to address the difficulty of recognizing vehicles on the road in the prior art, the applicant proposes a vehicle detection method based on component spatial relation. The method proceeds as follows:
First, a video image containing vehicles to be detected is obtained, and the distance value (length) and angle value (angle) between the license plate component and the vehicle light components of the vehicle samples in the image are determined. From these distance and angle values, combined with the EM algorithm, a Gaussian mixture model (GMM) representing the spatial relation between license plate components and vehicle light components is built. The spatial relation between the components is illustrated in Figure 2, where the LR region denotes the left vehicle light, the RR region the right vehicle light, and the LP region the license plate; d denotes the distance between two components, and θ the angle between them.
It should be noted that the EM (Expectation-Maximization) algorithm used here is an iterative algorithm that alternates between two major steps, the E step and the M step, with each iteration increasing the likelihood function. Specifically:
(1) E-step: starting from the initialization (or the current parameter values), compute the expected value of the likelihood function, i.e. the expected membership of each sample in each mixture component;
(2) M-step: using the values obtained in the E-step, update the parameters so as to maximize the likelihood.
The Gaussian mixture model (GMM, Gaussian mixture model) quantizes a phenomenon precisely with Gaussian probability density functions (normal distribution curves), decomposing it into several models each based on such a density function. The principle and process of building a Gaussian model of an image background are as follows: the gray-level histogram of an image reflects the frequency with which each gray value occurs in the image, and can be regarded as an estimate of the image's gray-level probability density. If the target region and the background region of the image differ considerably in gray level, the histogram presents a bimodal (peak-valley) shape, with one peak corresponding to the target and the other to the central gray level of the background.
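As an illustration only, the E/M alternation described above can be sketched as a minimal EM fit of a mixture over (length, angle) feature pairs. The two-component choice, the quantile-style initialization, and all function names are assumptions of this sketch, not part of the claimed method:

```python
import numpy as np

def gauss_pdf(X, mean, cov):
    """Multivariate normal density evaluated at every row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)) / norm

def fit_gmm_em(X, k=2, iters=60):
    """Fit a k-component Gaussian mixture to rows of X
    ([distance, angle] feature pairs) with plain EM."""
    n, d = X.shape
    # quantile-style initialization along the distance axis (an assumption
    # of this sketch; any reasonable initialization would do)
    means = X[np.argsort(X[:, 0])[((np.arange(k) + 0.5) * n / k).astype(int)]]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities (expected component memberships)
        resp = np.stack([w * gauss_pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: parameter updates that maximize the expected likelihood
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs
```

On well-separated training pairs the recovered means converge to the cluster centers, and the mixture weights to the cluster proportions, as the monotone-likelihood property of EM suggests.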
Next, the video image is calibrated by direct linear transformation to obtain the perspective transformation matrix.
Direct linear transformation (DLT) is an algorithm that establishes a direct linear relation between image point coordinates and the object space coordinates of the corresponding object points. Its features are: no interior or exterior orientation elements are required, it is suitable for non-metric cameras, and it satisfies measurement tasks of medium and low precision.
Then, an inverse projection plane is built so that the information in the video image can be inversely projected with the perspective transformation matrix obtained above, yielding the inverse perspective image.
In actual shooting, the scene captured by the camera is a projection of the three-dimensional scene onto a two-dimensional space. When machine vision is used to recognize a road and the vehicles traveling on it, an inverse solving process is needed, i.e. restoring a top-down road surface image from the acquired two-dimensional image. From the result of this inverse transformation, the depth information of the road can be obtained, the road surface condition can be better represented, and more convenient reference information for vehicle travel can be provided. This is also the theoretical basis of the vehicle detection method provided by the invention.
Then, on the basis of the inverse perspective image obtained above, the specific license plate components and vehicle light components are determined from it according to the color conversion model combined with the gradient algorithm. This step is analyzed in detail below and is not elaborated here.
Finally, the numbers of license plate components and vehicle light components determined above are summed. If the sum is not less than the preset value, the likelihood of the license plate components and the vehicle light components under the Gaussian mixture model is obtained; if the likelihood exceeds the likelihood threshold, the license plate component and the vehicle light components are determined to belong to one vehicle, and the number of vehicles is thereby determined. Again, the details are analyzed below.
The invention provides a vehicle detection method based on component spatial relation, comprising: building a Gaussian mixture model; obtaining a perspective transformation matrix; building an inverse projection plane and inversely projecting the information in the video image according to the perspective transformation matrix to obtain an inverse perspective image; determining the license plate components and the vehicle light components from the inverse perspective image; and, if the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than a preset value, determining that the license plate component and the vehicle light components belong to one vehicle and then determining the number of vehicles. By introducing a Gaussian mixture model into the vehicle detection process, the influence of occlusion and varying illumination conditions on the target detection algorithm can be overcome, improving the accuracy of vehicle recognition compared with the prior art.
Optionally, calibrating the video image by direct linear transformation to obtain the perspective transformation matrix comprises:
Selecting six calibration points of known coordinates in the world coordinate system, four of which lie in the horizontal plane while the height values of the other two are non-zero;
Obtaining the image coordinates of the six calibration points in the image coordinate system, and solving for the perspective transformation matrix between the world coordinate system and the image coordinate system by the least squares method.
In implementation, the camera calibration in the above step is explained with reference to Figure 3. Camera calibration requires six calibration points of known world coordinates: four of them lie in the horizontal plane, as points P0 ~ P3 in Fig. 3, while the other two do not, i.e. their height in space is non-zero, as points P4 ~ P5 in Fig. 3.
The imaging model used here refers to the projection relation from a three-dimensional scene to the image plane. The projection of a three-dimensional scene in the objective world onto the two-dimensional image captured by the camera involves several coordinate systems:
(1) the world coordinate system, also called the true or real coordinate system, is the absolute coordinate system of the objective world;
(2) the camera coordinate system is the coordinate system with the camera as its origin;
(3) the image coordinate system is the planar coordinate system formed in the image captured by the camera;
(4) the computer image coordinate system is the coordinate system of digital images used inside a computer.
This step uses the first (world) and third (image) coordinate systems. From the image coordinates of points whose world coordinates are known, combined with the least squares method, the transformation matrix between the image coordinate system and the world coordinate system is obtained; this transformation matrix is the perspective transformation matrix and is used for the inverse projection transform in the subsequent steps.
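The six-point least-squares calibration above can be sketched as a standard DLT solve: each world/image point pair yields two linear equations in the twelve entries of a 3x4 projection matrix, and the solution (up to scale) is the smallest right singular vector of the stacked system. The synthetic camera matrix and point coordinates below are illustrative assumptions only:

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Direct linear transformation: each (world, image) point pair
    contributes two rows to the homogeneous system A p = 0 in the
    entries of the 3x4 projection matrix P; the least-squares solution
    is the last right singular vector of A."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]          # normalize the free scale

# Illustrative setup: four coplanar points (height 0) and two off-plane
# points, matching the six-point configuration described in the text.
P_true = np.array([[800.0, 0, 400, 100],
                   [0, 800.0, 300, 200],
                   [0, 0, 1.0, 5]])
world = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0),
         (0.5, 0.5, 1), (0.2, 0.8, 2)]
image = []
for Xw in world:
    x = P_true @ np.append(Xw, 1.0)
    image.append((x[0] / x[2], x[1] / x[2]))
P = dlt_calibrate(world, image)
```

With exact correspondences the recovered matrix reproduces every image point; with noisy real calibration data, the same solve gives the least-squares estimate.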
Optionally, building the inverse projection plane and inversely projecting the information in the video image according to the perspective transformation matrix to obtain the inverse perspective image comprises:
Selecting grid cells on the inverse projection plane and determining the sampling pixel corresponding to each grid cell in the video image;
Converting the video image to a gray-scale image and obtaining the gray values of the sampling pixels;
Inversely projecting the gray values of the sampling pixels onto the inverse projection plane according to the perspective transformation matrix to obtain the inverse perspective image.
In implementation, inversely projecting the information in the video image according to the perspective transformation matrix requires the following steps:
(1) On the inverse projection plane, take 5 cm x 5 cm grid cells as cell locations, and project each cell into the plane of the video image, so as to determine the image region corresponding to each cell in the video image and thereby the sampling pixels contained in that region.
(2) Convert the video image to a gray-scale image, and determine from it the gray values of the sampling pixels obtained above.
(3) Using the gray values of the sampling pixels and the perspective transformation matrix obtained earlier, inversely project the information onto the inverse projection plane, yielding the inverse perspective image. The box in Fig. 4(a) is the inverse projection plane that is built; Fig. 4(b) is inverse perspective image 1, built by inversely projecting the video image onto an inverse projection plane parallel to the X-O-Z plane of the three-dimensional space; Fig. 4(c) is inverse perspective image 2, built by inversely projecting the video image onto an inverse projection plane parallel to the Y-O-Z plane.
Through the inverse projection of this step, the video image is projected onto the inverse projection plane, so that the vehicle recognition of the subsequent steps can be carried out on this plane.
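The three grid-sampling steps above can be sketched as follows, assuming for illustration that the mapping from the ground plane to the image reduces to a known 3x3 homography H, and using nearest-neighbour sampling; the function name and test values are assumptions of this sketch:

```python
import numpy as np

def inverse_perspective(gray, H, x_range, y_range, cell=0.05):
    """Top-down resampling: each cell x cell ground-plane cell is mapped
    through the ground-plane homography H into the image, and the gray
    value of the nearest pixel is sampled for that cell."""
    xs = np.arange(*x_range, cell)
    ys = np.arange(*y_range, cell)
    out = np.zeros((len(ys), len(xs)), dtype=gray.dtype)
    h, w = gray.shape
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            u, v, s = H @ np.array([x, y, 1.0])      # homogeneous mapping
            c, r = int(round(u / s)), int(round(v / s))
            if 0 <= r < h and 0 <= c < w:            # inside the image?
                out[i, j] = gray[r, c]
    return out
```

The default 5 cm cell matches step (1) above; for a real camera, H would be derived from the perspective transformation matrix obtained in the calibration step.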
Optionally, determining the license plate components and the vehicle light components from the inverse perspective image according to the color conversion model combined with the gradient algorithm comprises:
Obtaining a license plate gray-scale map of the video image through the license plate color model in the color conversion model, and then obtaining the gradient map of the license plate gray-scale map;
Selecting from the gradient map sample regions equal in size to a license plate, computing the average gradient of all pixels in each sample region, determining a sample region to be a license plate region if its average gradient exceeds a preset gradient threshold, and locating the license plate components within the license plate regions by connected component labeling;
Obtaining a vehicle light gray-scale map of the video image through the vehicle light color model in the color conversion model, and then obtaining a binary image of the vehicle light gray-scale map with a binarization threshold;
Extracting the white regions of the binary image and locating the vehicle light components within them by connected component labeling.
In implementation, obtaining an accurate image of the license plate components requires the following steps:
The video image is transformed through the license plate color model in the preset color conversion model to obtain the license plate gray-scale map, and then the gradient map of that gray-scale map. From the gradient map, sample regions comparable to a license plate in area are selected and their average gradient values obtained, so that license plate regions can be determined with the preset gradient threshold; within the determined license plate regions, the exact license plate components are located by connected component labeling.
Fig. 5(a) is the original video image; Fig. 5(b) is the inverse perspective image obtained after inverse projection; Fig. 5(c) is the transformed license plate gray-scale map; Fig. 5(d) is the license plate gradient map; Fig. 5(e) is the average-gradient image of the license plate sample regions; Fig. 5(f) is the resulting sharp image of the license plate components.
Obtaining an accurate image of the vehicle light components requires the following steps:
The video image is transformed through the vehicle light color model in the preset color conversion model to obtain the vehicle light gray-scale map, and the binary image of that gray-scale map is then obtained by binarization. The white regions of the binary image are extracted, and the vehicle light components are located within them by connected component labeling.
Fig. 6(a) is the original video image; Fig. 6(b) is the transformed vehicle light gray-scale map; Fig. 6(d) is the binary image after threshold segmentation; Fig. 6(e) is the resulting sharp image of the vehicle light components.
Through the above steps, the license plate components and vehicle light components in the video image can be obtained accurately, laying the foundation for accurate vehicle recognition in the subsequent steps.
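The plate-window scoring described above (average gradient of a plate-sized sample region against a preset threshold) can be sketched as follows. The gradient operator, window size, and threshold value are illustrative assumptions, and the subsequent connected component labeling is omitted:

```python
import numpy as np

def horizontal_gradient(gray):
    """Absolute horizontal gray-level differences: a simple gradient map
    in which the dense character strokes of a plate give large values."""
    g = np.zeros(gray.shape, dtype=float)
    g[:, 1:] = np.abs(np.diff(gray.astype(float), axis=1))
    return g

def plate_candidates(gradient, plate_h, plate_w, thresh):
    """Slide a plate-sized window over the gradient map and keep the
    top-left corners of windows whose mean gradient exceeds thresh."""
    h, w = gradient.shape
    return [(r, c)
            for r in range(h - plate_h + 1)
            for c in range(w - plate_w + 1)
            if gradient[r:r + plate_h, c:c + plate_w].mean() > thresh]
```

A flat background region scores near zero, while a block of alternating strokes (plate-like texture) scores near the stroke contrast, so a single threshold separates the two.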
Optionally, if the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than the preset value, obtaining the likelihood of the license plate components and the vehicle light components under the Gaussian mixture model, and, if the likelihood exceeds the likelihood threshold, determining that the license plate component and the vehicle light components belong to one vehicle and thereby determining the number of vehicles, comprises:
Obtaining the sum of the numbers of the license plate components and the vehicle light components in the inverse perspective image;
If the sum is not less than the preset value, obtaining the distance values and angle values between the license plate components and the vehicle light components in the inverse perspective image, and then determining the likelihood of those distance and angle values under the Gaussian mixture model;
If the likelihood exceeds the likelihood threshold, determining that the license plate component and the vehicle light components belong to one vehicle, and thereby determining the number of vehicles.
In implementation, after the license plate components and vehicle light components have been obtained accurately, their numbers are summed. If the sum is not less than the preset value, the likelihood of the license plate components and vehicle light components under the Gaussian mixture model is obtained; if the likelihood exceeds the likelihood threshold, the license plate component and the vehicle light components are determined to belong to one vehicle, completing the judgment of one vehicle. Repeating the above steps over the video image then determines the total number of vehicles.
Through the above steps, the vehicles on the road are recognized; relative to prior art schemes, the accuracy of vehicle recognition can be improved significantly.
Optionally, the vehicle detection method based on component spatial relation further comprises:
If the likelihood is less than the likelihood threshold, the numbers of vehicle light components and license plate components are too few, and no vehicle detection result can be determined.
In implementation, after the license plate components and vehicle light components have been obtained accurately, their numbers are summed. If the sum is less than the preset value, the likelihood will be low because the components are too few, and will fall below the likelihood threshold; it then cannot be determined whether the components belong to the same vehicle, and accurate vehicle recognition cannot be achieved.
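The likelihood test above can be sketched as follows, assuming (purely to simplify the illustration) diagonal covariances so the two-dimensional density factorizes into a distance factor and an angle factor; all names and the threshold value are assumptions of this sketch:

```python
import math

def same_vehicle(distance, angle, weights, means, sigmas, thresh):
    """Mixture likelihood of a plate/light pair's (distance, angle)
    features; a value above thresh means the two components are judged
    to belong to one vehicle. Diagonal covariances are assumed, so each
    component's density is a product of two 1-D normal densities."""
    lik = 0.0
    for w, (md, ma), (sd, sa) in zip(weights, means, sigmas):
        pd = math.exp(-0.5 * ((distance - md) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
        pa = math.exp(-0.5 * ((angle - ma) / sa) ** 2) / (sa * math.sqrt(2 * math.pi))
        lik += w * pd * pa
    return lik > thresh, lik
```

A pair whose distance and angle sit near a trained mode scores far above the threshold, while an implausible pairing (e.g. a plate matched to a distant light) scores near zero and is rejected.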
The invention provides a vehicle detection method based on component spatial relation, comprising: building a Gaussian mixture model; obtaining a perspective transformation matrix; building an inverse projection plane and inversely projecting the information in the video image according to the perspective transformation matrix to obtain an inverse perspective image; determining the license plate components and the vehicle light components from the inverse perspective image; and, if the sum of the numbers of license plate components and vehicle light components in the inverse perspective image is not less than a preset value, determining that the license plate component and the vehicle light components belong to one vehicle and then determining the number of vehicles. By introducing a Gaussian mixture model into the vehicle detection process, the influence of occlusion and varying illumination conditions on the target detection algorithm can be overcome, improving the accuracy of vehicle recognition compared with the prior art.
It should be noted that the embodiment above is only an illustration of the vehicle detection method based on component spatial relation in practical application; the method can also be used in other application scenarios as actually needed, with an implementation process similar to the above embodiment, which is not repeated here.
The sequence numbers in the above embodiment are for description only and do not represent an order of assembly or use of the components.
The foregoing is merely an embodiment of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in the scope of protection of the invention.