CN102044151B - Night vehicle video detection method based on illumination visibility identification - Google Patents


Info

Publication number
CN102044151B
CN102044151B (application CN201010505792A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010505792A
Other languages
Chinese (zh)
Other versions
CN102044151A (en)
Inventor
胡宏宇
曲昭伟
李志慧
宋现敏
陈永恒
魏巍
江晟
胡金辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201010505792A priority Critical patent/CN102044151B/en
Publication of CN102044151A publication Critical patent/CN102044151A/en
Application granted granted Critical
Publication of CN102044151B publication Critical patent/CN102044151B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a night vehicle video detection method based on illumination visibility identification, which comprises the following steps: 1) collecting night traffic scene video images: a camera collects traffic scene video images, which are compressed into MPEG (Moving Picture Experts Group) format and transmitted to a computer for storage; 2) identifying the night illumination mode: determining whether the scene is in the night mode with street lamps or the night mode without street lamps; 3) carrying out vehicle detection in the night mode without street lamps or in the night mode with street lamps; 4) tracking the motion of night vehicles: using the Kalman filtering algorithm to track the matched vehicle headlamps (in the night mode without street lamps) or the whole vehicle (in the night mode with street lamps), obtaining the motion state of the vehicle and realizing continuous and fast tracking of the vehicle's motion; and 5) extracting night vehicle traffic parameters: extracting the vehicle running speed parameter with a two-dimensional reconstruction algorithm based on black-box calibration, according to the projection relationship model between image coordinates and world coordinates.

Description

Night vehicle video detection method based on illumination visibility identification
Technical Field
The invention relates to a detection method for night motor vehicles in an urban traffic management control system, in particular to a night vehicle video detection method based on illumination visibility identification.
Background
With the rapid development of urban informatization and intellectualization and the continuous update of computer image processing and communication transmission technology, video detection is widely applied to urban traffic systems, and the video detection technology can be used for realizing automatic detection, identification, tracking, parameter acquisition, behavior analysis and the like of moving objects, further realizing real-time monitoring of traffic events and traffic states, and has important significance for optimizing a traffic management control method and guaranteeing urban road traffic safety.
At present, video detection systems have been developed at home and abroad and put into practical use, but most of them perform well in the daytime and have relatively low vehicle detection accuracy in complex nighttime environments. The main difficulty of nighttime video detection is that light at night is insufficient and visibility is low; especially where there are no street lamps, the vehicle body contour is difficult to distinguish, and the road is strongly interfered with by ambient light, which increases the difficulty of nighttime vehicle detection. Chinese patent publication No. CN101382997, published on March 11, 2009, application No. 20081011067317, entitled "Method and apparatus for detecting and tracking a vehicle at night", proposes to select a suitable detection area, perform minimum-pixel processing outside the lane lines to remove the noise there, select an optimal binarization threshold, extract vehicle-lamp information through binarization and digital image processing template operations, and finally complete the extraction and tracking of the lamp information so as to detect and track vehicles at night. That patent application selects the vehicle lamps as the feature: at night without street lamps the visibility is low and the lamps are an obvious feature, so good results can be achieved; but where there are street lamps, the illumination visibility is higher, the light interference from the surrounding environment is larger, and the lamp feature is no longer obvious. In addition, when the lamps are tracked, errors occur when extracting the perimeter of the lamp contour, and judging vehicles as non-identical from this alone is inaccurate. In view of this, night detection of motor-vehicle traffic flow based on video monitoring remains a difficult problem, and research on it at home and abroad is relatively scarce. One approach relies on detection methods suitable for the daytime (such as the inter-frame difference method and the background difference method) to extract moving vehicles and then acquire traffic flow parameters; another detects vehicle headlights to recognize vehicles. These methods have difficulty adapting to the change of light brightness between nighttime environments with and without street lamps and cannot guarantee high accuracy of nighttime motor-vehicle traffic flow parameter detection. Therefore, the existing nighttime vehicle video detection methods are still far from meeting practical detection requirements.
Disclosure of Invention
The invention aims to solve the technical problems that the prior art is difficult to adapt to the light brightness change under the environment with or without a street lamp at night and cannot ensure the high precision of the night motor vehicle traffic flow parameter detection, and provides a night vehicle video detection method based on illumination visibility identification.
In order to solve the technical problems, the invention is realized by adopting the following technical scheme: the night vehicle video detection method based on illumination visibility identification comprises the following steps:
1. night traffic scene video image acquisition
Cameras are installed above the road with street lamps and the road without street lamps respectively, 8-12 meters above the road surface and aimed as nearly perpendicular to the road surface as possible; the collected traffic scene video images are compressed into MPEG format and transmitted to a computer for storage;
2. night light pattern recognition
Determining whether the mode is a night street lamp-free mode or a night street lamp mode;
3. carrying out vehicle detection in a night non-street lamp mode or in a night street lamp mode;
4. vehicle motion tracking at night
The matched vehicle headlamps (in the night mode without street lamps) or the whole vehicle (in the night mode with street lamps) are motion-tracked with the Kalman filtering algorithm to obtain the motion state of the vehicle and realize continuous and fast tracking of the vehicle's motion;
5. night vehicle traffic parameter extraction
The vehicle running speed parameter is extracted with a two-dimensional reconstruction algorithm based on black-box calibration, according to the projection relationship model between image coordinates and world coordinates.
The night illumination mode identification in the technical scheme comprises the following steps:
1. background extraction based on cluster recognition
For the acquired video image sequence, the background image of the scene is obtained with a background extraction algorithm based on cluster identification: non-overlapping stable sequences are searched on each image pixel's time series, a background subset is constructed under a constraint on the pixel value variation, and the background image is extracted from it.
2. Eigenvalue selection
Selecting the brightness I = (R + G + B)/3 of the HSI color space to indirectly express the illumination visibility, marking a rectangular region of interest at the lane position in the background image, and taking the standard deviation and mean of the brightness in the region of interest as the feature indexes for illumination visibility evaluation, where R, G and B are the red, green and blue channel values of the image.
3. Night illumination mode classification model establishment
1) Video sequences of the two kinds of night traffic scenes, with street lamps and without street lamps, are collected in advance over different time periods; the standard deviation and mean of the brightness I of the HSI color space in the selected region of interest are extracted for each; and the collected illumination information index data samples are trained offline with an SVM to construct an SVM scene recognition model based on illumination visibility.
2) The illumination information index data of the traffic scene to be detected is input into the scene recognition model based on illumination visibility obtained by offline learning, and the scene is classified into the street-lamp illumination mode or the no-street-lamp illumination mode.
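The following is a minimal sketch of this recognition step, assuming NumPy and scikit-learn are available; the region of interest, the RBF kernel and the synthetic training images are illustrative and are not taken from the patent.

```python
import numpy as np
from sklearn.svm import SVC

def illumination_features(background_rgb, roi):
    """Mean and std of HSI brightness I = (R+G+B)/3 inside a lane region of interest."""
    x, y, w, h = roi                                   # hypothetical ROI marked on the lane
    patch = background_rgb[y:y+h, x:x+w].astype(np.float64)
    intensity = patch.mean(axis=2)                     # (R+G+B)/3 per pixel
    return [intensity.mean(), intensity.std()]

def train_illumination_model(backgrounds, labels, roi):
    """Offline SVM training on illumination index samples (1 = street lamps, 0 = none)."""
    features = [illumination_features(bg, roi) for bg in backgrounds]
    return SVC(kernel="rbf").fit(features, labels)

if __name__ == "__main__":
    roi = (40, 60, 120, 80)                            # illustrative lane ROI (x, y, w, h)
    rng = np.random.default_rng(0)
    bright = [rng.integers(80, 160, (240, 320, 3), dtype=np.uint8) for _ in range(10)]
    dark = [rng.integers(0, 40, (240, 320, 3), dtype=np.uint8) for _ in range(10)]
    model = train_illumination_model(bright + dark, [1] * 10 + [0] * 10, roi)
    scene = rng.integers(90, 150, (240, 320, 3), dtype=np.uint8)
    print("street-lamp mode" if model.predict([illumination_features(scene, roi)])[0] else "no-street-lamp mode")
```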
The vehicle detection in the night street lamp-free mode in the technical scheme comprises the following steps:
1. headlight extraction by background difference method and binary processing
The salient features of the vehicle headlights are retained and the influence of other light sources is eliminated, so that running vehicles are detected from their headlight characteristics: image segmentation is carried out by the background difference method, a suitable threshold is selected for binarization, and the headlights are extracted.
2. Detection precision is improved by applying morphological filtering and mathematical morphology processing
To extract the headlights as completely as possible, the binary image is processed with morphological filtering, removing small noise points and reducing their influence;
3. based on an 8-connected region method, a region growing algorithm based on template scanning is adopted to obtain a correct moving target and improve the precision of matching and tracking;
4. vehicle detection in a no-road-light mode using vehicle headlamp matching functions
For each headlamp in the detection area, the best match is selected according to the matching distance criterion: the headlamp that minimizes the matching function is found, and the matched pair is judged to belong to the same vehicle. The specific matching function is:
$$|A_i - A_j| \le \varepsilon, \qquad |Y_i - Y_j| \le \phi$$
(a third condition, given in the original only as an image, constrains the horizontal distance $|X_i - X_j|$ between the two headlamps)
where: A is the area of the headlight region, Y is its vertical coordinate in the camera image, X is its horizontal coordinate in the camera image, and ε, φ, γ together with the threshold in the third condition are constraint thresholds set in advance;
the horizontal distance between the two matched headlights of each vehicle and the mean duty ratio of the headlight regions are used as classification features; reasonable classification thresholds are set from offline vehicle feature collection, the vehicle type is judged online, and vehicles are divided into large, medium and small.
The method based on 8 connected regions in the technical scheme adopts a region growing algorithm based on template scanning to obtain a correct moving target and improve the matching and tracking precision, and comprises the following steps:
1. an n × n scanning template is established, the foreground image is divided into p × q n × n sub-regions, and a p × q foreground region marking matrix M is established.
2. The p × q sub-regions of the image are scanned one by one, from left to right and top to bottom; when the number of foreground points in the sub-region at row i, column j exceeds a preset threshold (given in the original as an image), M(i, j) = 1, otherwise M(i, j) = 0. Connected regions are then searched in the marking matrix M, from left to right and top to bottom.
3. When M(i, j) = 0, the next point is examined in sequence; when M(i, j) = 1, that point is taken as a seed.
4. For the seed point M(i, j), set M(i, j) = 0 and search its connected region by the following rule: examine the eight points of its neighbourhood in turn, starting from the upper-left one; if a neighbour equals 1, take it as a new seed point and repeat step 4, otherwise examine the next neighbour; continue until the neighbourhood contains no more connected points, take the connected area found as a target region, and return to step 3, as sketched below.
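A minimal sketch of this template-scanning labelling in Python with NumPy, assuming a binary foreground image; the template size n and the block occupancy threshold are illustrative, since the patent gives the threshold only as an image, and the growing is written iteratively with a stack rather than recursively.

```python
import numpy as np

def label_regions(foreground, n=8, occupancy=0.5):
    """Mark n x n blocks that contain enough foreground points, then group
    marked blocks into 8-connected regions by seed growing."""
    p, q = foreground.shape[0] // n, foreground.shape[1] // n
    M = np.zeros((p, q), dtype=np.uint8)
    for i in range(p):                       # scan sub-regions left-to-right, top-to-bottom
        for j in range(q):
            block = foreground[i*n:(i+1)*n, j*n:(j+1)*n]
            M[i, j] = 1 if block.sum() > occupancy * n * n else 0

    regions = []
    for i in range(p):                       # search connected regions in M
        for j in range(q):
            if M[i, j] != 1:
                continue
            region, stack = [], [(i, j)]
            M[i, j] = 0
            while stack:                     # grow from the seed over its 8-neighbourhood
                ci, cj = stack.pop()
                region.append((ci, cj))
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = ci + di, cj + dj
                        if 0 <= ni < p and 0 <= nj < q and M[ni, nj] == 1:
                            M[ni, nj] = 0
                            stack.append((ni, nj))
            regions.append(region)
    return regions                           # each region is a list of (row, col) block indices

if __name__ == "__main__":
    img = np.zeros((64, 64), dtype=np.uint8)
    img[8:24, 8:24] = 1                      # one synthetic bright blob
    print(len(label_regions(img)), "region(s) found")
```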
The vehicle detection under the night streetlight mode in the technical scheme comprises the following steps:
1. eliminating influencing factors
The vehicle body contour is extracted as the feature value; the reflection intensity of the headlight beams projected onto the road surface is calculated with a Retinex algorithm, and this influence is then eliminated.
2. Vehicle detection
And a template scanning-based region growing algorithm is adopted to realize multi-region segmentation of the foreground image, and a Kalman filtering-based model is adopted to realize moving vehicle detection in a streetlight mode.
3. Vehicle type judgment based on moment characteristics
Vehicle types in the street-lamp mode are judged with a target feature expression based on moment features: a training sample library covering multiple vehicle types is built, eccentric moment features are extracted, an SVM (support vector machine) based vehicle type classification model is constructed, the vehicle features of the traffic scene to be detected are input into the vehicle type recognition model obtained by offline learning, and the vehicle type is identified online as large, medium or small.
The method for judging the vehicle type based on the moment characteristics comprises the following steps:
1. definition of eccentricity
Assume a spatially discretized image f(x, y) of size M × N; its (p, q)-order moment is defined as
$$m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} i^{p} j^{q} f(i, j)$$
For the segmented binary image, the pixel value f(i, j) of every pixel in the foreground target region R is 1, so the (p, q)-order moment over R becomes
$$m_{pq} = \sum_{i, j \in R} i^{p} j^{q}$$
From this formula $m_{00}$, $m_{01}$ and $m_{10}$ can be calculated, where $m_{00}$ is the number of pixels in region R, i.e. its area, and $m_{10}$, $m_{01}$ are the first-order moments; the coordinates $(x_c, y_c)$ of the centre of gravity C of region R can then be expressed as
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{1}^{n} y_i}{n}$$
where n is the number of pixel points in region R.
According to the connectivity of the foreground region, the contour points of the region are ordered counterclockwise and written as a vector $P = (p_1, p_2, p_3 \dots p_m)$. The distance from each element $p_i$ of this vector to the region's centre of gravity C is defined as the eccentric moment $d_i$ of $p_i$; the distances of all elements of $P = (p_1, p_2, p_3 \dots p_m)$ to the centre of gravity form an eccentricity sequence, which is defined as the eccentric moment vector of the foreground target region:
$$D = (d_1, d_2, d_3 \dots d_m)^{T}, \qquad d_i = \mathrm{dist}(C, p_i), \ \forall i \in (1 \dots m).$$
2. eccentricity moment vector normalization
From the original vector $D = (d_1, d_2, d_3 \dots d_m)^{T}$, a fixed number of eccentric moment elements that reflect the overall contour of the object are extracted at a regular interval to build a new vector, defined as the optimized eccentric moment vector
$$D' = (d_1, d_2, d_3 \dots d_k)^{T}, \qquad D'(i) = D\!\left[i \cdot \frac{m}{k}\right], \ \forall i \in (1 \dots k), \ k \in \mathbb{N}$$
where $\left[i \cdot \frac{m}{k}\right]$ denotes the integer less than and closest to $i \cdot \frac{m}{k}$; this formula unifies the eccentric moment vectors of different targets into vectors of the same dimension k.
To make the elements of each dimension of the eccentric moment vector comparable, the normalized eccentric moment vector is defined as
$$D'' = (d_1, d_2, d_3 \dots d_k)^{T}, \qquad D''(i) = \frac{D'(i)}{\sum_{1}^{k} D'(i)}, \ \forall i \in (1 \dots k), \ k \in \mathbb{N}$$
in the formula: d' (i) represents the ratio of the feature of the eccentricity of the ith dimension of the target to the sum of the eccentricities of the dimensions;
calculating to obtain three basic characteristics of a target eccentric moment vector mean value, an eccentric moment vector dispersion degree and a maximum and minimum eccentric moment ratio;
the mean value of the eccentricity vectors can be expressed as:
<math> <mrow> <msub> <mi>M</mi> <mn>1</mn> </msub> <mo>=</mo> <mover> <mi>D</mi> <mo>&OverBar;</mo> </mover> <mo>=</mo> <mfrac> <mn>1</mn> <mi>K</mi> </mfrac> <msubsup> <mi>&Sigma;</mi> <mn>1</mn> <mi>K</mi> </msubsup> <msup> <mi>D</mi> <mrow> <mo>&prime;</mo> <mo>&prime;</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </math>
the dispersion of the eccentricity vector can be expressed as:
<math> <mrow> <msub> <mi>M</mi> <mn>2</mn> </msub> <mo>=</mo> <mfrac> <mn>1</mn> <mi>K</mi> </mfrac> <msubsup> <mi>&Sigma;</mi> <mn>1</mn> <mi>K</mi> </msubsup> <msup> <mrow> <mo>(</mo> <msup> <mi>D</mi> <mrow> <mo>&prime;</mo> <mo>&prime;</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>-</mo> <mover> <mi>D</mi> <mo>&OverBar;</mo> </mover> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </math>
the ratio of the maximum minimum eccentric moments can be expressed as:
<math> <mrow> <msub> <mi>M</mi> <mn>3</mn> </msub> <mo>=</mo> <mfrac> <mrow> <mi>MAX</mi> <mrow> <mo>(</mo> <msup> <mi>D</mi> <mrow> <mo>&prime;</mo> <mo>&prime;</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </mrow> <mrow> <mi>MIN</mi> <mrow> <mo>(</mo> <msup> <mi>D</mi> <mrow> <mo>&prime;</mo> <mo>&prime;</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </mrow> </mfrac> </mrow> </math>
in the formula: MAX (D '(i)), MIN (D' (i)) represent the maximum and minimum element values, respectively, in the normalized vector of eccentricity, in M1,M2,M3As the final expression of the morphological feature of interest.
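A possible sketch of this feature computation, assuming a binary mask of the foreground target and using OpenCV to obtain the ordered contour; the sample dimension k and the synthetic blob are illustrative.

```python
import cv2
import numpy as np

def eccentric_moment_features(mask, k=32):
    """Compute (M1, M2, M3) from the normalized eccentric moment vector of a binary region."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()                       # centre of gravity (m10/m00, m01/m00)
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)   # ordered contour points
    d = np.hypot(contour[:, 0] - xc, contour[:, 1] - yc)          # eccentric moments d_i
    idx = (np.arange(1, k + 1) * len(d)) // k - 1                 # resample to k elements
    d_norm = d[idx] / d[idx].sum()                                # normalized eccentric moment vector
    m1 = d_norm.mean()
    m2 = ((d_norm - m1) ** 2).mean()
    m3 = d_norm.max() / d_norm.min()
    return m1, m2, m3

if __name__ == "__main__":
    mask = np.zeros((80, 120), dtype=np.uint8)
    cv2.rectangle(mask, (30, 20), (90, 60), 1, -1)                # synthetic vehicle-like blob
    print(eccentric_moment_features(mask))
```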
The night vehicle motion tracking in the technical scheme comprises the following steps:
1. expression of characteristics
Let the segmented binary image be of size M × N, with f(i, j) = 1 for every pixel in a foreground target R; the coordinates $(x_c, y_c)$ of the centre of gravity C of R can be defined as
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{1}^{n} y_i}{n}$$
where n is the number of pixels in R. To simplify the problem, the invention represents the target by its centre of gravity for motion tracking in the image coordinate system; at the same time, to improve the accuracy of target matching, a compactness feature built from the target's area and perimeter constrains its shape. The compactness S of R is defined as
$$S = \frac{A}{L^{2}}$$
where A is the area of R, i.e. the number n of pixels in R, and L is the perimeter of R, i.e. the number of boundary points of the region. The state of the moving target is described by its centre of gravity, speed, compactness and compactness change, so the state feature vector of the target at time k can be expressed as
$$X_k = \left(C_k, V_k, S_k, \nabla S_k\right).$$
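A brief sketch of these two descriptors under the same assumptions as above (binary mask, OpenCV contour for the perimeter); the helper name is illustrative.

```python
import cv2
import numpy as np

def centroid_and_compactness(mask):
    """Centre of gravity (xc, yc) and compactness S = A / L**2 of a binary region."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()                   # gravity centre from first-order moments
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    perimeter = cv2.arcLength(max(contours, key=cv2.contourArea), True)
    area = float(len(xs))                           # A = number of pixels in R
    return (xc, yc), area / perimeter ** 2

if __name__ == "__main__":
    m = np.zeros((60, 60), dtype=np.uint8)
    m[20:40, 10:50] = 1                             # synthetic foreground target
    print(centroid_and_compactness(m))
```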
2. parameter initialization
After the target is stably present in the detection area, the gravity center positions of the first two observation moments are used for determining the speed of the target, namely:
$$V_{x,0} = x_{c,1} - x_{c,0}, \qquad V_{y,0} = y_{c,1} - y_{c,0}$$
where $V_{x,0}$, $V_{y,0}$ are the initial speeds in the x and y directions, and $C_0 = (x_{c,0}, y_{c,0})$, $C_1 = (x_{c,1}, y_{c,1})$ are the target's centre-of-gravity coordinates at the first two observation times. Because the motion is relatively stable, the compactness of the target cannot change greatly between two consecutive observation times, so the initial compactness change is set to
$$\nabla S_0 = 0$$
3. state estimation
Each segmented foreground target at the current observation time is matched against the marked tracked targets: using the state feature observation of the current target and the state feature estimates of all tracked targets from the previous observation time, the best match is selected by the minimum matching distance criterion, and the target with the minimum matching distance is taken as the tracked target. The state estimation equations of a tracked target are:
$$x_{c,t}^{L} = x_{c,t-1}^{L} + V_{x,t-1}^{L} \times \Delta t + \omega$$
$$y_{c,t}^{L} = y_{c,t-1}^{L} + V_{y,t-1}^{L} \times \Delta t + \omega$$
$$S_{t}^{L} = S_{t-1}^{L} + \nabla S_{t-1}^{L} + \xi$$
where $\Delta t$ is the interval between adjacent observation times; $(x_{c,t}^{L}, y_{c,t}^{L})$ is the estimated centre-of-gravity coordinate of the L-th tracked target at time t; $(x_{c,t-1}^{L}, y_{c,t-1}^{L})$ is its centre-of-gravity coordinate at time t−1; $V_{x,t-1}^{L}$, $V_{y,t-1}^{L}$ are the x- and y-direction speeds predicted at time t−1 for the next moment once the match is established; $S_{t}^{L}$ is the compactness estimate of the tracked target at time t; $S_{t-1}^{L}$ is its compactness at time t−1; $\nabla S_{t-1}^{L}$ is the compactness change predicted at observation time t−1 for the next moment; and ω, ξ are estimation errors.
4. Feature matching and updating
For a successfully matched target, the matching error between the estimated and observed values is computed, and the target's speed $V_{x,t}^{L}$, $V_{y,t}^{L}$ and compactness change $\nabla S_{t}^{L}$ predicted at time t for time t+1 are updated:
$$V_{x,t}^{L} = \alpha V_{x,t}^{i} + (1-\alpha)\left(V_{x,t}^{i} - V_{x,t-1}^{L}\right)$$
$$V_{y,t}^{L} = \alpha V_{y,t}^{i} + (1-\alpha)\left(V_{y,t}^{i} - V_{y,t-1}^{L}\right)$$
$$\nabla S_{t}^{L} = \beta \nabla S_{t-1}^{L} + (1-\beta)\left(S_{t}^{i} - S_{t-1}^{L}\right)$$
where $V_{x,t}^{i}$, $V_{y,t}^{i}$ are the x- and y-direction speed values of the object successfully matched with the L-th tracked target at the current observation time, $S_{t}^{i}$ is that object's compactness, and α and β are constants between 0 and 1.
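A compact sketch of this predict-match-update loop, assuming each segmented target is already summarized by its centre of gravity and compactness; the blending weights α and β, the frame interval and the combined matching distance are illustrative stand-ins for the preset thresholds and the full Kalman machinery, while the update lines follow the equations above.

```python
import numpy as np

ALPHA, BETA, DT = 0.6, 0.6, 1.0          # illustrative blending weights and frame interval

class Track:
    def __init__(self, c0, c1, s0, s1):
        self.c = np.array(c1, dtype=float)           # centre of gravity (xc, yc)
        self.v = (np.array(c1) - np.array(c0)) / DT  # initial speed from first two observations
        self.s, self.ds = s1, 0.0                    # compactness and its initial change

    def predict(self):
        """State estimate for the next observation time (noise terms omitted)."""
        return self.c + self.v * DT, self.s + self.ds

    def update(self, c_obs, s_obs):
        """Blend the observation into the track as in the update equations above."""
        v_obs = (np.array(c_obs, dtype=float) - self.c) / DT
        self.v = ALPHA * v_obs + (1 - ALPHA) * (v_obs - self.v)
        self.ds = BETA * self.ds + (1 - BETA) * (s_obs - self.s)
        self.c, self.s = np.array(c_obs, dtype=float), s_obs

def match(track, detections):
    """Pick the detection with minimum distance to the track's predicted state."""
    c_pred, s_pred = track.predict()
    cost = [np.hypot(*(np.array(c) - c_pred)) + abs(s - s_pred) for c, s in detections]
    return int(np.argmin(cost))

if __name__ == "__main__":
    trk = Track((10, 50), (14, 50), 0.05, 0.05)
    dets = [((18.2, 50.1), 0.051), ((90.0, 12.0), 0.02)]
    k = match(trk, dets)
    trk.update(*dets[k])
    print("matched detection", k, "new centre", trk.c)
```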
The night vehicle traffic parameter extraction in the technical scheme comprises the following: according to the projection relation model between image coordinates and world coordinates, effective collection of the vehicle running speed parameter is realized with a two-dimensional reconstruction algorithm based on black-box calibration;
1. camera calibration
The projection relationship between image coordinates and world coordinates can be expressed as:
$$s\,m = p\,M$$
where s is a nonzero scale factor, $M = [X\ Y\ Z\ 1]^{T}$ is the homogeneous three-dimensional world coordinate, $m = [u\ v\ 1]^{T}$ is the homogeneous two-dimensional image coordinate, and p is a 3 × 4 mapping transformation matrix;
the black box calibration method only needs to solve a mapping transformation matrix p from three dimensions to two dimensions, and if a model plane is located on a plane of a world coordinate system Z equal to 0, the formula sm equal to pM is changed into the following form:
$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$
then:
$$u = \frac{p_1 X + p_2 Y + p_3}{p_7 X + p_8 Y + p_9}, \qquad v = \frac{p_4 X + p_5 Y + p_6}{p_7 X + p_8 Y + p_9}$$
changing the above equation to:
$$p_1 X + p_2 Y + p_3 - u X p_7 - u Y p_8 = u p_9$$
$$p_4 X + p_5 Y + p_6 - v X p_7 - v Y p_8 = v p_9$$
since the mapping transformation matrix p is multiplied by an arbitrary constant other than 0, the relationship between the world coordinates and the image coordinates is not affected, and therefore p is not assumed9When 1, there are n (n is more than or equal to 4) pairs (X)i,Yi),(ui,vi) The corresponding points are obtained as 2n linear equations with respect to the other elements of the p matrix, which are expressed in matrix form as follows:
$$\begin{bmatrix} X_1 & Y_1 & 1 & 0 & 0 & 0 & -u_1 X_1 & -u_1 Y_1 \\ 0 & 0 & 0 & X_1 & Y_1 & 1 & -v_1 X_1 & -v_1 Y_1 \\ \vdots & & & & & & & \vdots \\ X_n & Y_n & 1 & 0 & 0 & 0 & -u_n X_n & -u_n Y_n \\ 0 & 0 & 0 & X_n & Y_n & 1 & -v_n X_n & -v_n Y_n \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \\ p_7 \\ p_8 \end{bmatrix} = \begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_n \\ v_n \end{bmatrix}$$
the above equation is written as: AP ═ U, where a: 2n × 8, P: 8 × 1, U: 2n × 1, with at least four pairs (X)i,Yi),(ui,vi) The corresponding points can be solved by adopting a least square method: p ═ ATA)-1ATU。
2. Obtaining the position and the actual speed of the vehicle through two-dimensional reconstruction
Two-dimensional reconstruction solves for the position of the target in the actual scene given its image position, i.e. the inverse of camera calibration. After the camera has been calibrated with the black-box method to obtain the mapping transformation matrix p, the actual position coordinates of points on the plane Z = 0 are reconstructed as:
$$X_i = \frac{(p_6 p_8 - p_5 p_9)u_i + (p_2 p_9 - p_3 p_8)v_i + (p_3 p_5 - p_2 p_6)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)}$$
$$Y_i = \frac{(p_4 p_9 - p_6 p_7)u_i + (p_3 p_7 - p_1 p_9)v_i + (p_1 p_6 - p_3 p_4)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)}$$
Thus the actual space coordinates are determined from the image coordinates: the position of the vehicle in the image coordinate system is obtained through motion tracking, its position in the actual space coordinate system is obtained through two-dimensional reconstruction, and the distance between the positions at two consecutive sampling moments divided by the sampling interval gives the actual speed of the vehicle.
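Continuing the calibration sketch, the inverse mapping and the speed estimate might look as follows; the inverse is taken numerically from the 3 × 3 matrix rather than through the closed-form expressions above, which gives the same result up to scale, and the sample matrix is illustrative.

```python
import numpy as np

def image_to_world(p, u, v):
    """Reconstruct (X, Y) on the Z = 0 plane from an image point (u, v)."""
    X, Y, w = np.linalg.inv(p) @ np.array([u, v, 1.0])
    return X / w, Y / w

def speed(p, pix_prev, pix_curr, dt):
    """Actual speed from two tracked image positions and the sampling interval dt (seconds)."""
    x0, y0 = image_to_world(p, *pix_prev)
    x1, y1 = image_to_world(p, *pix_curr)
    return np.hypot(x1 - x0, y1 - y0) / dt               # metres per second

if __name__ == "__main__":
    p = np.array([[30.0, 0.0, 120.0], [0.0, -34.0, 460.0], [0.0, 0.0, 1.0]])  # illustrative matrix
    print(round(speed(p, (150, 392), (150, 358), 0.5), 2), "m/s")
```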
Compared with the prior art, the invention has the beneficial effects that:
1. the night vehicle video detection method based on illumination visibility identification is based on illumination visibility evaluation, adopts a dual-mode night vehicle detection mode, overcomes the defect of poor adaptability of the conventional night vehicle video detection technology to illumination conditions, and meets the requirement of effectively acquiring motor vehicle traffic flow parameters under the conditions of street lamps at night and no street lamps;
2. the night vehicle video detection method based on illumination visibility identification improves the function of an all-weather vehicle information acquisition system and provides technical support for intelligent and automatic management and control of an urban traffic system.
Drawings
The invention is further described with reference to the accompanying drawings in which:
FIG. 1-a is a video view of a traffic scene showing vehicles traveling on a streetlight road at night;
FIG. 1-b is a video view of a traffic scene showing a vehicle traveling on a road without street lamps at night;
FIG. 2 is a block diagram illustrating a technical process of a night vehicle video detection method based on illumination visibility recognition according to the present invention;
FIG. 3 is a block diagram illustrating a background initialization technique for a night vehicle video detection method based on illumination visibility recognition according to the present invention;
FIG. 4 is a normalized vector graph of eccentricity for a car using the night vehicle video detection method based on illumination visibility recognition according to the present invention.
Detailed Description
The invention is described in detail below with reference to the attached drawing figures:
referring to fig. 1, a road scene at night usually includes both street lamps and no street lamps. Aiming at the defect that the current night vehicle video detection technology has poor adaptability to illumination conditions, the method utilizes a mode identification method to automatically divide a scene into two illumination modes of a street lamp and a non-street lamp according to the illumination visibility of a night scene of a road to be detected, and different detection schemes are respectively adopted under the two illumination modes to extract traffic parameters such as flow, type, running speed and the like of night motor vehicle flow. Under the illumination mode with the street lamp, the brightness and the contrast of the vehicle are improved to obtain the form information of the vehicle, and the effective detection of the vehicle is realized by utilizing the motion tracking; under the lighting mode of the street lamp, the non-headlight light source is removed by using an image morphology algorithm, and the pairing of the headlights of the vehicle and the tracking of the vehicle are realized according to the relative distance between the headlights and some inherent properties. The night vehicle video detection method based on illumination visibility identification comprises the following specific contents:
1. night traffic scene video image acquisition
The camera is installed above the lane with street lamps and the lane without street lamps respectively, positioned as nearly perpendicular to the road surface as possible and 8-12 meters above it. The camera collects video images of the traffic scenes with and without street lamps; the collected video signals are compressed into MPEG format, transmitted to a computer and stored.
2. Night light pattern recognition
In order to effectively evaluate the illumination condition in the road traffic scene of the video image, a background image in the traffic scene of the video image needs to be extracted, and the illumination mode division is carried out on the scene to be detected according to the brightness information of the background image. The method realizes the acquisition of the background image in the traffic scene of the video image by using the background extraction algorithm of cluster identification. Because the illumination in the video image changes in the long-time detection process, the invention also provides a background updating improvement method based on the moving object by combining the foreground moving target area information, thereby ensuring the real-time and accurate acquisition of the illumination information in the scene to be detected.
In order to evaluate and identify the illumination visibility of the background image extracted in each traffic scene, effective illumination evaluation index parameters must be selected. The invention adopts the brightness mean I = (R + G + B)/3 of the HSI color space in the detection region of interest (generally the traffic lane) and the standard deviation of the region's brightness, where R, G and B are the red, green and blue channel values of the image, as the illumination visibility evaluation feature indexes. Several traffic scene background images collected in advance are used as training samples, an SVM (support vector machine) based illumination visibility classification model is constructed, and night illumination mode recognition based on illumination visibility evaluation is achieved.
3. Streetlight-free mode vehicle detection scheme
When the night no-street-lamp mode is identified, the method extracts the foreground image (which at this point consists of the headlamp light sources rather than the vehicle body) by the background difference method, and then removes interfering light sources with image binarization and suitable mathematical morphology processing. On this basis, a matching function is constructed from the centre coordinates and area of each headlamp region to determine which vehicle it belongs to, realizing vehicle detection in the no-street-lamp illumination mode, and the vehicle type is identified from the relative distance between the headlamps. At the same time, the matched headlamps are motion-tracked with the Kalman filtering algorithm to obtain the motion state of the vehicle, and the vehicle running speed parameter is effectively collected according to the projection relation model between image coordinates and world coordinates.
4. Streetlight mode vehicle detection scheme
When the night street-lamp illumination mode is identified, the foreground image extracted by the background difference method is the vehicle body; the headlamps are no longer especially prominent, but they and the beams they project still strongly affect the detection of the moving target. The invention estimates the scattering intensity of the headlamp beams with the Retinex algorithm and removes the headlamp light interference by thresholding, thereby improving the brightness and contrast of the vehicle. On the basis of the foreground moving region extracted with the background model, the shape information of the vehicle at night is obtained by image segmentation, realizing effective recognition of vehicles in the street-lamp mode. At the same time, the centre of gravity of the vehicle contour is tracked with the Kalman filtering algorithm to obtain the motion state of the vehicle, and the vehicle running speed parameter is effectively collected according to the projection relation model between image coordinates and world coordinates.
Referring to fig. 2, the night vehicle video detection method based on illumination visibility recognition comprises the following steps: night vehicle video image acquisition, night illumination mode identification, vehicle detection in the night no-street-lamp mode or the night street-lamp mode, vehicle motion tracking, and vehicle motion parameter extraction.
1. Night traffic scene video image acquisition
Video images are acquired for traffic scenes with street lamps and without street lamps respectively; the camera is installed above the lane, positioned as nearly vertical as possible and 8-12 meters above the road surface. The collected video signals are compressed into MPEG format, transmitted to a computer and stored.
2. Night light pattern recognition
1) Background extraction based on cluster recognition
And for the collected video image sequence, acquiring a background image in a scene by using a background extraction algorithm of cluster identification. The method utilizes the non-overlapping stable sequence on the searched image pixel time sequence to construct the background subset through the pixel value variation degree constraint, thereby realizing the effective extraction of the background image.
A candidate background sequence may be generated in three cases: by the actual background, by stopped foreground objects, or by slowly moving large objects. Therefore, acquiring the initial background requires removing the stationary sequence components generated by stopped foreground objects or slowly moving large objects. In general, for practical applications, the video training sequence used for generating the background is not very long, the probability that several stationary sequences are caused by stopped foreground objects or slowly moving large objects is very low, the probability that several stationary sequences are caused by the actual background is very high, and the gray values of the different background subsequences all stay around a stable value. Therefore, the stationary sequences found on the training sequence can be divided into different subclasses according to the similarity of the background subsequences, and the actual background is generated from the subclass containing the largest number of stationary sequences.
Since the gray value of the background stays near a stable value for a long time, the variation range of the actual background value within the candidate background value set does not exceed the maximum allowable variation δmax. Taking elements of the candidate background set ⟨s1, …, sk⟩ as centres, circular regions of radius δmax are constructed and the number of element points falling inside each region is counted. When counting the data points of each region, the elements of ⟨s1, …, sk⟩ are first sorted in ascending order, ⟨s1, …, sk⟩ → ⟨s′1, …, s′k⟩, and circular regions of radius δmax are then built around each element of ⟨s′1, …, s′k⟩ in turn, which speeds up the computation of the subset data points. The region containing the most data elements is selected as the background subset. If the background subset contains more than two elements, the initial background value can be taken as the data point closest to the centre of the background subset. If the candidate background set ⟨s1, …, sk⟩ contains fewer than two elements, or the distance between every pair of data points in the set is larger than δmax, the following is used instead: when the candidate background set ⟨s1, …, sk⟩ is empty, which may be caused by short-lasting continuous changes of the surrounding environment, weather and similar factors, background initialization uses the median of the time sequence; when ⟨s1, …, sk⟩ contains a single element, that element may be the true background value of the image training sequence and the median method can again be used, but it is also possible that in this case no method can recover the background because the training sequence is completely occupied by stationary foreground objects, and the only option is to rebuild the image training sequence. In addition, if the distance between any two element points of the candidate background set ⟨s1, …, sk⟩ is larger than δmax, the subsequence with the highest smoothness is selected from ⟨l1, …, lk⟩ as the background sequence and its median is taken as the initial background value. The smoothness of the j-th subsequence (1 ≤ j ≤ k) is defined in the original by an expression, given only as an image, in terms of the length of the j-th subsequence and the variance of the j-th subsequence.
Specific background initialization technique flow is shown in fig. 3.
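A simplified per-pixel sketch of this initialization in Python/NumPy, assuming the stable candidate values of one pixel have already been collected; δmax and the fallback to the temporal median follow the description above, while the smoothness-based branch is omitted because its exact formula is not reproduced in the text.

```python
import numpy as np

def init_background_value(candidates, history, delta_max=10.0):
    """Pick the initial background value of one pixel from its stable candidate values."""
    s = np.sort(np.asarray(candidates, dtype=float))
    if s.size == 0:                                   # no stable sequence: fall back to median
        return float(np.median(history))
    # count, for each candidate taken as centre, how many candidates fall within delta_max
    counts = np.array([np.sum(np.abs(s - c) <= delta_max) for c in s])
    if s.size < 2 or counts.max() <= 1:               # isolated candidates: fall back to median
        return float(np.median(history))
    centre = s[np.argmax(counts)]                     # densest region = background subset
    subset = s[np.abs(s - centre) <= delta_max]
    return float(subset[np.argmin(np.abs(subset - subset.mean()))])  # point closest to subset centre

if __name__ == "__main__":
    hist = np.r_[np.full(40, 92.0), [180, 185, 178]]  # mostly background gray values plus a passing car
    print(init_background_value(hist[:40], hist))
```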
The background value of the pixel is not constant due to the influence of weather and light change. Therefore, to achieve long-term effective background maintenance, the parameters of the background model need to be continuously corrected. Conventional background update algorithms only consider data information in the temporal dimension of a single pixel, but do not consider spatial information between neighboring pixels. Therefore, for the case where a moving object enters the detection region and stops, the stopped foreground object will become a part of the background after a period of background update. In order to solve the problems, the background updating improvement method based on the object layer is constructed by combining information based on the foreground target area on the basis of considering the state value of the pixel time sequence. The real-time and accurate acquisition of illumination information in a detection scene is ensured.
According to a background image obtained by a Gaussian mixture model, the effective extraction of a foreground image can be realized by combining background difference:
$$F(x, y, t) = I(x, y, t) - B(x, y, t-1) \qquad (1)$$
in the formula: b denotes a background image, and F denotes a foreground image. After the foreground image is extracted, the object area information of the moving target in the foreground image can be acquired by utilizing image processing links such as area segmentation, motion tracking and the like. Thus, a binary matrix with the same size as the image can be set as the foreground object region determination matrix:
$$M(x, y, t) = \begin{cases} 1, & \text{if } F(x, y, t) \in \text{ForegroundObject} \\ 0, & \text{if } F(x, y, t) \notin \text{ForegroundObject} \end{cases} \qquad (2)$$
where ForegroundObject denotes the foreground object regions. If an image pixel lies in one of these regions, M(x, y, t) = 1, indicating that the point is currently occupied by a foreground motion region and no background update is needed; otherwise, when M(x, y, t) = 0, the background is updated automatically by the conventional pixel-level background update algorithm.
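A short sketch of this gated update; a simple running average stands in for the conventional pixel-layer update, whose exact form the text does not specify, and the learning rate is illustrative.

```python
import numpy as np

def update_background(background, frame, foreground_mask, rate=0.05):
    """Update B only where the foreground-object determination matrix M is 0."""
    updated = (1 - rate) * background + rate * frame      # illustrative pixel-level update
    return np.where(foreground_mask == 1, background, updated)
```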
2) Eigenvalue selection
Illumination uniformity describes how illumination is distributed over a specific area: in general, brightness uniformity is highest in the daytime, lower at night with street lamps, and lowest at night without street lamps. Brightness uniformity can be captured by the variance of the region brightness: the larger the variance, the lower the uniformity, which makes it convenient to evaluate illumination uniformity and to separate the daytime and nighttime modes and the nighttime street-lamp modes. The invention selects the brightness I = (R + G + B)/3 of the HSI color space to indirectly express the illumination visibility, where R, G and B are the red, green and blue channel values of the image. To ensure accurate feature extraction, a rectangle is marked at the lane position in the background image as the region of interest, and the standard deviation and mean inside the region of interest are taken as the feature indexes of illumination visibility evaluation, i.e. the illumination visibility expression features.
3) Night illumination mode classification model establishment
Video sequences of traffic scenes in the two night modes, with street lamps and without street lamps, are collected in advance over different time periods; the standard deviation and mean of the brightness I of the HSI color space in the selected region of interest are extracted for each; and the collected illumination information index data samples are trained offline with an SVM (support vector machine) to construct an SVM scene recognition model based on illumination visibility. The illumination information index data of the traffic scene to be detected is then input into this offline-learned model, the scene's illumination visibility mode is recognized online, and it is classified as the street-lamp illumination mode or the no-street-lamp illumination mode. Night vehicle detection then proceeds with a different detection scheme in each mode.
3. Vehicle detection in the no-street-lamp mode
1) Headlight extraction by background difference method and binary processing
When no-road-lamp mode is identified, only the headlights and the tail lamps (or other decorative lamps) of the vehicles are visible on the road, and the headlights of the vehicles become the most remarkable features, so that the headlights can be kept as much as possible, the influence of other light sources is eliminated, and the running vehicles are detected by using the characteristics of the headlights. The invention carries out image segmentation by a background difference method, selects a proper threshold value to carry out binarization processing, and extracts to obtain the headlight.
2) Detection precision is improved by applying morphological filtering and mathematical morphology processing
The binary image is processed with morphological filtering to remove many small noise points and reduce their influence. To extract the headlights as accurately as possible, mathematical morphology is then applied to remove the remaining interference: even with a suitable threshold, many scattered points remain that would affect the detection accuracy, and decorative lamps under the headlights would also affect the result, possibly even causing multiple detections, so the image needs this morphological processing before detection. As a result of this processing, however, the headlight area may become larger than its actual area, which places a certain constraint on the matching algorithm used later for headlight pairing. In general the decorative lamps are small compared with the headlights and are removed by discarding lamps outside the region of interest; and because of the fixed divergence of the headlights, the extracted regions have several small protruding parts around them, which complicate region marking and can be eliminated with an opening operation.
3) Method for obtaining correct moving target based on 8-connected region and region growing algorithm based on template scanning and improving matching tracking precision
In order to obtain a correct moving target and improve the matching and tracking precision, the method is based on an 8-connected region method, and realizes multi-region segmentation of the foreground image by adopting a region growing algorithm based on template scanning. The algorithm comprises the following steps:
(1) establishing an n multiplied by n scanning template, dividing a foreground image into p multiplied by q n multiplied by n sub-regions, and establishing a p multiplied by q foreground region marking matrix M;
(2) The p × q sub-regions of the image are scanned one by one, from left to right and top to bottom; when the number of foreground points in the sub-region at row i, column j exceeds a preset threshold (given in the original as an image), M(i, j) = 1, otherwise M(i, j) = 0. Connected regions are then searched in the marking matrix M, from left to right and top to bottom;
(3) when M(i, j) = 0, the next point is examined in sequence; when M(i, j) = 1, that point is taken as a seed;
(4) for the seed point M(i, j), set M(i, j) = 0 and search its connected region by the following rule: examine the eight points of its neighbourhood in turn, starting from the upper-left one; if a neighbour equals 1, take it as a new seed point and repeat step (4), otherwise examine the next neighbour; continue until the neighbourhood contains no more connected points, take the connected region found as a target region, and return to step (3).
Through this recursive algorithm, the foreground moving regions of the whole image are obtained, and the circumscribed rectangle of each moving target region is marked with the region's upper-left and lower-right points. Each vehicle lamp is thus framed by its minimum bounding rectangle; the centre coordinates and area of each lamp rectangle are recorded, and based on these parameters the extracted lamps are matched by the following method to realize vehicle detection.
4) Vehicle detection in a no-road-light mode using vehicle headlamp matching functions
Since headlights are the most prominent feature of a vehicle in a night scene, if the camera takes a longitudinal shot, the headlights of the same vehicle (the method of the present invention assumes that each vehicle contains two headlights) should be approximately on a horizontal line and the area of the headlights is similar in size. According to the characteristics, a matching function is constructed by the correlation of three factors of the acquired longitudinal coordinate distance, the abscissa distance and the area of the headlights in the detection area of the vehicle headlight area, so that the vehicle can be effectively detected in the no-road-light mode at night. The vehicle headlight matching function is as follows:
For each headlight in the detection area, the best match is selected according to the matching-distance criterion: the headlight that minimizes the matching function is determined to belong to the same vehicle. The specific matching constraints are:
$$|A_i - A_j| \le \varepsilon, \qquad |Y_i - Y_j| \le \phi \qquad (3)$$
where A is the area of the headlight region, Y is the vertical image coordinate, and X is the horizontal image coordinate; ε, φ and γ are constraint thresholds set in advance, with γ bounding the horizontal-coordinate distance between the two headlights. Using the horizontal distance between the matched headlight pair of each vehicle and the mean duty ratio (area/perimeter²) of the headlight regions as classification features, reasonable classification thresholds are set from offline vehicle-feature collection; the vehicle type is then judged online and classified as large, medium or small.
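A rough sketch of such a pairing rule follows; the thresholds ε, φ, γ are placeholders, and treating γ as an upper bound on the horizontal separation is an assumption made for illustration only.

```python
def pair_headlights(lights, eps=200.0, phi=10.0, gamma=120.0):
    """Greedy headlight pairing by area and vertical-coordinate similarity.

    `lights` is a list of (cx, cy, area) tuples for the detected headlight
    regions.  Candidate pairs must have similar area and nearly equal vertical
    coordinates, with the horizontal separation bounded by gamma; among the
    admissible candidates the pair with the smallest combined cost is kept.
    """
    pairs, used = [], set()
    for i, (xi, yi, ai) in enumerate(lights):
        best, best_cost = None, float("inf")
        for j, (xj, yj, aj) in enumerate(lights):
            if j == i or j in used or i in used:
                continue
            if abs(ai - aj) > eps or abs(yi - yj) > phi or abs(xi - xj) > gamma:
                continue
            cost = abs(ai - aj) / eps + abs(yi - yj) / phi + abs(xi - xj) / gamma
            if cost < best_cost:
                best, best_cost = j, cost
        if best is not None:
            used.update({i, best})
            pairs.append((i, best))
    return pairs
```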
5) Motion tracking of detected vehicles
Once vehicle detection in the no-streetlight mode has been completed through headlight matching, the matched headlights are motion-tracked with a Kalman-filtering algorithm to obtain the vehicle motion state, realizing continuous, fast and stable vehicle tracking. The scheme comprises the following parts:
(1) Feature expression
Let the segmented binary image be of size M × N, with f(i, j) = 1 for every pixel in a foreground target R; then the (p, q)-th order moment of R can be expressed as:
$$m_{pq} = \sum_{(i,j)\in R} i^p j^q \qquad (4)$$
In formula (4), $m_{00}$ is the number of points in R, and $m_{10}, m_{01}$ are the first-order moments. The center-of-gravity coordinate $C = (x_c, y_c)$ of R can then be defined as:
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{i=1}^{n} y_i}{n} \qquad (5)$$
In formula (5), n is the number of pixels in R. To simplify the problem, the target is represented by its center-of-gravity point for motion tracking in the image coordinate system. At the same time, to improve the accuracy of target matching, a compactness feature is formed from the target's area and perimeter to constrain its shape. The compactness S of R is defined as:
$$S = \frac{A}{L^2} \qquad (6)$$
in formula (6): a is the area of R, namely the number n of pixel points in R, and L is the perimeter of R, namely the boundary point of the R region.
The state of a moving target is described by its center of gravity, velocity, compactness and compactness change; the state feature vector of the target at time k can be expressed as:
$$X_k = (C_k, V_k, S_k, \nabla S_k) \qquad (7)$$
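As a reference for formulas (5)-(7), a small sketch computing the centroid and compactness of a labeled binary region might look like this (NumPy assumed; the perimeter is approximated by counting boundary pixels):

```python
import numpy as np

def region_features(mask: np.ndarray):
    """Return centroid (xc, yc) and compactness S = A / L**2 of a binary region.

    The perimeter L is approximated by the number of foreground pixels that
    have at least one background pixel among their 4-neighbours.
    """
    ys, xs = np.nonzero(mask)
    n = xs.size
    xc, yc = xs.sum() / n, ys.sum() / n            # formula (5)
    padded = np.pad(mask.astype(bool), 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask.astype(bool) & ~interior
    L = np.count_nonzero(boundary)
    S = n / (L ** 2)                               # formula (6)
    return (xc, yc), S
```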
(2) motion parameter initialization
The stability of the motion tracking process depends directly on the initialization of the motion parameters, so the choice of initial values is important. The center of gravity and compactness of a segmented target can be computed directly, whereas the velocity and the compactness change need initial definitions. A target normally travels through the tracking area within a limited range of velocities, and the bound on its acceleration limits how far its position can move between two adjacent observation times. Therefore, once a target appears stably in the detection area, its velocity can be determined from the center-of-gravity positions at the first two observation times, namely:
$$V_{x,0} = x_{c,1} - x_{c,0}, \qquad V_{y,0} = y_{c,1} - y_{c,0} \qquad (8)$$
wherein: vx,0、Vy,0The initial speeds in the x and y directions are respectively, and the coordinates of the first two observation time points of the target are respectively C0=(xc,0,yc,0),C1=(xc,1,yc,1). Meanwhile, due to the relative stability of the motion, the compactness of the target in two consecutive observation moments cannot be greatly changed, so that the initial compactness change value can be assumed as:
$$\nabla S_0 = 0 \qquad (9)$$
(3) state estimation
Motion tracking of a target in a continuous video sequence amounts to continuously matching the target's state features over successive observation times. Each segmented foreground target at the current observation time is matched against the already marked tracked targets: using the observed state features of the current target and the estimated state features of all tracked targets from the previous observation time, the best match is selected by the minimum-matching-distance criterion, and the target with the smallest matching distance is taken as the tracked target. The feature estimate of a tracked target at the previous observation time is obtained by predicting from its state at that time. State prediction narrows the matching search range and thereby improves processing speed. The tracked-target state estimation equations are:
$$x^L_{c,t} = x^L_{c,t-1} + V^L_{x,t-1} \times \Delta t + \omega \qquad (10)$$
$$y^L_{c,t} = y^L_{c,t-1} + V^L_{y,t-1} \times \Delta t + \omega \qquad (11)$$
$$S^L_t = S^L_{t-1} + \nabla S^L_{t-1} + \xi \qquad (12)$$
In formulas (10), (11) and (12): Δt is the interval between adjacent observation times; $x^L_{c,t}, y^L_{c,t}$ are the estimated center-of-gravity coordinates of the L-th tracked target at time t; $x^L_{c,t-1}, y^L_{c,t-1}$ are its center-of-gravity coordinates at time t-1; $V^L_{x,t-1}, V^L_{y,t-1}$ are the x- and y-direction velocities predicted for the next moment once the match at time t-1 is established; $S^L_t$ is the compactness estimate of the tracked target at time t; $S^L_{t-1}$ is its compactness at time t-1; $\nabla S^L_{t-1}$ is the compactness change predicted at observation time t-1 for the next moment; and ω, ξ are estimation errors.
(4) Feature matching and updating
To enhance matching accuracy during target-region tracking, a strategy combining the motion features and the morphological features of the object is adopted. The center-of-gravity position and the compactness state components are matched separately, using the features of each target region at the current observation time and the feature estimates of each tracked target from the previous observation time. Following the minimum-distance matching principle, the matching cost between the estimated state-feature components of a tracked target at time t-1 and the state features of every unmatched segmented foreground target at time t is computed; if the matching cost of a feature component is minimal and below a set threshold, the corresponding feature of the target marked at time t-1 is matched successfully. When both features match successfully, the target is considered correctly tracked; when only one or neither feature matches, the failure may be caused by occlusion and occlusion-handling analysis is required.
For a successfully matched target, the matching error between the estimated and observed values is calculated, and the velocity $V^L_{x,t}, V^L_{y,t}$ and compactness change $\nabla S^L_t$ predicted at time t for time t+1 are updated:
$$V^L_{x,t} = \alpha V^i_{x,t} + (1-\alpha)\,(V^i_{x,t} - V^L_{x,t-1}) \qquad (13)$$
$$V^L_{y,t} = \alpha V^i_{y,t} + (1-\alpha)\,(V^i_{y,t} - V^L_{y,t-1}) \qquad (14)$$
$$\nabla S^L_t = \beta \nabla S^L_{t-1} + (1-\beta)\,(S^i_t - S^L_{t-1}) \qquad (15)$$
In formulas (13), (14) and (15): $V^i_{x,t}, V^i_{y,t}$ are the x- and y-direction velocities of the object matched to the L-th tracked target at the current observation time, $S^i_t$ is the compactness of that object, and α, β are constants between 0 and 1. By updating these parameters the algorithm becomes autoregressive over the following frames, and tracking of the moving target is realized iteratively.
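A compact sketch of the prediction and update cycle of formulas (10)-(15) is shown below; the dictionary-based track representation and the constants α, β, Δt are illustrative choices, not part of the patent.

```python
def predict(track, dt=1.0):
    """State prediction, formulas (10)-(12): advance the centroid by the
    velocity and the compactness by its change rate."""
    return (track["x"] + track["vx"] * dt,
            track["y"] + track["vy"] * dt,
            track["s"] + track["ds"])

def update(track, obs, dt=1.0, alpha=0.5, beta=0.5):
    """Parameter update, formulas (13)-(15), after a successful match.

    `obs` is the matched observation {"x", "y", "s"}; the observed velocity is
    first measured from the displacement, then blended with the previous
    estimate exactly as in formulas (13) and (14).
    """
    vx_obs = (obs["x"] - track["x"]) / dt
    vy_obs = (obs["y"] - track["y"]) / dt
    track["vx"] = alpha * vx_obs + (1 - alpha) * (vx_obs - track["vx"])
    track["vy"] = alpha * vy_obs + (1 - alpha) * (vy_obs - track["vy"])
    track["ds"] = beta * track["ds"] + (1 - beta) * (obs["s"] - track["s"])
    track["x"], track["y"], track["s"] = obs["x"], obs["y"], obs["s"]
    return track
```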
6) Vehicle operating parameter acquisition
According to the projection relationship model between image coordinates and world coordinates, the vehicle running-speed parameter is acquired effectively with a two-dimensional reconstruction algorithm based on black-box calibration.
(1) Camera calibration
The projection relationship between image coordinates and world coordinates can be expressed as:
$$s\,m = p\,M \qquad (16)$$
where s is a non-zero scale factor; M is the three-dimensional world homogeneous coordinate, $M = [X\ Y\ Z\ 1]^T$; m is the homogeneous coordinate of the two-dimensional image, $m = [u\ v\ 1]^T$; and p is a 3 × 4 mapping transformation matrix.
Most existing camera calibration methods require a complete calibration of the camera, which is complicated and time-consuming. The black-box calibration method only needs to solve the three-dimensional-to-two-dimensional mapping transformation matrix p, without explicitly calibrating the camera's intrinsic and extrinsic parameters, which greatly simplifies the computation. Without loss of generality, assuming the model plane lies on the plane Z = 0 of the world coordinate system, equation (16) becomes:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \qquad (17)$$
then:
$$u = \frac{p_1 X + p_2 Y + p_3}{p_7 X + p_8 Y + p_9}, \qquad v = \frac{p_4 X + p_5 Y + p_6}{p_7 X + p_8 Y + p_9} \qquad (18)$$
the transformation formula (18) is as follows:
$$p_1 X + p_2 Y + p_3 - uXp_7 - uYp_8 = up_9, \qquad p_4 X + p_5 Y + p_6 - vXp_7 - vYp_8 = vp_9 \qquad (19)$$
Since multiplying the mapping transformation matrix p by an arbitrary non-zero constant does not affect the relationship between world coordinates and image coordinates, we may assume $p_9 = 1$ without loss of generality. Given n (n ≥ 4) pairs of corresponding points $(X_i, Y_i)$, $(u_i, v_i)$, 2n linear equations in the remaining elements of the p matrix are obtained, expressed in matrix form as:
$$\begin{bmatrix} X_1 & Y_1 & 1 & 0 & 0 & 0 & -u_1 X_1 & -u_1 Y_1 \\ 0 & 0 & 0 & X_1 & Y_1 & 1 & -v_1 X_1 & -v_1 Y_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_n & Y_n & 1 & 0 & 0 & 0 & -u_n X_n & -u_n Y_n \\ 0 & 0 & 0 & X_n & Y_n & 1 & -v_n X_n & -v_n Y_n \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \\ p_7 \\ p_8 \end{bmatrix} = \begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_n \\ v_n \end{bmatrix} \qquad (20)$$
Equation (20) is written as $AP = U$, where A is 2n × 8, P is 8 × 1 and U is 2n × 1. With at least four pairs of corresponding points $(X_i, Y_i)$, $(u_i, v_i)$, the least-squares solution is $P = (A^T A)^{-1} A^T U$.
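A minimal NumPy sketch of this black-box calibration step (building the system of equation (20) with $p_9$ fixed to 1 and solving it by least squares) might be:

```python
import numpy as np

def calibrate(world_pts, image_pts):
    """Solve the 3x3 mapping matrix p (with p9 = 1) from >= 4 point pairs.

    world_pts: list of (X, Y) ground-plane coordinates (on the Z = 0 plane).
    image_pts: list of (u, v) image coordinates of the same points.
    Builds the 2n x 8 system of equation (20) and solves it by least squares.
    """
    A, U = [], []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y])
        U.extend([u, v])
    A, U = np.asarray(A, float), np.asarray(U, float)
    P, *_ = np.linalg.lstsq(A, U, rcond=None)        # P = (A^T A)^-1 A^T U
    p = np.append(P, 1.0).reshape(3, 3)              # p9 = 1
    return p
```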
(2) Obtaining the position and the actual speed of the vehicle through two-dimensional reconstruction
Two-dimensional reconstruction solves for the position of an object in the actual scene given its image position, i.e. the inverse of camera calibration. The camera is calibrated with the black-box method to obtain the mapping transformation matrix p, after which the actual position coordinates of points on the plane Z = 0 can be reconstructed. That is, with $p_1 \sim p_9$ and $(u_i, v_i)$ known in formula (18), the coordinates $(X_i, Y_i)$ corresponding to $(u_i, v_i)$ are solved for. Solving equation set (19) by matrix algebra gives:
$$X_i = \frac{(p_6 p_8 - p_5 p_9)u_i + (p_2 p_9 - p_3 p_8)v_i + (p_3 p_5 - p_2 p_6)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)}, \qquad Y_i = \frac{(p_4 p_9 - p_6 p_7)u_i + (p_3 p_7 - p_1 p_9)v_i + (p_1 p_6 - p_3 p_4)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)} \qquad (21)$$
The coordinates in actual space can thus be determined from the image coordinates. The position of the vehicle in the image coordinate system is obtained through motion tracking, and its position in the actual spatial coordinate system is then obtained through two-dimensional reconstruction. The actual speed of the vehicle is obtained by dividing the displacement between two successive sampling instants by the sampling interval.
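A corresponding reconstruction sketch is given below; it inverts the calibrated 3 × 3 mapping for points on the Z = 0 plane, which is mathematically equivalent to formula (21), and derives the speed from two successive samples. The function names and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def image_to_world(p: np.ndarray, u: float, v: float):
    """Reconstruct (X, Y) on the Z = 0 plane from image point (u, v).

    Applying the inverse of the 3x3 mapping and dehomogenising is equivalent
    to the closed-form expressions of formula (21).
    """
    X, Y, w = np.linalg.inv(p) @ np.array([u, v, 1.0])
    return X / w, Y / w

def ground_speed(p, pt_prev, pt_curr, dt):
    """Speed (world units per second) from two image positions dt apart."""
    x0, y0 = image_to_world(p, *pt_prev)
    x1, y1 = image_to_world(p, *pt_curr)
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
```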
4. Vehicle detection in streetlight mode
1) Eliminating influencing factors
When the streetlight mode is identified, illumination is sufficient and the headlights are no longer especially prominent, so the invention extracts the vehicle body contour as the feature value. However, the headlights and the beams they project still strongly disturb moving-target detection, and the beams projected onto the road surface also significantly affect the result; therefore the Retinex algorithm is used to estimate the reflection intensity of the beams, after which their influence is eliminated.
2) Vehicle detection
As in the no-streetlight detection scheme, multi-region segmentation of the foreground image is realized with the template-scanning region growing algorithm to meet real-time processing requirements, and a Kalman-filtering-based model is used to detect moving vehicles in the streetlight mode. Assuming that image coordinates and world coordinates approximately follow a homogeneous linear transformation, processing is done directly in the image coordinate system. The state of a moving object in the current frame is predicted from its historical state information, and candidate regions are matched against the predicted state, using the target's center of gravity and area as matching features. Each foreground target in the candidate area is matched against the tracked targets; the best match is selected by the matching-distance criterion, and the target with the smallest matching distance is the tracked target.
3) Vehicle type judgment based on moment features
The invention provides a target feature expression method based on moment features to realize vehicle-type judgment in the streetlight mode. Moment features are a region-internal transformation method for morphological analysis: they describe the global morphology of the region target and supply abundant geometric information about the region from different angles. From the basic moment functions, characteristic moments invariant to translation, scaling and rotation can be derived, which are unaffected by changes in the position, size and orientation of the region target. In addition, because moment features are computed statistically over the point set inside the target region, they resist noise well and can overcome influences from factors such as the target's motion state and environmental change.
(1) Definition of the eccentric moment
Assuming that there is a spatially discretized image f (x, y) of size M × N, its pq order moment is defined as:
$$m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} i^p j^q f(i, j) \qquad (22)$$
then, for the binary image after segmentation, the pixel value f (i, j) of any pixel point in the foreground target region R is 1, and the (p, q) th order moment in R has the following expression:
$$m_{pq} = \sum_{(i,j)\in R} i^p j^q \qquad (23)$$
by the formula (22), m can be calculated respectively00、m10、m01Wherein m is00Number of pixels representing region R, i.e. area, m10,m01The central moment is represented. The C-coordinate (x) of the center of gravity of the region Rc,yc) Can be expressed as:
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{i=1}^{n} y_i}{n} \qquad (24)$$
in the formula (24), n is the number of pixels in the region R.
According to the connectivity of the foreground region, the region contour point set is ordered counterclockwise and represented in vector form: $P(p_1, p_2, p_3 \ldots p_m)$. The distance of each element $p_i$ of this vector from the region center of gravity C is defined as the eccentric moment $d_i$ of $p_i$. The distances from the center of gravity of all elements of $P(p_1, p_2, p_3 \ldots p_m)$ form an eccentric-moment sequence, which is defined as the eccentric-moment vector of the foreground target region:
$$D = (d_1, d_2, d_3 \ldots d_m)^T, \qquad d_i = \mathrm{dist}(C, p_i), \ \forall i \in (1 \ldots m) \qquad (25)$$
(2) Eccentric-moment vector normalization
Because different targets have contour vectors $P(p_1, p_2, p_3 \ldots p_m)$ of different dimensions, the dimensions of their eccentric-moment vectors also differ. To compare different targets through the eccentric-moment vector feature, the dimension of each target's eccentric-moment vector must first be unified. From the original vector $D(d_1, d_2, d_3 \ldots d_m)^T$, elements are sampled at a fixed interval according to the rule in formula (26), so as to extract a fixed number of eccentric-moment elements that reflect the overall contour characteristics of the object; these form a new vector, defined as the eccentric-moment optimization vector:
$$D' = (d_1, d_2, d_3 \ldots d_k)^T, \qquad D'(i) = D\!\left[i \cdot \frac{m}{k}\right], \ \forall i \in (1 \ldots k), \ k \in \mathbb{N} \qquad (26)$$
the eccentric moment vectors of different targets can be unified into the same k-dimensional vector by the equation (26). In the formula (26), the reaction mixture is,
Figure BSA00000301355000187
value of (a) is less than and closest to
Figure BSA00000301355000188
Is an integer of (1). Meanwhile, due to differences of shooting scenes or proportional changes generated by target motion, the numerical values of the eccentric moments of the same target deviate in the motion process of the object, and the target is easily identified by mistake. Therefore, the present invention further employs a normalization method to overcome this effect. For making each dimension element in the eccentric moment vectorElements are comparable, defining the normalized vector of eccentricity:
$$D'' = (d_1, d_2, d_3 \ldots d_k)^T, \qquad D''(i) = \frac{D'(i)}{\sum_{1}^{k} D'(i)}, \ \forall i \in (1 \ldots k), \ k \in \mathbb{N} \qquad (27)$$
in equation (27), D "(i) represents a ratio of the feature of the eccentricity in the ith dimension of the target to the sum of the eccentricities in the dimensions. The normalization vector of the eccentric moment obtained by the method can overcome the influence caused by the target proportion change. FIG. 4 is a normalized vector curve of eccentricity for a car.
From formulas (25)-(27), three basic features can be calculated: the mean of the eccentric-moment vector, the dispersion of the eccentric-moment vector, and the ratio of the maximum to the minimum eccentric moment.
The mean value of the eccentricity vectors can be expressed as:
$$M_1 = \bar{D} = \frac{1}{K} \sum_{1}^{K} D''(i) \qquad (28)$$
the dispersion of the eccentricity vector can be expressed as:
$$M_2 = \frac{1}{K} \sum_{1}^{K} \left(D''(i) - \bar{D}\right)^2 \qquad (29)$$
the ratio of the maximum minimum eccentric moments can be expressed as:
$$M_3 = \frac{\mathrm{MAX}(D''(i))}{\mathrm{MIN}(D''(i))} \qquad (30)$$
in equation (29), MAX (D "(i)) and MIN (D" (i)) represent the maximum element value and the minimum element value in the normalized vector of eccentricity, respectively. Calculated feature M based on eccentric moment vector1,M2,M3Has better stability in the motion process of the regional target, and has the characteristics of translation, expansion and contraction and unchanged rotation, and M is used1,M2,M3As the final expression of the morphological feature of interest.
A multi-class vehicle training sample library is constructed and the eccentric-moment features are extracted to build an SVM-based vehicle-type classification model; the vehicle features of the traffic scene to be detected are then input into the vehicle-type recognition model obtained by offline learning, the vehicle type is recognized online, and vehicles are classified as large, medium or small.
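A sketch of the offline training / online classification loop is shown below, assuming scikit-learn is available; the feature rows, labels and SVM parameters are placeholders, not values from the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Offline: each training sample is the (M1, M2, M3) eccentric-moment feature
# vector of a labelled vehicle; labels 0/1/2 stand for small/medium/large.
X_train = np.array([[0.0156, 2.1e-5, 1.8],     # placeholder feature rows
                    [0.0156, 6.3e-5, 2.9],
                    [0.0156, 1.4e-4, 4.1]])
y_train = np.array([0, 1, 2])

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# Online: classify the eccentric-moment features of a detected vehicle.
vehicle_type = clf.predict([[0.0156, 5.9e-5, 2.7]])[0]
```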
4) Traffic parameter extraction is the same as in the no-streetlight vehicle detection scheme: based on motion tracking and the projection relationship model between image coordinates and world coordinates, the vehicle running-speed parameters are acquired effectively. The specific implementation is given in step 6) of Section 3 (vehicle detection in the no-streetlight mode), i.e. a two-dimensional reconstruction algorithm based on black-box calibration is applied according to the projection relationship model between image coordinates and world coordinates, and is not repeated here.

Claims (8)

1. A night vehicle video detection method based on illumination visibility identification is characterized by comprising the following steps:
1) night traffic scene video image acquisition
Installing camera lenses above the road with the street lamps and the road without the street lamps respectively, compressing the collected video images of the traffic scene into an MPEG format and transmitting the MPEG format to a computer for storage, wherein the camera lenses are 8-12 meters away from the road surface and are positioned at a position vertical to the road surface;
2) night light pattern recognition
Determining whether the mode is a night street lamp-free mode or a night street lamp mode;
3) carrying out vehicle detection in a night non-street lamp mode or in a night street lamp mode;
4) vehicle motion tracking at night
The matched vehicle head lamp is subjected to motion tracking by utilizing a kalman filtering algorithm, the motion state of the vehicle is obtained, and continuous and rapid vehicle motion tracking is realized;
5) night vehicle traffic parameter extraction
And extracting the vehicle running speed parameter by adopting a two-dimensional reconstruction algorithm based on black box calibration according to a projection relation model of the image coordinate and the world coordinate.
2. The night vehicle video detection method based on illumination visibility recognition according to claim 1, wherein the night illumination pattern recognition comprises the steps of:
1) background extraction based on cluster recognition
For an acquired video image sequence, acquiring a background image in a scene by using a background extraction algorithm of cluster identification, namely constructing a background subset by using a non-overlapping stable sequence on a searched image pixel time sequence and realizing the extraction of the background image through pixel value variation constraint;
2) eigenvalue selection
Selecting the brightness I of the HSI color space as (R + G + B)/3 to indirectly express the illumination visibility, marking a rectangular interested region from the lane position in the background image, and taking the standard deviation and the mean value of the interested region as the illumination visibility evaluation characteristic index, wherein: r, G and B respectively represent the red, green and blue color characteristic components of the pixel points in the image;
3) night illumination mode classification model establishment
(1) Collecting video sequences of two traffic scenes, namely a road lamp traffic scene and a non-road lamp traffic scene at night in different time periods in advance, extracting standard deviation and mean value of brightness I of HSI color space in a selected region of interest respectively, performing off-line training on collected illumination information index data samples by using an SVM (support vector machine), and constructing an SVM scene recognition model based on illumination visibility;
(2) and inputting illumination information index data of the traffic scene to be detected into a scene recognition model which is obtained by off-line learning and is based on illumination visibility, and dividing the scene recognition model into a street lamp illumination mode or a street lamp illumination-free mode.
3. The night vehicle video detection method based on illumination visibility recognition according to claim 1, wherein the night vehicle detection in the no-street-light mode comprises the steps of:
1) headlight extraction by background difference method and binary processing
The method comprises the following steps of retaining the remarkable characteristics of the vehicle headlight, eliminating the influence of other light sources, detecting a running vehicle by using the characteristics of the headlight, carrying out image segmentation by a background difference method, selecting a proper threshold value for binarization processing, and extracting to obtain the headlight;
2) detection precision is improved by applying morphological filtering and mathematical morphology processing
In order to extract the headlights to the maximum extent, the binary image is processed by using morphological filtering, so that a few small noise points are removed, and the influence is reduced;
3) based on an 8-connected region method, a region growing algorithm based on template scanning is adopted to obtain a correct moving target and improve the precision of matching and tracking;
4) vehicle detection in a no-road-light mode using vehicle headlamp matching functions
For each headlamp in the detection area, selecting the best match according to a matching distance criterion, finding the headlamp which minimizes the matching function, and determining the headlamp to be the same vehicle, wherein the specific matching function is as follows:
$$|A_i - A_j| \le \varepsilon, \qquad |Y_i - Y_j| \le \phi$$
wherein: A is the area of the headlight region, Y is the vertical image coordinate, X is the horizontal image coordinate, and ε, φ, γ are constraint thresholds set in advance, with γ bounding the horizontal-coordinate distance between the two headlights;
and (3) using the horizontal distance between the two headlights of each matched vehicle and the average duty ratio of the headlight area as classification features, setting a reasonable classification threshold according to offline vehicle feature acquisition, judging the vehicle types on line, and dividing the vehicles into large-sized vehicles, medium-sized vehicles and small-sized vehicles.
4. The night vehicle video detection method based on illumination visibility recognition according to claim 3, wherein the method based on 8-connected region adopts a region growing algorithm based on template scanning to obtain a correct moving object and improve the accuracy of matching and tracking comprises the following steps:
1) establishing an n × n scanning template, dividing the foreground image into p × q sub-regions of size n × n, and establishing a p × q foreground-region marking matrix M;
2) scanning the p × q sub-regions of the image one by one from left to right and from top to bottom; when the number of foreground points in the sub-region at row i, column j exceeds a preset count threshold, setting M(i, j) = 1, otherwise setting M(i, j) = 0, and searching the marking matrix M for connected regions from left to right and from top to bottom;
3) when M(i, j) = 0, searching the next point in sequence; when M(i, j) = 1, taking M(i, j) as a seed;
4) for the seed point M(i, j), setting M(i, j) to 0 and searching its connected region, the rule being: judging the values of the eight neighbouring points in turn; if the upper-left point is 1, taking it as a seed point and executing step 4), otherwise judging the next point, and so on, until the neighbourhood has no connected point; taking the connected region as a target region and executing step 3).
5. The night vehicle video detection method based on illumination visibility recognition according to claim 1, wherein the vehicle detection in the night streetlight mode comprises the steps of:
1) eliminating influencing factors
Extracting the vehicle body contour as a characteristic value, calculating the reflection intensity of the projection light beams of the headlights and the road surface by utilizing a Retinex algorithm, and then eliminating the influence of the reflection intensity;
2) vehicle detection
The method comprises the steps of realizing multi-region segmentation of a foreground image by adopting a region growing algorithm based on template scanning, and realizing moving vehicle detection in a streetlight mode by adopting a Kalman filtering model;
3) vehicle type judgment based on moment characteristics
The method comprises the steps of realizing judgment of vehicle types in a street lamp mode based on a target feature expression method of the moment features, namely constructing a multi-type vehicle training sample library, extracting eccentric moment features, constructing a vehicle type classification model based on an SVM (support vector machine), inputting vehicle features of a traffic scene to be detected into a vehicle type identification model obtained by off-line learning, identifying the vehicle types on line, and dividing the vehicle types into large-sized vehicles, medium-sized vehicles and small-sized vehicles.
6. The night vehicle video detection method based on illumination visibility recognition according to claim 5, wherein the judging the vehicle type based on the moment feature comprises the following steps:
1) definition of eccentricity
Assuming that there is a spatially discretized image f (x, y) of size M × N, its pq order moment is defined as:
$$m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} i^p j^q f(i, j)$$
then, for the binary image after segmentation, the pixel value f (i, j) of any pixel point in the foreground target region R is 1, and the (p, q) th order moment in R has the following expression:
$$m_{pq} = \sum_{(i,j)\in R} i^p j^q$$
$m_{00}$, $m_{01}$ and $m_{10}$ can each be calculated from the above formula, where $m_{00}$ is the number of pixels in region R, i.e. its area, and $m_{10}, m_{01}$ are the first-order moments; the center-of-gravity coordinate $C = (x_c, y_c)$ of region R can then be expressed as:
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{i=1}^{n} y_i}{n}$$
in the formula, n is the number of pixel points in the region R,
according to the connectivity of the foreground region, ordering the region contour point set counterclockwise and representing it in vector form $P(p_1, p_2, p_3 \ldots p_m)$; defining the distance of each element $p_i$ of this vector from the region center of gravity C as the eccentric moment $d_i$ of $p_i$; the distances of the elements of $P(p_1, p_2, p_3 \ldots p_m)$ from the center of gravity form an eccentric-moment sequence, defined as the eccentric-moment vector of the foreground target region: $D = (d_1, d_2, d_3 \ldots d_m)^T$, wherein $d_i = \mathrm{dist}(C, p_i), \ \forall i \in (1 \ldots m)$;
2) Eccentricity moment vector normalization
from the original vector $D(d_1, d_2, d_3 \ldots d_m)^T$, according to the rule
$$D' = (d_1, d_2, d_3 \ldots d_k)^T, \qquad D'(i) = D\!\left[i \cdot \frac{m}{k}\right], \ \forall i \in (1 \ldots k), \ k \in \mathbb{N},$$
selecting a certain interval to extract a fixed number of eccentric-moment elements that reflect the overall contour characteristics of the object and constructing a new vector, defined as the eccentric-moment optimization vector; the eccentric-moment vectors of different targets are thereby unified into the same k-dimensional vector, where $\left[i \cdot \frac{m}{k}\right]$ is the largest integer not exceeding $i \cdot \frac{m}{k}$;
to make elements in each dimension in the eccentric moment vector comparable, an eccentric moment normalization vector is defined:
$$D'' = (d_1, d_2, d_3 \ldots d_k)^T, \qquad D''(i) = \frac{D'(i)}{\sum_{1}^{k} D'(i)}, \ \forall i \in (1 \ldots k), \ k \in \mathbb{N}$$
in the formula: d' (i) represents the ratio of the feature of the eccentricity of the ith dimension of the target to the sum of the eccentricities of the dimensions;
calculating to obtain three basic characteristics of a target eccentric moment vector mean value, an eccentric moment vector dispersion degree and a maximum and minimum eccentric moment ratio;
the mean value of the eccentricity vectors can be expressed as:
$$M_1 = \bar{D} = \frac{1}{k} \sum_{1}^{k} D''(i)$$
the dispersion of the eccentricity vector can be expressed as:
$$M_2 = \frac{1}{k} \sum_{1}^{k} \left(D''(i) - \bar{D}\right)^2$$
the ratio of the maximum minimum eccentric moments can be expressed as:
$$M_3 = \frac{\mathrm{MAX}(D''(i))}{\mathrm{MIN}(D''(i))}$$
in the formula: MAX(D″(i)) and MIN(D″(i)) respectively represent the maximum and minimum element values of the eccentric-moment normalization vector; $M_1, M_2, M_3$ are taken as the final expression of the target's morphological features.
7. The night vehicle video detection method based on illumination visibility recognition according to claim 1, wherein the night vehicle motion tracking comprises the steps of:
1) expression of characteristics
Let the segmented binary image be of size M × N, with f(i, j) = 1 for any pixel in a foreground target R; the center-of-gravity coordinate $C = (x_c, y_c)$ of R can be defined as:
$$x_c = \frac{m_{10}}{m_{00}} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad y_c = \frac{m_{01}}{m_{00}} = \frac{\sum_{i=1}^{n} y_i}{n}$$
in the formula, n is the number of pixels in R; to simplify the problem, the invention represents the target by its center-of-gravity point to realize motion tracking in the image coordinate system, and, to improve the accuracy of target matching, forms a compactness feature from the target's area and perimeter to constrain the shape feature; the compactness S of R is defined as:
$$S = \frac{A}{L^2}$$
in the formula: a is the area of R, namely the number n of pixel points in R, and L is the perimeter of R, namely the boundary point of the R region; describing the state of the target by using the gravity center, the speed, the compactness and the compactness change of the moving target, wherein the state feature vector of the target at the k moment can be expressed as:
$$X_k = (C_k, V_k, S_k, \nabla S_k),$$
wherein: ck-the centre of gravity at time k; vk-velocity at time k; sk-compactness of time k;
Figure FSB00000856761700055
the closeness at that moment changes;
2) parameter initialization
After the target is stably present in the detection area, the speed of the target is determined by using the gravity center positions of the first two observation moments, namely:
$$V_{x,0} = x_{c,1} - x_{c,0}, \qquad V_{y,0} = y_{c,1} - y_{c,0}$$
wherein $V_{x,0}, V_{y,0}$ are the initial velocities in the x and y directions, and the target's center-of-gravity coordinates at the first two observation times are $C_0 = (x_{c,0}, y_{c,0})$ and $C_1 = (x_{c,1}, y_{c,1})$; meanwhile, owing to the relative stability of the motion, the compactness of the target cannot change greatly between two consecutive observation times, so the initial compactness change value is determined as:
$$\nabla S_0 = 0$$
3) state estimation
Matching each segmented foreground target in the current observation time with a marked tracked target, selecting the best matching according to the matching minimum distance criterion by using the state characteristic observation value of the current target and the estimation value of the state characteristic of all the tracked targets at the previous observation time, and finding the target with the minimum matching distance as the tracked target, wherein the tracked target state estimation equation is as follows:
$$x^L_{c,t} = x^L_{c,t-1} + V^L_{x,t-1} \times \Delta t + \omega$$
$$y^L_{c,t} = y^L_{c,t-1} + V^L_{y,t-1} \times \Delta t + \omega$$
$$S^L_t = S^L_{t-1} + \nabla S^L_{t-1} + \xi$$
in the formulas: Δt is the interval between adjacent observation times, $x^L_{c,t}, y^L_{c,t}$ are the estimated center-of-gravity coordinates of the L-th tracked target at time t, $x^L_{c,t-1}, y^L_{c,t-1}$ are its center-of-gravity coordinates at time t-1, $V^L_{x,t-1}, V^L_{y,t-1}$ are the x- and y-direction velocities predicted for the next moment once the match at time t-1 is established, $S^L_t$ is the compactness estimate of the tracked target at time t, $S^L_{t-1}$ is its compactness at time t-1, $\nabla S^L_{t-1}$ is the compactness change predicted at observation time t-1 for the next moment, and ω, ξ are estimation errors;
4) feature matching and updating
for a successfully matched target, calculating the matching error between the estimated and observed values, and updating the velocity $V^L_{x,t}, V^L_{y,t}$ and compactness change $\nabla S^L_t$ predicted at time t:
$$V^L_{x,t} = \alpha V^i_{x,t} + (1-\alpha)\,(V^i_{x,t} - V^L_{x,t-1})$$
$$V^L_{y,t} = \alpha V^i_{y,t} + (1-\alpha)\,(V^i_{y,t} - V^L_{y,t-1})$$
$$\nabla S^L_t = \beta \nabla S^L_{t-1} + (1-\beta)\,(S^i_t - S^L_{t-1})$$
in the formulas: $V^i_{x,t}, V^i_{y,t}$ are the x- and y-direction velocities of the object successfully matched with the L-th tracked target at the current observation time, $S^i_t$ is the compactness of that object, and α, β are constants between 0 and 1.
8. The night vehicle video detection method based on illumination visibility recognition according to claim 1, wherein the night vehicle traffic parameter extraction comprises the steps of: according to a projection relation model of the image coordinates and the world coordinates, effective collection of vehicle running speed parameters is achieved by adopting a two-dimensional reconstruction algorithm based on black box calibration;
1) camera calibration
The projection relationship between image coordinates and world coordinates can be expressed as:
sm=pM
in the formula: s is a non-zero scale factor, M is the three-dimensional world homogeneous coordinate, $M = [X\ Y\ Z\ 1]^T$, m is the homogeneous coordinate of the two-dimensional image, $m = [u\ v\ 1]^T$, and p is a 3 × 4 mapping transformation matrix;
the black box calibration method only needs to solve a mapping transformation matrix p from three dimensions to two dimensions, and if a model plane is located on a plane of a world coordinate system Z equal to 0, the formula sm equal to pM is changed into the following form:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$
then:
$$u = \frac{p_1 X + p_2 Y + p_3}{p_7 X + p_8 Y + p_9}, \qquad v = \frac{p_4 X + p_5 Y + p_6}{p_7 X + p_8 Y + p_9}$$
changing the above equation to:
$$p_1 X + p_2 Y + p_3 - uXp_7 - uYp_8 = up_9$$
$$p_4 X + p_5 Y + p_6 - vXp_7 - vYp_8 = vp_9$$
since multiplying the mapping transformation matrix p by an arbitrary non-zero constant does not affect the relationship between world coordinates and image coordinates, $p_9 = 1$ is assumed; given n pairs of corresponding points $(X_i, Y_i)$, $(u_i, v_i)$ with n ≥ 4, 2n linear equations in the remaining elements of the P matrix are obtained, expressed in matrix form as:
$$\begin{bmatrix} X_1 & Y_1 & 1 & 0 & 0 & 0 & -u_1 X_1 & -u_1 Y_1 \\ 0 & 0 & 0 & X_1 & Y_1 & 1 & -v_1 X_1 & -v_1 Y_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_n & Y_n & 1 & 0 & 0 & 0 & -u_n X_n & -u_n Y_n \\ 0 & 0 & 0 & X_n & Y_n & 1 & -v_n X_n & -v_n Y_n \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \\ p_7 \\ p_8 \end{bmatrix} = \begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_n \\ v_n \end{bmatrix}$$
the above equation is written as: AP ═ U, where: a is 2n × 8 dimensional matrix, P is 8 × 1 dimensional matrix, U is 2n × 1 dimensional matrix, n is number of calibration feature points, and at least four pairs (X) are usedi,Yi),(ui,vi) The corresponding points can be solved by adopting a least square method: p ═ ATA)-1ATU;
2) Obtaining the position and the actual speed of the vehicle through two-dimensional reconstruction
The two-dimensional reconstruction is to solve the position of the target object in the actual scene under the condition of the known image position of the target object, namely the inverse process of camera calibration; calibrating the camera by using a black box calibration method to obtain a mapping transformation matrix p, and further reconstructing the actual position coordinates of the points on the plane where Z is 0 to obtain:
$$X_i = \frac{(p_6 p_8 - p_5 p_9)u_i + (p_2 p_9 - p_3 p_8)v_i + (p_3 p_5 - p_2 p_6)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)}$$
$$Y_i = \frac{(p_4 p_9 - p_6 p_7)u_i + (p_3 p_7 - p_1 p_9)v_i + (p_1 p_6 - p_3 p_4)}{(p_5 p_7 - p_4 p_8)u_i + (p_1 p_8 - p_2 p_7)v_i + (p_2 p_4 - p_1 p_5)}$$
determining the coordinates of the actual space from the image coordinates; obtaining the position of the vehicle in the image coordinate system through motion tracking, and obtaining the position of the vehicle in the actual space coordinate system through two-dimensional reconstruction; and dividing the displacement between the positions at the front and rear sampling moments by the sampling interval to obtain the actual speed of the vehicle.
CN201010505792A 2010-10-14 2010-10-14 Night vehicle video detection method based on illumination visibility identification Expired - Fee Related CN102044151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010505792A CN102044151B (en) 2010-10-14 2010-10-14 Night vehicle video detection method based on illumination visibility identification

Publications (2)

Publication Number Publication Date
CN102044151A CN102044151A (en) 2011-05-04
CN102044151B true CN102044151B (en) 2012-10-17

Family

ID=43910255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010505792A Expired - Fee Related CN102044151B (en) 2010-10-14 2010-10-14 Night vehicle video detection method based on illumination visibility identification

Country Status (1)

Country Link
CN (1) CN102044151B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385753B (en) * 2011-11-17 2013-10-23 江苏大学 Illumination-classification-based adaptive image segmentation method
CN107066962B (en) * 2012-01-17 2020-08-07 超级触觉资讯处理有限公司 Enhanced contrast for object detection and characterization by optical imaging
CN102740057B (en) * 2012-04-18 2016-02-03 杭州道联电子技术有限公司 A kind of image determination method for city illumination facility and device
CN102665365A (en) * 2012-05-29 2012-09-12 广州中国科学院软件应用技术研究所 Coming vehicle video detection based streetlamp control and management system
CN102665363B (en) * 2012-05-29 2014-08-13 广州中国科学院软件应用技术研究所 Coming vehicle video detection based streetlamp control device and coming vehicle video detection based streetlamp control method
CN103246894B (en) * 2013-04-23 2016-01-13 南京信息工程大学 A kind of ground cloud atlas recognition methods solving illumination-insensitive problem
CN103366571B (en) * 2013-07-03 2016-02-24 河南中原高速公路股份有限公司 The traffic incidents detection method at night of intelligence
CN103886292B (en) * 2014-03-20 2017-02-08 杭州电子科技大学 Night vehicle target stable tracking method based on machine vision
CN103886760B (en) * 2014-04-02 2016-09-21 李涛 Real-time vehicle detecting system based on traffic video
CN103903445A (en) * 2014-04-22 2014-07-02 北京邮电大学 Vehicle queuing length detection method and system based on video
CN105719297A (en) * 2016-01-21 2016-06-29 中国科学院深圳先进技术研究院 Object cutting method and device based on video
CN105718893B (en) * 2016-01-22 2019-01-08 江苏大学 A kind of light for vehicle for night-environment is to detection method
CN105760847B (en) * 2016-03-01 2019-04-02 江苏大学 A kind of visible detection method of pair of helmet of motorcycle driver wear condition
JP6611353B2 (en) * 2016-08-01 2019-11-27 クラリオン株式会社 Image processing device, external recognition device
CN106407951B (en) * 2016-09-30 2019-08-16 西安理工大学 A kind of night front vehicles detection method based on monocular vision
CN106548636A (en) * 2016-12-12 2017-03-29 青岛亮佳美智能科技有限公司 A kind of real-time misty rain warning system
CN106778646A (en) * 2016-12-26 2017-05-31 北京智芯原动科技有限公司 Model recognizing method and device based on convolutional neural networks
CN108256386A (en) * 2016-12-28 2018-07-06 南宁市浩发科技有限公司 The vehicle detection at night method of adaptive features select
CN106778693A (en) * 2017-01-17 2017-05-31 陕西省地质环境监测总站 A kind of debris flow monitoring pre-warning method and monitoring and warning equipment based on video analysis
CN106931902B (en) * 2017-01-19 2018-11-13 浙江工业大学 Ambient light intensity self-adaptive adjusting method for digital image correlation test
CN108538051A (en) * 2017-03-03 2018-09-14 防城港市港口区思达电子科技有限公司 A kind of night movement vehicle checking method
DE102017119394A1 (en) * 2017-08-28 2019-02-28 HELLA GmbH & Co. KGaA Method for controlling at least one light module of a lighting unit of a vehicle, lighting unit, computer program product and computer-readable medium
CN107992810B (en) * 2017-11-24 2020-12-29 智车优行科技(北京)有限公司 Vehicle identification method and device, electronic equipment and storage medium
WO2019180948A1 (en) * 2018-03-23 2019-09-26 本田技研工業株式会社 Object recognition device, vehicle, and object recognition method
CN108573223B (en) * 2018-04-03 2021-11-23 同济大学 Motor train unit operation environment sensing method based on pantograph-catenary video
CN108734162B (en) * 2018-04-12 2021-02-09 上海扩博智能技术有限公司 Method, system, equipment and storage medium for identifying target in commodity image
CN108875736B (en) * 2018-06-07 2021-03-30 南昌工程学院 Water surface moving target detection method based on background prediction
CN109684996B (en) * 2018-12-22 2020-12-04 北京工业大学 Real-time vehicle access identification method based on video
CN111553181A (en) * 2019-02-12 2020-08-18 上海欧菲智能车联科技有限公司 Vehicle-mounted camera semantic recognition method, system and device
CN111695389B (en) * 2019-03-15 2023-06-20 北京四维图新科技股份有限公司 Lane line clustering method and device
CN110363989A (en) * 2019-07-11 2019-10-22 汉王科技股份有限公司 Magnitude of traffic flow detection method, device, electronic equipment and storage medium
CN110765929A (en) * 2019-10-21 2020-02-07 东软睿驰汽车技术(沈阳)有限公司 Vehicle obstacle detection method and device
CN112330981B (en) * 2020-10-16 2023-04-18 青岛博瑞斯自动化技术有限公司 Ship-shore communication management system and method based on Internet of things
CN112287861A (en) * 2020-11-05 2021-01-29 山东交通学院 Road information enhancement and driving early warning method based on night environment perception
CN112528056B (en) * 2020-11-29 2021-09-07 枞阳县中邦科技信息咨询有限公司 Double-index field data retrieval system and method
CN112990128A (en) * 2021-04-27 2021-06-18 电子科技大学 Multi-vehicle speed measuring method based on video tracking
CN113420682B (en) * 2021-06-28 2023-08-15 阿波罗智联(北京)科技有限公司 Target detection method and device in vehicle-road cooperation and road side equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1897015A (en) * 2006-05-18 2007-01-17 王海燕 Method and system for inspecting and tracting vehicle based on machine vision
US8098889B2 (en) * 2007-01-18 2012-01-17 Siemens Corporation System and method for vehicle detection and tracking
CN101308607A (en) * 2008-06-25 2008-11-19 河海大学 Moving target tracking method by multiple features integration under traffic environment based on video

Similar Documents

Publication Publication Date Title
CN102044151B (en) Night vehicle video detection method based on illumination visibility identification
CN111310574B (en) Vehicle-mounted visual real-time multi-target multi-task joint sensing method and device
CN106204572B (en) Road target depth estimation method based on scene depth mapping
Diaz-Cabrera et al. Robust real-time traffic light detection and distance estimation using a single camera
US8611585B2 (en) Clear path detection using patch approach
US9852357B2 (en) Clear path detection using an example-based approach
US8452053B2 (en) Pixel-based texture-rich clear path detection
US8670592B2 (en) Clear path detection using segmentation-based method
US8634593B2 (en) Pixel-based texture-less clear path detection
US8332134B2 (en) Three-dimensional LIDAR-based clear path detection
US8487991B2 (en) Clear path detection using a vanishing point
Lieb et al. Adaptive Road Following using Self-Supervised Learning and Reverse Optical Flow.
CN102509098B (en) Fisheye image vehicle identification method
Kühnl et al. Monocular road segmentation using slow feature analysis
EP3735675A1 (en) Image annotation
Hu et al. A multi-modal system for road detection and segmentation
US20100097457A1 (en) Clear path detection with patch smoothing approach
CN101976504B (en) Multi-vehicle video tracking method based on color space information
Nguyen et al. Compensating background for noise due to camera vibration in uncalibrated-camera-based vehicle speed measurement system
CN111340855A (en) Road moving target detection method based on track prediction
Song et al. Image-based traffic monitoring with shadow suppression
Jiang et al. Moving object detection by 3D flow field analysis
Sarlin et al. Snap: Self-supervised neural maps for visual positioning and semantic understanding
CN113221739B (en) Monocular vision-based vehicle distance measuring method
Thomas et al. Fast approach for moving vehicle localization and bounding box estimation in highway traffic videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121017

Termination date: 20151014

EXPY Termination of patent right or utility model