CN111914627A - Vehicle identification and tracking method and device - Google Patents

Vehicle identification and tracking method and device Download PDF

Info

Publication number
CN111914627A
Authority
CN
China
Prior art keywords
target
vehicle
video image
target vehicle
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010562609.3A
Other languages
Chinese (zh)
Inventor
林凡
张秋镇
陈健民
周芳华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GCI Science and Technology Co Ltd
Original Assignee
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GCI Science and Technology Co Ltd filed Critical GCI Science and Technology Co Ltd
Priority to CN202010562609.3A priority Critical patent/CN111914627A/en
Publication of CN111914627A publication Critical patent/CN111914627A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Abstract

The invention discloses a vehicle identification and tracking method and device. The vehicle identification and tracking method comprises the following steps: detecting a target vehicle from the obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image; extracting a vertex in the target vehicle image according to a vertex detection algorithm, and taking the vertex as a target feature point; and calculating the motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm. The invention can stably and accurately identify and track the target vehicle in a multi-vehicle environment.

Description

Vehicle identification and tracking method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a vehicle identification and tracking method and device.
Background
With the rapid growth of automobile ownership, cases such as vehicle theft and vehicle robbery are increasing day by day, bringing economic losses to vehicle owners. At present, stolen vehicles are recovered mainly by identifying and tracking them.
Most vehicle identification and tracking technologies proposed in recent years focus on comparing features across different vehicle images. For example, CN201810098521.3 discloses a vehicle tracking system that extracts vehicle features such as license plate numbers from collected vehicle images and compares them with vehicle images in an image library to determine whether a vehicle is the tracked vehicle. However, in practical applications, interference from a multi-vehicle environment prevents the target vehicle from being identified and tracked stably and accurately.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a vehicle identification and tracking method and device, which can stably and accurately identify and track a target vehicle in a multi-vehicle environment.
In order to solve the above technical problems, in a first aspect, an embodiment of the present invention provides a vehicle identification and tracking method, including:
detecting a target vehicle from the obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image;
extracting a vertex in the target vehicle image according to a vertex detection algorithm, and taking the vertex as a target feature point;
and calculating the motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm.
Further, before the detecting a target vehicle from the obtained frame of video image according to the vehicle detection and identification algorithm to obtain a target vehicle image, the method further includes:
and acquiring an original video image, and performing gray processing on the original video image to obtain the video image.
Further, the detecting a target vehicle from the obtained frame of video image according to a vehicle detection recognition algorithm to obtain a target vehicle image specifically includes:
traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result;
and judging whether the window coverage area is a target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
Further, the extracting a vertex in the target vehicle image according to a vertex detection algorithm, and taking the vertex as a target feature point specifically includes:
traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image;
and comparing the gray difference values of the pixel points of the window coverage area before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain the target characteristic point.
Further, the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm specifically includes:
calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method;
and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
In a second aspect, an embodiment of the present invention provides a vehicle identification and tracking apparatus, including:
the target vehicle detection module is used for detecting a target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image;
the target feature point extraction module is used for extracting a vertex in the target vehicle image according to a vertex detection algorithm and taking the vertex as a target feature point;
and the motion position calculation module is used for calculating the motion position of the target characteristic point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm.
Further, the target vehicle detection module is further configured to, before detecting a target vehicle from the acquired frame of video image according to the vehicle detection recognition algorithm to obtain a target vehicle image, acquire an original video image, and perform gray processing on the original video image to obtain the video image.
Further, the detecting a target vehicle from the obtained frame of video image according to a vehicle detection recognition algorithm to obtain a target vehicle image specifically includes:
traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result;
and judging whether the window coverage area is a target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
Further, the extracting a vertex in the target vehicle image according to a vertex detection algorithm, and taking the vertex as a target feature point specifically includes:
traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image;
and comparing the gray difference values of the pixel points of the window coverage area before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain the target characteristic point.
Further, the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm specifically includes:
calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method;
and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
The embodiment of the invention has the following beneficial effects:
the method comprises the steps of detecting a target vehicle from an obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image, extracting a vertex in the target vehicle image according to a vertex detection algorithm, taking the vertex as a target feature point, and finally calculating the motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm to finish the identification and tracking of the target vehicle. Compared with the prior art, the embodiment of the invention can eliminate the interference of other vehicles in the video image by detecting the target vehicle from the video image to obtain the target vehicle image, directly extract the target characteristic point from the target vehicle image, and calculate the motion position of the target characteristic point in the next frame of video image only based on the target characteristic point, thereby greatly reducing the calculation amount, improving the recognition and tracking efficiency and realizing the stable and accurate recognition and tracking of the target vehicle in a multi-vehicle environment.
Drawings
FIG. 1 is a flow chart illustrating a vehicle identification and tracking method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a vertex detection method according to a first embodiment of the present invention;
FIG. 3 is a schematic flow chart of a vehicle identification and tracking method according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle identification and tracking device according to a second embodiment of the invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, the step numbers in the text are only for convenience of explanation of the specific embodiments, and do not serve to limit the execution sequence of the steps. The method provided by the embodiment can be executed by the relevant server, and the server is taken as an example for explanation below.
Please refer to fig. 1-3.
As shown in fig. 1, a first embodiment provides a vehicle identification and tracking method, including steps S1 to S3:
and S1, detecting the target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image.
And S2, extracting the vertex in the target vehicle image according to the vertex detection algorithm, and taking the vertex as the target characteristic point.
And S3, calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm.
Illustratively, in step S1, when the video stream is captured by the camera, the target vehicle is detected from the acquired frame of video image according to the vehicle detection recognition algorithm, and the image of the target vehicle area is segmented out to obtain the target vehicle image.
In step S2, when the target vehicle image is obtained, the vertices in the target vehicle image are extracted according to the vertex detection algorithm and stored as the target feature points.
In step S3, after the target feature point is obtained, the motion position of the target feature point in the next frame of video image is calculated according to the target pixel instantaneous speed estimation algorithm, so as to track the target vehicle in the next frame of video image.
The target pixel instantaneous speed is the instantaneous speed of the pixel motion of a space moving object on an observation imaging plane, and the target pixel instantaneous speed estimation algorithm is a method for finding the corresponding relation between the previous frame and the current frame by using the change of the pixels in an image sequence on a time domain and the correlation between adjacent frames so as to calculate the motion information of the object between the adjacent frames. The target pixel instantaneous speed algorithm used in the present embodiment belongs to a sparse target pixel instantaneous speed estimation algorithm. The sparse target pixel instantaneous velocity estimation algorithm considers that velocity vectors of all pixel points of a plane image form a target pixel instantaneous velocity field, and when an object moves continuously, position coordinates of the pixel points on the corresponding image change, and the target pixel instantaneous velocity field also changes correspondingly.
Assume that the brightness of a point with coordinates (x, y) at time t is I(x, y, t), and that after a time interval Δt the brightness becomes I(x + Δx, y + Δy, t + Δt). When Δt tends to zero, the brightness of the point is considered unchanged, i.e.

when Δt → 0, I(x, y, t) = I(x + Δx, y + Δy, t + Δt)  (1)

Performing a Taylor expansion on formula (1) and taking the limit, the basic formula of the target pixel instantaneous speed calculation is obtained as

I_x u + I_y v + I_t = 0  (2)

In formula (2), I_x = ∂I/∂x, I_y = ∂I/∂y and I_t = ∂I/∂t are the partial derivatives of the brightness with respect to x, y and t, and u and v are the velocity components of the pixel point in the x and y directions of the target pixel instantaneous velocity field.

The motion speed of the point is obtained from u and v, from which the motion direction of the point and its position at the next moment are estimated.
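For intuition only (this is not part of the patent text), the gradients I_x, I_y and I_t of formula (2) could be estimated from two consecutive grayscale frames roughly as in the following sketch; the function name and the use of Sobel filters are assumptions made for illustration.

```python
import cv2
import numpy as np

def brightness_gradients(prev_gray, next_gray):
    """Estimate I_x, I_y, I_t used in the constraint I_x*u + I_y*v + I_t = 0 (formula (2))."""
    prev_f = prev_gray.astype(np.float32)
    next_f = next_gray.astype(np.float32)
    # Spatial partial derivatives of the brightness, approximated with Sobel filters.
    I_x = cv2.Sobel(prev_f, cv2.CV_32F, 1, 0, ksize=3)
    I_y = cv2.Sobel(prev_f, cv2.CV_32F, 0, 1, ksize=3)
    # Temporal partial derivative: brightness change between the two frames (delta t = 1 frame).
    I_t = next_f - prev_f
    return I_x, I_y, I_t
```

Formula (2) then gives one linear equation per pixel in the two unknowns u and v, which is why a neighborhood of pixels and a least-squares solution are used further below.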
According to the embodiment, a target vehicle is detected from an obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image, a vertex in the target vehicle image is extracted according to a vertex detection algorithm and is used as a target feature point, and finally the motion position of the target feature point in the next frame of video image is calculated according to a target pixel instantaneous speed estimation algorithm to finish the identification and tracking of the target vehicle. According to the embodiment, the target vehicle is detected from the video image to obtain the target vehicle image, so that the interference of other vehicles in the video image can be eliminated, the target feature point is directly extracted from the target vehicle image, the motion position of the target feature point in the next frame of video image is calculated only on the basis of the target feature point, the calculation amount can be greatly reduced, the recognition and tracking efficiency is improved, and the target vehicle can be stably and accurately recognized and tracked in a multi-vehicle environment.
In a preferred embodiment, before the detecting a target vehicle from a frame of acquired video images according to a vehicle detection recognition algorithm to obtain a target vehicle image, the method further includes: and acquiring an original video image, and performing gray level processing on the original video image to obtain a video image.
By detecting the target vehicle from the grayscale-processed original video image, i.e. the video image, the present embodiment helps to ensure that the target vehicle is detected accurately.
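A minimal preprocessing sketch, assuming OpenCV is used to read the frames (the BGR channel order is OpenCV's convention, not something specified by the patent):

```python
import cv2

def to_gray(frame_bgr):
    """Convert an original video frame (BGR, as returned by OpenCV) to the grayscale video image."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```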
In a preferred embodiment, the detecting a target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image specifically includes: traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result; and judging whether the window coverage area is the target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
Considering that the target vehicle is equally likely to appear in any region of the video image, traversing the video image with a window of fixed size, i.e. the first preset window, during target vehicle detection helps to ensure that the target vehicle is detected accurately.
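The patent does not spell out how the window's RGB values are compared with the feature pool; the sketch below assumes a mean-RGB statistic, a Euclidean distance and a fixed threshold purely for illustration, and the window size, stride and threshold values are likewise assumptions.

```python
import numpy as np

def detect_target_vehicle(frame_rgb, feature_pool, window=(64, 64), stride=16, thresh=20.0):
    """Traverse the frame with a first preset window and compare the RGB channel values of the
    window coverage area with a feature pool of reference RGB values for the target vehicle.

    frame_rgb:    H x W x 3 image.
    feature_pool: N x 3 array of reference RGB channel values (same channel order as the frame).
    Returns (x, y, w, h) of the first window judged to be the target vehicle area, or None.
    """
    h, w = frame_rgb.shape[:2]
    win_h, win_w = window
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            patch = frame_rgb[y:y + win_h, x:x + win_w].reshape(-1, 3).astype(np.float32)
            mean_rgb = patch.mean(axis=0)
            # Comparison result: distance between the window's mean RGB and the nearest pool entry.
            dist = np.min(np.linalg.norm(feature_pool.astype(np.float32) - mean_rgb, axis=1))
            if dist < thresh:                  # judged to be the target vehicle area
                return (x, y, win_w, win_h)    # the image of the window coverage area
    return None
```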
In a preferred embodiment, the extracting, according to a vertex detection algorithm, a vertex in the target vehicle image, and taking the vertex as a target feature point specifically includes: traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image; and comparing the gray difference values of the pixel points of the window coverage areas before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain a target characteristic point.
The vertex detection algorithm detects the target vehicle image through a window of fixed size, namely the second preset window, and compares the degree of change of the pixel gray values in the window before and after the window is moved; if the gray value at a point differs greatly from the gray values of the surrounding image, the point is regarded as a vertex.
The detection process is as follows:

The pixel points in the window are smoothed by a linear smoothing filter (window weighting function) w(x, y) (3), and the gray value change obtained after the convolution operation is

E(a, b) = Σ_(x,y) w(x, y) [I(x + a, y + b) − I(x, y)]²  (4)

In formula (4): a is the amount of movement of the window in the x direction; b is the amount of movement of the window in the y direction; (a, b) is the movement of the window; (x, y) are the coordinates of the corresponding pixel point in the window; I(x, y) is the gray value before the window is moved, and I(x + a, y + b) is the gray value after the window is moved.
The process of choosing the appropriate point in E (a, b) as the vertex is as follows:
Performing a Taylor expansion on I(x + a, y + b) and omitting the higher-order infinitesimal terms gives

E(a, b) ≈ [a b] M [a b]^T  (5)

where the matrix M is

M = Σ_(x,y) w(x, y) [[I_x², I_x I_y], [I_x I_y, I_y²]]  (6)

The 2 eigenvalues of the matrix M, λ₁ = I_x² and λ₂ = I_y², reflect the magnitude of the curvature of the function E(a, b).
The basic principle of the vertex detection algorithm is shown in fig. 2. If both eigenvalues are small, the gray value in the window area tends to be constant and the gray change is not obvious, so the point is not suitable as a target feature point; if one eigenvalue is large and the other is small, the point lies in an edge area of the image, i.e. the gray value changes obviously along one direction but not along the other, and the point is likewise not suitable as a target feature point; if both eigenvalues are large, the gray value of the window changes obviously along any direction, and the point is suitable as a target feature point.
Vertices are solved by introducing a response function, i.e.

R = det M − k (tr M)²  (7)

In formula (7), det M = λ₁λ₂ = I_x² I_y² − (I_x I_y)² is the determinant of the matrix; tr M = λ₁ + λ₂ = I_x² + I_y² is the trace of the matrix; k is a correction coefficient, k = 0.04–0.06.
The R value is calculated by formula (7) and a corresponding threshold T is set; when R > T, the 2 eigenvalues λ₁ and λ₂ are large enough, and the point is taken as a candidate feature point. The candidate feature points are then examined with a window of fixed size, for example 3 × 3, and the maximum value within the window is selected as the vertex, i.e. the target feature point.
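A sketch of the vertex detection described above, built around the response function R = det M − k (tr M)² of formula (7). The Gaussian smoothing of the gradient products, the relative threshold and the 3 × 3 non-maximum suppression window are illustrative choices, not values fixed by the patent.

```python
import cv2
import numpy as np

def detect_vertices(vehicle_gray, k=0.04, rel_thresh=0.01):
    """Extract vertices of the target vehicle image as target feature points.

    vehicle_gray: grayscale target vehicle image.
    k:            correction coefficient, 0.04-0.06 as stated in the text.
    rel_thresh:   threshold T expressed as a fraction of the maximum response (an assumption).
    Returns an array of (x, y) feature point coordinates.
    """
    img = vehicle_gray.astype(np.float32)
    I_x = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    I_y = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    # Elements of the matrix M, accumulated over the second preset window (Gaussian weighting here).
    Ixx = cv2.GaussianBlur(I_x * I_x, (5, 5), 1.0)
    Iyy = cv2.GaussianBlur(I_y * I_y, (5, 5), 1.0)
    Ixy = cv2.GaussianBlur(I_x * I_y, (5, 5), 1.0)
    det_M = Ixx * Iyy - Ixy * Ixy
    tr_M = Ixx + Iyy
    R = det_M - k * tr_M * tr_M              # response function, formula (7)
    T = rel_thresh * R.max()
    # Candidate points with R > T; the maximum of each 3 x 3 neighbourhood is kept as a vertex.
    local_max = cv2.dilate(R, np.ones((3, 3), np.uint8))
    ys, xs = np.where((R > T) & (R >= local_max))
    return np.stack([xs, ys], axis=1)
```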
However, when some point in the local neighborhood Ω violates the target pixel instantaneous speed condition, or the motion within the neighborhood is discontinuous, for example when a shadow appears or the light suddenly dims, the error of the obtained solution increases. The feature points satisfying the constraint conditions are therefore screened out in order to solve for a stable target pixel instantaneous velocity vector.
Based on the target pixel instantaneous velocity equation (2), taking the partial derivatives with respect to x and y for every pixel in the neighborhood and writing the resulting equations in matrix form gives

A [u, v]^T = −b  (8)

where each row of A contains the spatial gradients (I_x, I_y) of one pixel in the neighborhood and the corresponding entry of b is its temporal gradient I_t. The matrix is defined as

H = A^T A  (9)

The condition number of the matrix is

cond(H) = λ_max / λ_min  (10)

In formula (10), λ_max and λ_min are the maximum eigenvalue and the minimum eigenvalue of the matrix H, respectively.

The rank and condition number of the matrix corresponding to each point are calculated, an allowable value σ is set according to the condition number, the points larger than the allowable value are regarded as reliable feature points, the condition number is normalized, and its reciprocal is taken as the weight of the feature point, i.e.

w = 1 / c_n  (11)

where c_n is the normalized condition number.
And finally, solving u and v values of the characteristic points according to a weighted least square method.
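A sketch of this weighted least-squares step for one set of feature points. The Gaussian pixel weighting and the condition-number limit used to discard unreliable points are assumptions; the patent only states that the condition number is used to screen reliable feature points and to derive their weights.

```python
import numpy as np

def weighted_velocity(I_x, I_y, I_t, points, radius=2, cond_limit=100.0):
    """Solve the horizontal and vertical velocity components (u, v) of each feature point by
    weighted least squares over its neighbourhood, in the spirit of formulas (8)-(11)."""
    h, w = I_x.shape
    results = []
    for x, y in points:
        x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
        y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
        A = np.stack([I_x[y0:y1, x0:x1].ravel(), I_y[y0:y1, x0:x1].ravel()], axis=1)
        b = -I_t[y0:y1, x0:x1].ravel()
        # Per-pixel weights: pixels near the feature point count more (Gaussian-like weighting).
        gy, gx = np.mgrid[y0:y1, x0:x1]
        wgt = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * radius ** 2)).ravel()
        H = A.T @ (wgt[:, None] * A)                     # 2 x 2 matrix, cf. formula (9)
        eig = np.linalg.eigvalsh(H)
        cond = eig[-1] / max(eig[0], 1e-12)              # condition number, cf. formula (10)
        if cond > cond_limit:                            # point considered unreliable, skip it
            continue
        u, v = np.linalg.solve(H, A.T @ (wgt * b))       # weighted least-squares solution
        results.append((x, y, u, v))
    return results
```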
In a preferred embodiment, the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm specifically includes: calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method; and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
As shown in fig. 3, as an example, after the camera inputs the 1st frame of video image and the gray processing is performed, the tracking area is determined according to the vehicle detection and recognition algorithm; the feature points in the image of the area to be tracked are then detected according to the vertex detection algorithm, and the feature points are drawn and stored. When the 2nd frame gray image is input, the u and v values are solved by the weighted least square method according to the target pixel instantaneous speed estimation algorithm, and the positions where the feature points will appear in the next frame of video image are calculated. Then the vertex detection algorithm calculates new feature points based on the new video image, replaces the original feature point data, and the positions of the feature points in the next frame of video image are again calculated according to the target pixel instantaneous speed estimation algorithm, so that the feature points are tracked to those positions. The iteration is repeated, and the feature points are measured and tracked in real time to accelerate the tracking speed of the camera.
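Pulling these steps together, a hedged sketch of the frame-by-frame loop described above; to_gray, detect_target_vehicle, detect_vertices, brightness_gradients and weighted_velocity are the illustrative helpers sketched earlier, not functions defined by the patent, and re-detecting the vertices on every new frame (as the text describes) is simplified here to propagating the predicted points.

```python
import cv2
import numpy as np

def track(video_path, feature_pool):
    """Detect the target vehicle in frame 1, extract its vertices as feature points, then for each
    new frame estimate (u, v) and predict where the feature points appear in the next frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev_gray = to_gray(frame)
    region = detect_target_vehicle(frame, feature_pool)          # S1: vehicle detection recognition
    if region is None:
        return
    x0, y0, w, h = region
    points = detect_vertices(prev_gray[y0:y0 + h, x0:x0 + w])    # S2: vertices as feature points
    points = points + np.array([x0, y0])                         # back to full-frame coordinates
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = to_gray(frame)
        I_x, I_y, I_t = brightness_gradients(prev_gray, gray)    # S3: instantaneous speed estimation
        tracked = weighted_velocity(I_x, I_y, I_t, points)
        # Predicted motion position in the next frame: (x + u, y + v) with delta t = 1 frame.
        points = np.array([[int(round(px + u)), int(round(py + v))] for px, py, u, v in tracked])
        prev_gray = gray
        if len(points) == 0:
            break
    cap.release()
```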
Please refer to fig. 4.
As shown in fig. 4, a second embodiment provides a vehicle identification and tracking device, including: the target vehicle detection module 21 is configured to detect a target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image; the target feature point extraction module 22 is configured to extract a vertex in the target vehicle image according to a vertex detection algorithm, and use the vertex as a target feature point; and the motion position calculation module 23 is configured to calculate a motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm.
Illustratively, through the target vehicle detection module 21, when the video stream is captured by the camera, the target vehicle is detected from the acquired frame of video image according to the vehicle detection recognition algorithm, and the image of the target vehicle area is segmented out to obtain the target vehicle image.
Through the target feature point extraction module 22, after the target vehicle image is obtained, the vertex in the target vehicle image is extracted according to the vertex detection algorithm, and the vertex is stored as the target feature point.
Through the motion position calculation module 23, after the target feature point is obtained, the motion position of the target feature point in the next frame of video image is calculated according to the target pixel instantaneous speed estimation algorithm, so as to track the target vehicle in the next frame of video image.
The target pixel instantaneous speed is the instantaneous speed of the pixel motion of a space moving object on an observation imaging plane, and the target pixel instantaneous speed estimation algorithm is a method for finding the corresponding relation between the previous frame and the current frame by using the change of the pixels in an image sequence on a time domain and the correlation between adjacent frames so as to calculate the motion information of the object between the adjacent frames. The target pixel instantaneous speed algorithm used in the present embodiment belongs to a sparse target pixel instantaneous speed estimation algorithm. The sparse target pixel instantaneous velocity estimation algorithm considers that velocity vectors of all pixel points of a plane image form a target pixel instantaneous velocity field, and when an object moves continuously, position coordinates of the pixel points on the corresponding image change, and the target pixel instantaneous velocity field also changes correspondingly.
Assume that the brightness of a point with coordinates (x, y) at time t is I(x, y, t), and that after a time interval Δt the brightness becomes I(x + Δx, y + Δy, t + Δt). When Δt tends to zero, the brightness of the point is considered unchanged, i.e.

when Δt → 0, I(x, y, t) = I(x + Δx, y + Δy, t + Δt)  (12)

Performing a Taylor expansion on formula (12) and taking the limit, the basic formula of the target pixel instantaneous speed calculation is obtained as

I_x u + I_y v + I_t = 0  (13)

In formula (13), I_x = ∂I/∂x, I_y = ∂I/∂y and I_t = ∂I/∂t are the partial derivatives of the brightness with respect to x, y and t, and u and v are the velocity components of the pixel point in the x and y directions of the target pixel instantaneous velocity field.

The motion speed of the point is obtained from u and v, from which the motion direction of the point and its position at the next moment are estimated.
In the embodiment, a target vehicle is detected from an acquired frame of video image through a target vehicle detection module 21 according to a vehicle detection recognition algorithm to obtain a target vehicle image, a vertex in the target vehicle image is extracted through a target feature point extraction module 22 according to a vertex detection algorithm and is used as a target feature point, and finally, a motion position of the target feature point in the next frame of video image is calculated through a motion position calculation module 23 according to a target pixel instantaneous speed estimation algorithm to complete the identification and tracking of the target vehicle. According to the embodiment, the target vehicle is detected from the video image to obtain the target vehicle image, so that the interference of other vehicles in the video image can be eliminated, the target feature point is directly extracted from the target vehicle image, the motion position of the target feature point in the next frame of video image is calculated only on the basis of the target feature point, the calculation amount can be greatly reduced, the recognition and tracking efficiency is improved, and the target vehicle can be stably and accurately recognized and tracked in a multi-vehicle environment.
In a preferred embodiment, the target vehicle detecting module 21 is further configured to, before the target vehicle is detected from the acquired frame of video image according to the vehicle detection and identification algorithm to obtain the target vehicle image, acquire an original video image, and perform gray processing on the original video image to obtain the video image.
In this embodiment, the target vehicle detection module 21 detects the target vehicle from the grayscale-processed original video image, i.e. the video image, which helps to ensure that the target vehicle is detected accurately.
In a preferred embodiment, the detecting a target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image specifically includes: traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result; and judging whether the window coverage area is the target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
Considering that the target vehicle is equally likely to appear in any region of the video image, the target vehicle detection module 21 traverses the video image with a window of fixed size, i.e. the first preset window, during target vehicle detection, which helps to ensure that the target vehicle is detected accurately.
In a preferred embodiment, the extracting, according to a vertex detection algorithm, a vertex in the target vehicle image, and taking the vertex as a target feature point specifically includes: traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image; and comparing the gray difference values of the pixel points of the window coverage areas before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain a target characteristic point.
The vertex detection algorithm detects the target vehicle image through a window of fixed size, namely the second preset window, and compares the degree of change of the pixel gray values in the window before and after the window is moved; if the gray value at a point differs greatly from the gray values of the surrounding image, the point is regarded as a vertex.
The detection process is as follows:

The pixel points in the window are smoothed by a linear smoothing filter (window weighting function) w(x, y) (14), and the gray value change obtained after the convolution operation is

E(a, b) = Σ_(x,y) w(x, y) [I(x + a, y + b) − I(x, y)]²  (15)

In formula (15): a is the amount of movement of the window in the x direction; b is the amount of movement of the window in the y direction; (a, b) is the movement of the window; (x, y) are the coordinates of the corresponding pixel point in the window; I(x, y) is the gray value before the window is moved, and I(x + a, y + b) is the gray value after the window is moved.
The process of choosing the appropriate point in E (a, b) as the vertex is as follows:
Performing a Taylor expansion on I(x + a, y + b) and omitting the higher-order infinitesimal terms gives

E(a, b) ≈ [a b] M [a b]^T  (16)

where the matrix M is

M = Σ_(x,y) w(x, y) [[I_x², I_x I_y], [I_x I_y, I_y²]]  (17)

The 2 eigenvalues of the matrix M, λ₁ = I_x² and λ₂ = I_y², reflect the magnitude of the curvature of the function E(a, b).
If both eigenvalues are small, the gray value in the window area tends to be constant and the gray change is not obvious, so the point is not suitable as a target feature point; if one eigenvalue is large and the other is small, the point lies in an edge area of the image, i.e. the gray value changes obviously along one direction but not along the other, and the point is likewise not suitable as a target feature point; if both eigenvalues are large, the gray value of the window changes obviously along any direction, and the point is suitable as a target feature point.
Vertices are solved by introducing a response function, i.e.

R = det M − k (tr M)²  (18)

In formula (18), det M = λ₁λ₂ = I_x² I_y² − (I_x I_y)² is the determinant of the matrix; tr M = λ₁ + λ₂ = I_x² + I_y² is the trace of the matrix; k is a correction coefficient, k = 0.04–0.06.
The R value is calculated by formula (18) and a corresponding threshold T is set; when R > T, the 2 eigenvalues λ₁ and λ₂ are large enough, and the point is taken as a candidate feature point. The candidate feature points are then examined with a window of fixed size, for example 3 × 3, and the maximum value within the window is selected as the vertex, i.e. the target feature point.
However, when some point in the local neighborhood Ω violates the target pixel instantaneous speed condition, or the motion within the neighborhood is discontinuous, for example when a shadow appears or the light suddenly dims, the error of the obtained solution increases. The feature points satisfying the constraint conditions are therefore screened out in order to solve for a stable target pixel instantaneous velocity vector.
Based on the target pixel instantaneous velocity equation (13), taking the partial derivatives with respect to x and y for every pixel in the neighborhood and writing the resulting equations in matrix form gives

A [u, v]^T = −b  (19)

where each row of A contains the spatial gradients (I_x, I_y) of one pixel in the neighborhood and the corresponding entry of b is its temporal gradient I_t. The matrix is defined as

H = A^T A  (20)

The condition number of the matrix is

cond(H) = λ_max / λ_min  (21)

In formula (21), λ_max and λ_min are the maximum eigenvalue and the minimum eigenvalue of the matrix H, respectively.

The rank and condition number of the matrix corresponding to each point are calculated, an allowable value σ is set according to the condition number, the points larger than the allowable value are regarded as reliable feature points, the condition number is normalized, and its reciprocal is taken as the weight of the feature point, i.e.

w = 1 / c_n  (22)

where c_n is the normalized condition number.
And finally, solving u and v values of the characteristic points according to a weighted least square method.
In a preferred embodiment, the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm specifically includes: calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method; and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
In summary, the embodiment of the present invention has the following advantages:
the method comprises the steps of detecting a target vehicle from an obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image, extracting a vertex in the target vehicle image according to a vertex detection algorithm, taking the vertex as a target feature point, and finally calculating the motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm to finish the identification and tracking of the target vehicle. According to the embodiment of the invention, the target vehicle is detected from the video image to obtain the target vehicle image, so that the interference of other vehicles in the video image can be eliminated, the target characteristic point is directly extracted from the target vehicle image, and the motion position of the target characteristic point in the next frame of video image is calculated only on the basis of the target characteristic point, so that the calculated amount can be greatly reduced, the recognition and tracking efficiency is improved, and the target vehicle can be stably and accurately recognized and tracked under the multi-vehicle environment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (10)

1. A vehicle identification and tracking method, comprising:
detecting a target vehicle from the obtained frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image;
extracting a vertex in the target vehicle image according to a vertex detection algorithm, and taking the vertex as a target feature point;
and calculating the motion position of the target feature point in the next frame of video image according to a target pixel instantaneous speed estimation algorithm.
2. The vehicle identification and tracking method according to claim 1, further comprising, before the step of detecting the target vehicle from the acquired one frame of video image according to the vehicle detection and identification algorithm to obtain the target vehicle image:
and acquiring an original video image, and performing gray processing on the original video image to obtain the video image.
3. The vehicle identification and tracking method according to claim 1, wherein the target vehicle is detected from the acquired one frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image, specifically:
traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result;
and judging whether the window coverage area is a target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
4. The vehicle identification and tracking method according to claim 1, wherein the extracting the vertex in the target vehicle image according to a vertex detection algorithm and using the vertex as a target feature point specifically comprises:
traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image;
and comparing the gray difference values of the pixel points of the window coverage area before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain the target characteristic point.
5. The vehicle identification and tracking method according to claim 1, wherein the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm comprises:
calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method;
and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
6. A vehicle identification and tracking device, comprising:
the target vehicle detection module is used for detecting a target vehicle from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image;
the target feature point extraction module is used for extracting a vertex in the target vehicle image according to a vertex detection algorithm and taking the vertex as a target feature point;
and the motion position calculation module is used for calculating the motion position of the target characteristic point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm.
7. The vehicle identification and tracking device of claim 6, wherein the target vehicle detection module is further configured to obtain an original video image and perform gray processing on the original video image to obtain the video image before the target vehicle is detected from the obtained one frame of video image according to the vehicle detection and identification algorithm to obtain the target vehicle image.
8. The vehicle identification and tracking device of claim 6, wherein the target vehicle is detected from the acquired frame of video image according to a vehicle detection and identification algorithm to obtain a target vehicle image, specifically:
traversing the video image by using a first preset window, and comparing RGB channel values of pixel points in a window coverage area in the video image with RGB channel values in a feature pool to obtain a comparison result;
and judging whether the window coverage area is a target vehicle area or not according to the comparison result, and if so, taking the image of the window coverage area as the target vehicle image.
9. The vehicle identification and tracking device according to claim 6, wherein the extracting the vertex in the target vehicle image according to a vertex detection algorithm and using the vertex as a target feature point are specifically:
traversing the target vehicle image by using a second preset window, and detecting pixel points of a window coverage area in the target vehicle image;
and comparing the gray difference values of the pixel points of the window coverage area before and after detection, and taking the corresponding pixel point as a vertex when the gray difference value is larger than a preset threshold value to obtain the target characteristic point.
10. The vehicle identification and tracking device according to claim 6, wherein the calculating the motion position of the target feature point in the next frame of video image according to the target pixel instantaneous speed estimation algorithm comprises:
calculating a horizontal velocity component and a vertical velocity component of the target feature point according to a weighted least square method;
and calculating the motion position of the target characteristic point in the next frame of video image according to the horizontal velocity component and the vertical velocity component.
CN202010562609.3A 2020-06-18 2020-06-18 Vehicle identification and tracking method and device Pending CN111914627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010562609.3A CN111914627A (en) 2020-06-18 2020-06-18 Vehicle identification and tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010562609.3A CN111914627A (en) 2020-06-18 2020-06-18 Vehicle identification and tracking method and device

Publications (1)

Publication Number Publication Date
CN111914627A true CN111914627A (en) 2020-11-10

Family

ID=73237951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010562609.3A Pending CN111914627A (en) 2020-06-18 2020-06-18 Vehicle identification and tracking method and device

Country Status (1)

Country Link
CN (1) CN111914627A (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318561A (en) * 2014-10-22 2015-01-28 上海理工大学 Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
CN106295459A (en) * 2015-05-11 2017-01-04 青岛若贝电子有限公司 Based on machine vision and the vehicle detection of cascade classifier and method for early warning
CN106875424A (en) * 2017-01-16 2017-06-20 西北工业大学 A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN107038411A (en) * 2017-02-26 2017-08-11 北京市交通运行监测调度中心 A kind of Roadside Parking behavior precise recognition method based on vehicle movement track in video
CN107766789A (en) * 2017-08-21 2018-03-06 浙江零跑科技有限公司 A kind of vehicle detection localization method based on vehicle-mounted monocular camera
CN107798688A (en) * 2017-10-31 2018-03-13 广州杰赛科技股份有限公司 Motion estimate method, method for early warning and automobile anti-rear end collision prior-warning device
CN108009494A (en) * 2017-11-30 2018-05-08 中山大学 A kind of intersection wireless vehicle tracking based on unmanned plane
CN108280847A (en) * 2018-01-18 2018-07-13 维森软件技术(上海)有限公司 A kind of vehicle movement track method of estimation
CN108364008A (en) * 2018-01-31 2018-08-03 成都中鼎科技有限公司 A kind of vehicle tracking system
CN109035287A (en) * 2018-07-02 2018-12-18 广州杰赛科技股份有限公司 Foreground image extraction method and device, moving vehicle recognition methods and device
CN109102523A (en) * 2018-07-13 2018-12-28 南京理工大学 A kind of moving object detection and tracking
CN109190523A (en) * 2018-08-17 2019-01-11 武汉大学 A kind of automobile detecting following method for early warning of view-based access control model
CN109684996A (en) * 2018-12-22 2019-04-26 北京工业大学 Real-time vehicle based on video passes in and out recognition methods
CN109919027A (en) * 2019-01-30 2019-06-21 合肥特尔卡机器人科技股份有限公司 A kind of Feature Extraction System of road vehicles
CN110348363A (en) * 2019-07-05 2019-10-18 西安邮电大学 The vehicle tracking algorithm for eliminating similar vehicle interference is merged based on multiframe angle information


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114613147A (en) * 2020-11-25 2022-06-10 浙江宇视科技有限公司 Vehicle violation identification method and device, medium and electronic equipment
CN114613147B (en) * 2020-11-25 2023-08-04 浙江宇视科技有限公司 Vehicle violation identification method and device, medium and electronic equipment
CN112860832A (en) * 2021-01-29 2021-05-28 广东电网有限责任公司 Cable display method, device, equipment and storage medium for three-dimensional map
CN112862888A (en) * 2021-01-29 2021-05-28 广东电网有限责任公司 Cable positioning method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108960211B (en) Multi-target human body posture detection method and system
CN111709416B (en) License plate positioning method, device, system and storage medium
CN107452015B (en) Target tracking system with re-detection mechanism
CN111914627A (en) Vehicle identification and tracking method and device
US20150279021A1 (en) Video object tracking in traffic monitoring
CN102598057A (en) Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
CN111738211B (en) PTZ camera moving object detection and recognition method based on dynamic background compensation and deep learning
CN115240130A (en) Pedestrian multi-target tracking method and device and computer readable storage medium
CN110400294B (en) Infrared target detection system and detection method
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
US20220366570A1 (en) Object tracking device and object tracking method
CN111639570B (en) Online multi-target tracking method based on motion model and single-target clue
CN110728700B (en) Moving target tracking method and device, computer equipment and storage medium
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN111062415B (en) Target object image extraction method and system based on contrast difference and storage medium
CN110348363B (en) Vehicle tracking method for eliminating similar vehicle interference based on multi-frame angle information fusion
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN108959355B (en) Ship classification method and device and electronic equipment
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN115439771A (en) Improved DSST infrared laser spot tracking method
Maki et al. Automatic ship identification in ISAR imagery: An on-line system using CMSM
Loza et al. Video object tracking with differential Structural SIMilarity index
CN111242980B (en) Point target-oriented infrared focal plane blind pixel dynamic detection method
Dong Faint moving small target detection based on optical flow method
Zhou et al. Speeded-up robust features based moving object detection on shaky video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination