CN109543610A - Vehicle detecting and tracking method, device, equipment and storage medium - Google Patents

Vehicle detecting and tracking method, device, equipment and storage medium

Info

Publication number
CN109543610A
CN109543610A CN201811399560.3A
Authority
CN
China
Prior art keywords
vehicle
image
region
detection
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811399560.3A
Other languages
Chinese (zh)
Other versions
CN109543610B (en)
Inventor
杨岳航
朱明
郝志成
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201811399560.3A priority Critical patent/CN109543610B/en
Publication of CN109543610A publication Critical patent/CN109543610A/en
Application granted granted Critical
Publication of CN109543610B publication Critical patent/CN109543610B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a vehicle detecting and tracking method, device, equipment and storage medium. The method includes: extracting the feature descriptor of an image of the vehicle to be tracked; detecting the target vehicle tracking object in the image of the region to be detected by using a trained vehicle detection model that contains the component features of the normally visible region of the vehicle and of the easily occluded region of the vehicle; extracting the feature descriptor of the image of the target vehicle tracking object; performing locality-sensitive hash matching on the feature descriptors and purifying the matches; determining the number of purified feature descriptors and judging whether it exceeds a preset threshold; and if so, tracking the target vehicle tracking object when the hue value of its image lies within a preset hue range. That is, the present invention performs image detection with a model containing both a normally visible region and an easily occluded region, and then detects the vehicle to be tracked through locality-sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding missed vehicle detections caused by occlusion, and improving tracking accuracy.

Description

Vehicle detection and tracking method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for vehicle detection and tracking.
Background
Vehicle detection and tracking is a key component of intelligent traffic systems, with broad application prospects in traffic dispersion, driver-assistance systems, road monitoring and similar fields, and it can provide important clues and evidence for public-security cases and traffic-accident investigation. However, imaging conditions in real scenes are complex, so vehicle detection and tracking face many difficulties, among which the occlusion problem is particularly prominent. The presence of multiple targets in a complex road environment is the main cause of mutual occlusion between vehicles; occlusion loses target information, which easily leads to missed detections and lost tracks.
Disclosure of Invention
In view of the above, the present invention provides a vehicle detection and tracking method, device, equipment and storage medium that can solve the lost-tracking problem caused by the loss of a vehicle's visual information. The specific scheme is as follows:
in a first aspect, the invention discloses a vehicle detection and tracking method, which comprises the following steps:
determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
performing locality-sensitive hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected.
Optionally, the determining an image of the vehicle to be tracked includes:
and selecting the image of the vehicle to be tracked from a database of vehicles to be tracked.
Optionally, the acquiring the images of the to-be-detected region arranged in time sequence includes:
and acquiring a video to be processed, and sampling according to a time sequence to acquire the image of the area to be detected.
Optionally, the method further includes:
acquiring a preset vehicle detection model;
and training the preset vehicle detection model by using the image training sample, and learning the scale and position relation among the vehicle components in the preset vehicle detection model to obtain the trained vehicle detection model containing the component characteristics of the normally visible region of the vehicle and the component characteristics of the region easily blocked by the vehicle.
Optionally, the obtaining of the preset vehicle detection model includes:
dividing a vehicle object into a vehicle normally visible area and a vehicle easily-sheltered area;
and constructing a preset vehicle detection model comprising a vehicle normally visible region component and a vehicle easily-sheltered region component by utilizing a mixed image template comprising different types of features.
Optionally, the training of the preset vehicle detection model by using the image training sample, and the learning of the scale and position relationship between vehicle components in the preset vehicle detection model include:
generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix characterizes a feature response vector of one of the image training samples;
selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
calculating the region scores of all the large feature response regions according to a preset score calculation formula; wherein, the preset score calculation formula is as follows:
where B_k denotes the k-th large feature response region, rows(·) denotes the positive samples contained in the large feature response region, cols(·) denotes the features contained in the large feature response region, β_{k,j} denotes the weight corresponding to primitive j of the mixed image template within the k-th large feature response region, R_{i,j} denotes the feature response value in row i and column j, z_{k,j} denotes the independent standard constant corresponding to β_{k,j}, and Score(B_k) denotes the score of the large feature response region;
determining a target response region with the score larger than a region score threshold value in all the large feature response regions according to the region score;
and learning the scale and position relation among the vehicle parts by using all the target response areas.
Optionally, the detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model includes:
s11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
s12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
s13: determining that the optimal vehicle candidate is a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image from which the optimal vehicle candidate is removed as the current frame to-be-detected region image, and entering S11.
Optionally, the determining the detection scores of all the vehicle candidates includes:
filtering by changing the position, the direction and the scale of a mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; wherein the component score calculation formula is:
where (x_j, y_j, o_j, s_j) denotes translating the template to position (x_j, y_j), changing its orientation to o_j and transforming its scale to s_j; τ_{x,y,o,s}(x_j, y_j, o_j, s_j) denotes the (x_j, y_j, o_j, s_j) of the corresponding feature in the mixed image template of the target vehicle component; MAX_RESPONSE(·) denotes the vector of local-area maximum feature response values; β_{k,j} denotes the weight corresponding to primitive j of the mixed image template within the k-th large feature response region; z_{k,j} denotes the independent standard constant corresponding to β_{k,j}; and SUM_LPAR_k(·) denotes the score of the target vehicle component;
generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
changing the position, the direction and the scale on the image of the current frame to be detected region through the region detection model, calculating according to a region detection formula to obtain a region detection score, and determining the detection score of the current vehicle candidate; wherein the region detection formula is:
where r_g denotes the score vector of the g-th target vehicle component and SUM_DETECT(·) denotes the region detection score computed by the region detection model.
Optionally, after the trained vehicle detection model is used to detect all vehicle tracking objects in the current frame image of the region to be detected, the method further includes:
and carrying out gray processing on the image of the current frame to be detected region, and carrying out rapid median filtering on the processed image.
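As a sketch of the median filtering step, here is a minimal 3×3 median filter in pure Python; the function name is illustrative, and a production "rapid" median filter would instead maintain a running histogram rather than sorting each window:

```python
def median3x3(img):
    """Apply a 3x3 median filter to a grayscale image (list of lists).

    Border pixels are left unchanged for simplicity; the patent's fast
    median filtering would use a more efficient sliding-window scheme.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 window values
    return out
```

Median filtering suppresses isolated impulse noise while preserving edges, which matters for the edge-based template features used later.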
Optionally, the performing the graying processing on the image of the current frame to be detected includes:
determining the format of the current frame to-be-detected region image;
when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
where R, G and B denote the pixel values of the R, G and B channels of the current-frame region image respectively, and GrayValue denotes the resulting gray value.
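The graying formula above can be implemented directly; the integer weights 306/1024, 601/1024 and 117/1024 approximate the usual 0.299/0.587/0.114 luminance coefficients while avoiding floating-point arithmetic:

```python
def gray_value(r, g, b):
    """Integer grayscale per the patent's formula:
    GrayValue = (306*R + 601*G + 117*B) >> 10
    The weights sum to 1024, so the >>10 (divide by 1024) keeps the
    result in the 0-255 range for 8-bit channel inputs."""
    return (306 * r + 601 * g + 117 * b) >> 10
```

For YUV input no computation is needed, since the Y component is taken as the gray value directly.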
Optionally, the purifying the matched feature descriptors includes:
and purifying the matched feature descriptors by using consistency constraint of adjacent feature points.
Optionally, the method further includes:
determining the tone value of the normally visible area of the vehicle and the tone value of the easily sheltered area of the vehicle in the image of the vehicle to be tracked;
determining a preset hue value range corresponding to each area according to a range determination formula by using the hue value of the normally visible area of the vehicle and the hue value of the easily-sheltered area of the vehicle; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
where T1 denotes the hue value of the vehicle's normally visible region, T2 the hue value of the vehicle's easily-sheltered region, D1 the preset hue range of the normally visible region, and D2 the preset hue range of the easily-sheltered region.
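The range determination and the subsequent in-range check can be sketched as follows. This minimal version does not handle hue wrap-around at 0°/360°, which the patent does not address, and the function names are illustrative:

```python
def hue_range(t):
    """Preset hue range [T-10, T+10] for a reference hue T, in degrees."""
    return (t - 10.0, t + 10.0)

def in_preset_range(hue, t):
    """True if an observed hue falls within the preset range around T."""
    lo, hi = hue_range(t)
    return lo <= hue <= hi
```

The same check is applied separately with T1 for the normally visible region and T2 for the easily-sheltered region.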
In a second aspect, the present invention discloses a vehicle detecting and tracking device, comprising:
the system comprises a first extraction module, a second extraction module and a third extraction module, wherein the first extraction module is used for determining a vehicle image to be tracked and extracting a first feature descriptor of the vehicle image to be tracked;
the first detection module is used for acquiring images of the area to be detected which are arranged according to a time sequence and detecting all vehicle tracking objects in the image of the current frame area to be detected by using the trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
the second extraction module is used for extracting a second feature descriptor of an image area corresponding to the target vehicle tracking object in all the vehicle tracking objects;
the feature matching module is used for performing locality-sensitive hash matching on the first feature descriptor and the second feature descriptor and purifying the matched feature descriptors;
the number judgment module is used for determining the number of the purified feature descriptors and judging whether the number is greater than a preset threshold value;
the vehicle tracking module is used for tracking by using the position, the direction and the scale of the target vehicle tracking object when the regional tone value in the image region corresponding to the target vehicle tracking object is within the preset tone value range if the number is larger than the preset threshold value;
and the second detection module is used for detecting all vehicle tracking objects in the next frame of image of the area to be detected if the number is less than the preset threshold value.
In a third aspect, the present invention discloses a vehicle detection and tracking device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the vehicle detection and tracking method disclosed in the foregoing when executing the computer program.
In a fourth aspect, the present invention discloses a computer readable storage medium for storing a computer program, which when executed by a processor, performs the steps of the vehicle detection and tracking method disclosed above.
Therefore, the vehicle image to be tracked is determined, and the first feature descriptor of the vehicle image to be tracked is extracted; acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle; extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects; carrying out local sensitive Hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors; determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value; if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object; and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected. That is, the invention detects the image of the area to be detected by using the model containing the common visible area and the easily-shielded area, and further detects the vehicle to be tracked in the area to be detected by local sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding the problem of vehicle detection omission caused by shielding and improving the tracking accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a vehicle detection and tracking method provided by the present invention;
fig. 2a to fig. 2c are schematic diagrams illustrating consistency constraints of included angles of neighboring feature points in an embodiment of a vehicle detection and tracking method provided by the present invention;
FIG. 3 is a flowchart of a vehicle detection model training process in an embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 4 is a flowchart of a model building process in one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 5 is a schematic diagram of vehicle component features in one embodiment of a vehicle detection and tracking method provided by the present invention;
FIG. 6 is a flow chart of a pre-defined vehicle detection model training process in an embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 7 is a schematic diagram of a large signature response region in one embodiment of a vehicle detection and tracking method provided by the present invention;
FIG. 8 is a diagram of a topology of a component model in an embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 9 is a flowchart illustrating a method for detecting an image of an area to be detected according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a graying process for an image according to an embodiment of the vehicle detecting and tracking method provided by the present invention;
FIG. 11 is a flowchart of determining a vehicle candidate score according to one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 12 is a flowchart of a preset hue value range determination process in one embodiment of the vehicle detection and tracking method provided by the present invention;
FIG. 13 is a block diagram of a vehicle detection and tracking device provided by the present invention;
FIG. 14 is a flow chart of a vehicle detection and tracking process in the vehicle detection and tracking device provided by the present invention;
fig. 15 is a schematic diagram of a hardware structure of the vehicle detection and tracking device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Due to the complex imaging conditions in real scenes, vehicle detection and tracking face a lot of difficulties, wherein the problem of occlusion is particularly prominent. The existence of multiple targets in a complex road environment is a main reason for mutual shielding among vehicles, and target information is lost due to shielding, so that target missing and tracking loss are easily caused.
The embodiment of the invention discloses a vehicle detection and tracking method, which is shown in figure 1 and comprises the following steps:
step S101: determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
in this embodiment, an image of a vehicle to be tracked is first determined. Specifically, the embodiment selects the vehicle image to be tracked from the vehicle database to be tracked, and extracts the first feature descriptor of the vehicle image to be tracked. Preferably, the method for extracting the ORB feature descriptors of the image has a fast extraction speed, and further improves the detection rate.
Step S102: acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
specifically, in this embodiment, a video to be processed is first acquired, and sampling is performed according to a time sequence to acquire the image of the region to be detected. For example, this embodiment samples every 5 seconds from the video to be processed, and acquires sequence images arranged in time sequence.
Further, this embodiment detects all vehicle tracking objects in the current-frame region image by using the trained vehicle detection model. The trained model contains the component features of both the normally visible region and the easily-sheltered region of the vehicle, so using it to detect vehicle tracking objects alleviates the missed detections caused by mutual occlusion between vehicles.
Step S103: extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
it is understood that the target vehicle tracking object is determined among all the vehicle tracking objects, and the second feature descriptor of the image area corresponding to the target vehicle tracking object is extracted.
Step S104: performing locality-sensitive hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
in this embodiment, locality-sensitive hash matching is performed on the first and second feature descriptors, and the matched feature descriptors are then purified using the consistency constraint of neighboring feature points. As shown in fig. 2a to 2c, fig. 2a shows the included angles of neighboring feature points in the template image, fig. 2b shows the included angles of neighboring feature points in the image to be matched, and fig. 2c shows the included angles of the matched feature-point pairs; mismatched feature descriptors are filtered out by this purification.
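For binary descriptors such as ORB, locality-sensitive hashing can be illustrated with bit-sampling LSH, in which each hash table keys descriptors by a random subset of bit positions. This is one standard LSH family for Hamming distance, not necessarily the exact variant used in the patent, and the class and parameter names are illustrative:

```python
import random

class BitSamplingLSH:
    """Bit-sampling LSH over fixed-length binary descriptors (tuples of 0/1).

    Descriptors whose sampled bits agree land in the same bucket, so
    descriptors at small Hamming distance collide with high probability
    in at least one of several independent tables.
    """

    def __init__(self, n_bits, n_tables=4, bits_per_key=8, seed=0):
        rng = random.Random(seed)
        # each table samples its own random subset of bit positions
        self.keys = [tuple(rng.sample(range(n_bits), bits_per_key))
                     for _ in range(n_tables)]
        self.tables = [{} for _ in range(n_tables)]

    def _hashes(self, desc):
        for key, table in zip(self.keys, self.tables):
            yield tuple(desc[i] for i in key), table

    def insert(self, desc, label):
        for h, table in self._hashes(desc):
            table.setdefault(h, []).append((desc, label))

    def query(self, desc):
        """Return candidate labels sharing at least one bucket with desc."""
        found = set()
        for h, table in self._hashes(desc):
            for _, label in table.get(h, []):
                found.add(label)
        return found
```

Candidates returned by the query are then verified exactly (e.g. by Hamming distance) and finally purified with the neighbor-angle consistency constraint described above.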
Step S105: determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
step S106: if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
it should be noted that, after the matching and the purification, the number of the feature descriptors successfully matched is determined, and whether the number of the feature descriptors is greater than a preset threshold value is judged. And if the number of the successfully matched feature descriptors is larger than a preset threshold value, indicating that the currently matched target vehicle tracking object and the vehicle to be tracked are the same vehicle.
And further judging whether the hue value of the area in the image area corresponding to the target vehicle tracking object is within a preset hue value range, if so, indicating that the target vehicle tracking object and the vehicle to be tracked are the same vehicle, and tracking by using the position, the direction and the scale of the target vehicle tracking object. If not, the target vehicle tracking object is not the vehicle to be tracked, and the remaining tracking objects except the target vehicle tracking object in all the vehicle tracking objects in the current frame area image to be detected are further matched.
Step S107: and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected.
It can be understood that if the number of feature descriptors successfully matched is less than the preset threshold, all vehicle tracking objects in the area to be detected in the next frame are continuously detected, and the positions of the vehicles to be tracked are further determined in a matching manner, so that vehicle tracking is realized.
Therefore, the vehicle image to be tracked is determined, and the first feature descriptor of the vehicle image to be tracked is extracted; acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle; extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects; carrying out local sensitive Hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors; determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value; if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object; and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected. That is, the invention detects the image of the area to be detected by using the model containing the common visible area and the easily-shielded area, and further detects the vehicle to be tracked in the area to be detected by local sensitive hash matching, thereby realizing vehicle tracking, effectively avoiding the problem of vehicle detection omission caused by shielding and improving the tracking accuracy.
In an embodiment of the vehicle detection and tracking method provided by the present invention, a training process for a vehicle detection model is further described, as shown in fig. 3, the training process specifically includes:
step S201: acquiring a preset vehicle detection model;
in this embodiment, a preset vehicle detection model is first obtained.
Further, the embodiment further explains a process of constructing the preset vehicle prediction model, as shown in fig. 4, the process includes:
step S2011: dividing a vehicle object into a vehicle normally visible area and a vehicle easily-sheltered area;
it will be appreciated that the license plate and headlight areas of a vehicle typically have rich visual information, but in a replicated traffic environment, the areas are typically occluded, dividing the area into easily-occluded areas. In contrast to the license plate area, the roof area and the front windshield are generally visible, and even in the case of traffic congestion, this area can be seen despite heavy obstruction between vehicles, and thus the area is divided into generally visible areas.
Step S2012: and constructing a preset vehicle detection model comprising a vehicle normally visible region component and a vehicle easily-sheltered region component by utilizing a mixed image template comprising different types of features.
In this embodiment, after dividing the vehicle object into two regions, modeling is performed by using a mixed image template containing different types of features, where the different types of features may include: edge, texture, smoothness. A representation of the features of the modeled vehicle components is shown in fig. 5.
Step S202: and training the preset vehicle detection model by using the image training sample, and learning the scale and position relation among the vehicle components in the preset vehicle detection model to obtain the trained vehicle detection model containing the component characteristics of the normally visible region of the vehicle and the component characteristics of the region easily blocked by the vehicle.
Specifically, the training process of the preset vehicle detection model is further explained, and as shown in fig. 6, the training process includes:
step S2021: generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix characterizes a feature response vector of one of the image training samples;
specifically, the number of rows of the feature response matrix equals the number of image training samples, and each row of the matrix represents the feature response vector of one image training sample. Each value in a row is a feature response value computed from the Euclidean distance between an image block and a feature prototype and normalized to between 0 and 1; it represents the likelihood that the feature's prototype appears in the image.
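The construction of the feature response matrix can be sketched as follows. The exact normalization from Euclidean distance to a value in (0, 1] is not given in the text; a Gaussian kernel is assumed here for illustration:

```python
import math

def feature_response(block, prototype, sigma=1.0):
    """One entry of the feature response matrix: Euclidean distance
    between an image block and a feature prototype, mapped to (0, 1].
    The Gaussian mapping is an assumed normalization."""
    d = math.sqrt(sum((b - p) ** 2 for b, p in zip(block, prototype)))
    return math.exp(-(d ** 2) / (2.0 * sigma ** 2))

def response_matrix(samples, prototypes, sigma=1.0):
    """Rows = image training samples, columns = feature prototypes,
    as described for step S2021."""
    return [[feature_response(s, p, sigma) for p in prototypes]
            for s in samples]
```

A block identical to the prototype yields the maximum response 1.0, and the response decays toward 0 as the distance grows.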
Step S2022: selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
in this embodiment, as shown in fig. 7, the feature regions with high response values that are shared by all samples are selected from the feature response matrix to form a large feature response region.
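The selection in step S2022 amounts to keeping the columns of the feature response matrix whose response exceeds the preset threshold in every training sample; a minimal sketch (the threshold value is illustrative):

```python
def large_response_region(matrix, threshold=0.5):
    """Return the column indices (features) whose response exceeds
    `threshold` in every row (image training sample), i.e. the
    features the samples commonly share, per step S2022."""
    n_cols = len(matrix[0])
    return [j for j in range(n_cols)
            if all(row[j] > threshold for row in matrix)]
```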
Step S2023: calculating the region scores of all the large feature response regions according to a preset score calculation formula; wherein, the preset score calculation formula is as follows:
wherein B_k represents the k-th large feature response region, rows() represents the positive samples contained in the large feature response region, cols() represents the features contained in the large feature response region, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response region, R_{i,j} represents the feature response value in row i and column j, z_{k,j} represents an independent standard constant corresponding to β_{k,j}, and Score(B_k) represents the score value of the large feature response region;
step S2024: determining a target response region with the score larger than a region score threshold value in all the large feature response regions according to the region score;
it will be appreciated that all the large feature response regions are ranked according to their region scores, and regions whose scores fall below the region score threshold are discarded.
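The ranking and pruning of step S2024 can be sketched as follows (the region identifiers and threshold are illustrative):

```python
def select_target_regions(region_scores, score_threshold):
    """Rank candidate large feature response regions by score and keep
    only those above the region score threshold, per step S2024.
    `region_scores` maps a region identifier to its Score(B_k)."""
    kept = [(k, s) for k, s in region_scores.items() if s > score_threshold]
    return [k for k, _ in sorted(kept, key=lambda ks: ks[1], reverse=True)]
```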
Step S2025: and learning the scale and position relation among the vehicle parts by using all the target response areas.
Further, using the target response regions, the organizational structure of the model is reconstructed through graph compression and shared terminal nodes, and the geometric structure of the component model is obtained from the scale and position relations between each image block and each component through rotation transformation. Referring to fig. 8, fig. 8 shows the topology of the model.
In a specific embodiment of the vehicle detection and tracking method provided by the present invention, a process for detecting all vehicle tracking objects in the current frame to-be-detected region image by using the trained vehicle detection model is further described, as shown in fig. 9, the process specifically includes:
s11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
in this embodiment, before all vehicle tracking objects in the current frame image of the region to be detected are detected with the trained vehicle detection model, the image is converted to grayscale and fast median filtering is applied to the result, so that edges are preserved while noise interference is filtered out.
Specifically, referring to fig. 10, the process of performing graying processing on the image of the current frame to be detected includes:
step S111: determining the format of the current frame to-be-detected region image;
step S112: when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
step S113: when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
wherein, R represents the pixel value of the R channel in the current frame to-be-detected region image, G represents the pixel value of the G channel in the current frame to-be-detected region image, B represents the pixel value of the B channel in the current frame to-be-detected region image, and GrayValue represents the gray value.
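The preset graying formula maps directly to integer-only code; the coefficients 306, 601, and 117 are the standard 0.299/0.587/0.114 luma weights scaled by 1024, so the right shift by 10 bits divides the scale back out:

```python
def gray_value(r, g, b):
    """GrayValue = (306*R + 601*G + 117*B) >> 10, as in the text.
    Avoids floating point: 306 + 601 + 117 = 1024, so white maps
    exactly to 255."""
    return (306 * r + 601 * g + 117 * b) >> 10
```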
In this embodiment, a Gabor filter is first used to filter the current frame image of the region to be detected, yielding edge images with target-direction features. Then, based on the mixed image template in the trained vehicle detection model, filtering is performed by locally changing the position, direction, and scale of the mixed image template corresponding to each component under the model, a score is obtained for each component, and an optimal region detection model is constructed from the component scores. The region detection model is then transformed in position, direction, and scale over the image to obtain the detection score of the current vehicle candidate.
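A Gabor kernel oriented at angle theta can be generated as below; convolving the grayscale frame with a bank of such kernels produces the direction-selective edge responses mentioned above. The wavelength and scale parameters are illustrative, not values fixed by the patent:

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a Gabor kernel for orientation `theta` (radians):
    a Gaussian envelope modulating a cosine wave of wavelength `lam`,
    with aspect ratio `gamma`.  Returned as a size x size list of
    lists (size should be odd so the kernel has a centre)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xp = x * math.cos(theta) + y * math.sin(theta)    # rotated coords
            yp = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                           / (2.0 * sigma * sigma))
            row.append(env * math.cos(2.0 * math.pi * xp / lam))
        kernel.append(row)
    return kernel
```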
S12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
in this embodiment, the candidate with the highest detection score among all vehicle candidates is determined as the optimal vehicle candidate, and it is further judged whether this detection score is greater than the detection score threshold. If so, the method proceeds to step S13; if not, the detection ends.
S13: determining that the optimal vehicle candidate is a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image from which the optimal vehicle candidate is removed as the current frame to-be-detected region image, and entering S11.
It is understood that the optimal vehicle candidate is determined as the vehicle tracking object, and the position, direction and scale of the current candidate are recorded to enable tracking of the current optimal vehicle candidate using the position, direction and scale. And further removing the optimal vehicle candidate from the current frame to-be-detected region image, and iteratively detecting all vehicle tracking objects in the current frame to-be-detected region image.
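The S11-S13 loop is a greedy, iterative detection: take the best-scoring candidate, record it, remove it from the image, and repeat until no candidate beats the threshold. A simplified sketch in which removing the candidate's image region is reduced to removing the candidate itself:

```python
def detect_all_vehicles(candidates, score_threshold):
    """Greedy loop of S11-S13.  `candidates` is a list of
    (score, state) tuples, where state stands in for the recorded
    position/direction/scale.  Returns the states of all detected
    vehicle tracking objects, best first."""
    remaining = list(candidates)
    tracked = []
    while remaining:
        best = max(remaining, key=lambda c: c[0])
        if best[0] <= score_threshold:
            break                     # no candidate passes: detection ends
        tracked.append(best[1])       # record position/direction/scale
        remaining.remove(best)        # remove candidate from the image
    return tracked
```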
In one embodiment of the vehicle detection and tracking method provided by the present invention, the process for determining the detection scores of all the vehicle candidates is further described, as shown in fig. 11, the process specifically includes:
step S401: filtering by changing the position, the direction and the scale of a mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; wherein the component score calculation formula is:
wherein (x_j, y_j, o_j, s_j) represents transforming the template position (x_j, y_j), changing the direction o_j, and transforming the scale s_j of the target vehicle component; τ_{x,y,o,s}(x_j, y_j, o_j, s_j) represents the (x_j, y_j, o_j, s_j) transformation of the corresponding feature in the mixed image template of the target vehicle component; MAX_RESPONSE() represents the local-area maximum feature response value vector; β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response region; z_{k,j} represents an independent standard constant corresponding to β_{k,j}; and SUM_LPAR_k() represents the score of the target vehicle component;
step S402: generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
specifically, all the scores are collected to generate the component score vectors, from which the region detection model is inferred.
Step S403: changing the position, the direction and the scale on the image of the current frame to be detected region through the region detection model, calculating according to a region detection formula to obtain a region detection score, and determining the detection score of the current vehicle candidate; wherein the region detection formula is:
wherein the first symbol denotes the region detection model, r_g represents the score vector of the g-th target vehicle component, and SUM_DETECT() represents the region detection score.
In this embodiment, the position, the direction, and the scale of the region detection model are transformed on the current frame to-be-detected region image, so as to obtain a region detection score. And further calculating a global highest score according to the region detection score to obtain the detection score of the current vehicle candidate.
In an embodiment of the vehicle detection and tracking method provided by the present invention, a process for determining the preset hue value range is further described, and as shown in fig. 12, the process specifically includes:
step S501: determining the tone value of the normally visible area of the vehicle and the tone value of the easily sheltered area of the vehicle in the image of the vehicle to be tracked;
specifically, the present embodiment performs histogram statistics of hue values on the vehicle normally visible region and the vehicle easy-to-occlude region, and takes a peak value of the histogram as a hue value of the current region.
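The histogram-peak computation of step S501 can be sketched as follows; the 10-degree bin width is an assumption, not a value fixed by the patent:

```python
def region_hue(hues, n_bins=36):
    """Hue value of a region, per step S501: build a histogram of the
    pixels' hue values (0-360 degrees) and return the centre of the
    peak bin."""
    width = 360.0 / n_bins
    counts = [0] * n_bins
    for h in hues:
        counts[min(int(h / width), n_bins - 1)] += 1
    peak = max(range(n_bins), key=lambda i: counts[i])
    return (peak + 0.5) * width
```

Applied to the normally visible region and the easily-sheltered region in turn, this yields the hue values T1 and T2 used in step S502.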
Step S502: determining a preset hue value range corresponding to each area according to a range determination formula by using the hue value of the normally visible area of the vehicle and the hue value of the easily-sheltered area of the vehicle; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
where T1 denotes a hue value of the vehicle normally visible region, T2 denotes a hue value of the vehicle easy-to-shield region, D1 denotes a preset hue value range of the vehicle normally visible region, and D2 denotes a preset hue value range of the vehicle easy-to-shield region.
In the following, the vehicle detecting and tracking device provided by the embodiment of the present invention is described, and the vehicle detecting and tracking device described below and the vehicle detecting and tracking method described above may be referred to correspondingly.
Fig. 13 is a block diagram of a vehicle detecting and tracking device according to an embodiment of the present invention, and referring to fig. 13, the vehicle detecting and tracking device may include:
the first extraction module 100 is configured to determine a vehicle image to be tracked, and extract a first feature descriptor of the vehicle image to be tracked;
the first detection module 200 is configured to acquire images of a to-be-detected region arranged in a time sequence, and detect all vehicle tracking objects in the current frame image of the to-be-detected region by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
the second extraction module 300 is configured to extract a second feature descriptor of an image region corresponding to a target vehicle tracking object from among all the vehicle tracking objects;
a feature matching module 400, configured to perform locality sensitive hash matching on the first feature descriptor and the second feature descriptor, and purify the matched feature descriptors;
a number judgment module 500, configured to determine the number of the purified feature descriptors and judge whether the number is greater than a preset threshold;
the vehicle tracking module 600 is configured to, if the number is greater than a preset threshold, perform tracking by using a position, a direction, and a scale of the target vehicle tracking object when a hue value of an area in an image area corresponding to the target vehicle tracking object is within a preset hue value range;
the second detecting module 700 is configured to detect all vehicle tracking objects in the next frame of image of the area to be detected if the number is smaller than the preset threshold.
The vehicle detecting and tracking device of this embodiment is used to implement the vehicle detecting and tracking method, so the specific implementation of the vehicle detecting and tracking device can be found in the foregoing embodiment of the vehicle detecting and tracking method, and will not be described again here.
Further, the embodiment of the present invention also discloses a vehicle detection and tracking device, which includes a processor 11 and a memory 12, wherein the memory 12 is used for storing a computer program, and the processor 11 is used for implementing the steps of the vehicle detection and tracking method disclosed above when executing the computer program.
The process by which the vehicle detecting and tracking device detects and tracks a vehicle in this embodiment is shown in fig. 14. Training of the component model is first completed before detection. The vehicle image to be tracked is acquired; the region image to be detected is preprocessed and input into the component model for detection; ORB feature descriptors of the vehicle image to be tracked and of the region image to be detected are extracted; local sensitive Hash matching is then performed, and the matched descriptors are purified. It is judged whether the number of purified feature descriptors reaches the threshold; if so, the region hue values of the vehicle image to be tracked and of the detected region image are compared, and if they agree, tracking succeeds and detection continues on the next frame of image.
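Since ORB descriptors are binary strings, the local sensitive Hash matching can be sketched with random bit sampling: each hash table keys descriptors by a random subset of bit positions, and descriptors that collide in any table become candidate matches. This is a minimal pure-Python sketch (function name and parameters are illustrative); a production matcher such as OpenCV's FLANN LSH index additionally uses multi-probe lookup and Hamming-distance verification:

```python
import random

def lsh_match(desc_a, desc_b, n_bits=16, n_tables=4, seed=0):
    """Candidate matches between two sets of binary descriptors via
    locality-sensitive hashing: hash each descriptor in every table by
    a random subset of `n_bits` bit positions, and report index pairs
    (i, j) that collide in at least one table."""
    rng = random.Random(seed)
    n = len(desc_a[0])
    matches = set()
    for _ in range(n_tables):
        bits = rng.sample(range(n), n_bits)   # this table's hash key bits
        table = {}
        for i, d in enumerate(desc_a):
            table.setdefault(tuple(d[b] for b in bits), []).append(i)
        for j, d in enumerate(desc_b):
            for i in table.get(tuple(d[b] for b in bits), []):
                matches.add((i, j))
    return sorted(matches)
```

The resulting candidate pairs would then be purified (e.g. by the consistency constraint of adjacent feature points mentioned in claim 11) before counting them against the threshold.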
Further, referring to fig. 15, the vehicle detecting and tracking device in the present embodiment may further include:
the input interface 13 is configured to obtain a computer program imported from the outside, store the obtained computer program in the memory 12, and also be configured to obtain various instructions and parameters transmitted by an external terminal device, and transmit the instructions and parameters to the processor 11, so that the processor 11 performs corresponding processing by using the instructions and parameters. In this embodiment, the input interface 13 may specifically include, but is not limited to, a USB interface, a serial interface, a voice input interface, a fingerprint input interface, a hard disk reading interface, and the like.
And an output interface 14, configured to output various data generated by the processor 11 to a terminal device connected thereto, so that other terminal devices connected to the output interface 14 can acquire various data generated by the processor 11. In this embodiment, the output interface 14 may specifically include, but is not limited to, a USB interface, a serial interface, and the like.
And a display unit 15 for displaying the data sent by the processor 11.
The communication unit 16 is configured to establish a remote communication connection with an external server, acquire data sent by an external terminal, and send the data to the processor 11 for processing and analysis, and in addition, the processor 11 may also send various results obtained after processing to preset various data receiving ends through the communication unit 16. In this embodiment, the communication technology adopted by the communication unit 16 may be a wired communication technology or a wireless communication technology, such as a Universal Serial Bus (USB), a wireless fidelity technology (WiFi), a bluetooth communication technology, a low-power bluetooth communication technology (BLE), and the like. Additionally, the communication unit 16 may embody a cellular radio transceiver operating in accordance with wideband code division multiple access (W-CDMA), Long Term Evolution (LTE), and similar standards.
Further, the embodiment of the present invention also discloses a computer readable storage medium, which is used for storing a computer program, and the computer program is executed by a processor to execute the steps of the vehicle detection and tracking method disclosed in the foregoing.
According to the method, the model containing the common visible area and the easily-shielded area is used for detecting the image of the area to be detected, and the vehicle to be tracked in the area to be detected is further detected through local sensitive Hash matching, so that vehicle tracking is realized, the problem of vehicle detection omission caused by shielding is effectively avoided, and the tracking accuracy is improved.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The vehicle detecting and tracking method, device, equipment and storage medium provided by the invention are described in detail above, and the principle and the implementation of the invention are explained in the present document by applying specific examples, and the description of the above examples is only used to help understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. A vehicle detection tracking method, comprising:
determining a vehicle image to be tracked, and extracting a first feature descriptor of the vehicle image to be tracked;
acquiring images of a region to be detected which are arranged according to a time sequence, and detecting all vehicle tracking objects in the image of the current region to be detected by using a trained vehicle detection model; the trained vehicle detection model is a model containing the component characteristics of a normally visible region of the vehicle and the component characteristics of an easily-sheltered region of the vehicle;
extracting a second feature descriptor of an image area corresponding to a target vehicle tracking object in all the vehicle tracking objects;
carrying out local sensitive Hash matching on the first feature descriptor and the second feature descriptor, and purifying the matched feature descriptors;
determining the number of the purified feature descriptors, and judging whether the number is greater than a preset threshold value;
if so, when the regional tone value in the image region corresponding to the target vehicle tracking object is within a preset tone value range, tracking by using the position, the direction and the scale of the target vehicle tracking object;
and if not, detecting all vehicle tracking objects in the next frame of image of the area to be detected.
2. The vehicle detection and tracking method according to claim 1, wherein the determining an image of the vehicle to be tracked comprises:
and selecting the image of the vehicle to be tracked from a database of vehicles to be tracked.
3. The vehicle detection and tracking method according to claim 1, wherein the acquiring images of the region to be detected arranged in time sequence comprises:
and acquiring a video to be processed, and sampling according to a time sequence to acquire the image of the area to be detected.
4. The vehicle detection and tracking method according to claim 1, further comprising:
acquiring a preset vehicle detection model;
and training the preset vehicle detection model by using the image training sample, and learning the scale and position relation among the vehicle components in the preset vehicle detection model to obtain the trained vehicle detection model containing the component characteristics of the normally visible region of the vehicle and the component characteristics of the region easily blocked by the vehicle.
5. The vehicle detection and tracking method according to claim 4, wherein the obtaining of the preset vehicle detection model comprises:
dividing a vehicle object into a vehicle normally visible area and a vehicle easily-sheltered area;
and constructing a preset vehicle detection model comprising a vehicle normally visible region component and a vehicle easily-sheltered region component by utilizing a mixed image template comprising different types of features.
6. The vehicle detection and tracking method according to claim 4, wherein the training of the preset vehicle detection model by using the image training samples and the learning of the scale and position relationship between vehicle components in the preset vehicle detection model comprises:
generating a characteristic response matrix by using the image training sample; wherein each row in the feature response matrix characterizes a feature response vector of one of the image training samples;
selecting the features of which the response values in all the image training samples in the feature response matrix are larger than a preset threshold value to obtain a large feature response area;
calculating the region scores of all the large feature response regions according to a preset score calculation formula; wherein, the preset score calculation formula is as follows:
wherein B_k represents the k-th large feature response region, rows() represents the positive samples contained in the large feature response region, cols() represents the features contained in the large feature response region, β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response region, R_{i,j} represents the feature response value in row i and column j, z_{k,j} represents an independent standard constant corresponding to β_{k,j}, and Score(B_k) represents the score value of the large feature response region;
determining a target response region with the score larger than a region score threshold value in all the large feature response regions according to the region score;
and learning the scale and position relation among the vehicle parts by using all the target response areas.
7. The vehicle detection and tracking method according to claim 1, wherein the detecting all vehicle tracking objects in the current frame image of the area to be detected by using the trained vehicle detection model comprises:
s11: performing template matching on the current frame to-be-detected region image based on a mixed image template in the trained vehicle detection model to obtain all vehicle candidates in the current frame to-be-detected region image, and determining detection scores of all the vehicle candidates;
s12: determining the optimal vehicle candidate with the highest detection score in all the vehicle candidates, judging whether the detection score of the optimal vehicle candidate is larger than a detection score threshold value, and if so, entering S13; if not, the detection is finished;
s13: determining that the optimal vehicle candidate is a vehicle tracking object, recording the position, the direction and the scale of the optimal vehicle candidate, removing the optimal vehicle candidate from the current frame to-be-detected region image, taking the image from which the optimal vehicle candidate is removed as the current frame to-be-detected region image, and entering S11.
8. The vehicle detection tracking method of claim 7, wherein the determining the detection scores for all vehicle candidates comprises:
filtering by changing the position, the direction and the scale of a mixed image template corresponding to a target vehicle component in the trained vehicle detection model, and obtaining all scores of the target vehicle component according to a component score calculation formula; wherein the component score calculation formula is:
wherein (x_j, y_j, o_j, s_j) represents transforming the template position (x_j, y_j), changing the direction o_j, and transforming the scale s_j of the target vehicle component; τ_{x,y,o,s}(x_j, y_j, o_j, s_j) represents the (x_j, y_j, o_j, s_j) transformation of the corresponding feature in the mixed image template of the target vehicle component; MAX_RESPONSE() represents the local-area maximum feature response value vector; β_{k,j} represents the weight corresponding to primitive j of the mixed image template in the k-th large feature response region; z_{k,j} represents an independent standard constant corresponding to β_{k,j}; and SUM_LPAR_k() represents the score of the target vehicle component;
generating component score vectors by using all the scores, and constructing a region detection model by using all the component score vectors;
changing the position, the direction and the scale on the image of the current frame to be detected region through the region detection model, calculating according to a region detection formula to obtain a region detection score, and determining the detection score of the current vehicle candidate; wherein the region detection formula is:
wherein the first symbol denotes the region detection model, r_g represents the score vector of the g-th target vehicle component, and SUM_DETECT() represents the region detection score.
9. The vehicle detecting and tracking method according to claim 1, wherein after detecting all vehicle tracking objects in the current frame image of the region to be detected by using the trained vehicle detection model, the method further comprises:
and carrying out gray processing on the image of the current frame to be detected region, and carrying out rapid median filtering on the processed image.
10. The vehicle detecting and tracking method according to claim 9, wherein the graying the image of the current frame to be detected includes:
determining the format of the current frame to-be-detected region image;
when the current frame to-be-detected region image is in a YUV format, directly extracting a value of a Y component in the current frame to-be-detected region image, and taking the value of the Y component as a gray value;
when the image of the current frame to-be-detected area is in an RGB format, calculating the gray values of all pixels in the image of the current frame to-be-detected area by using a preset gray formula; wherein the preset graying formula is as follows:
GrayValue=(306×R+601×G+117×B)>>10;
wherein, R represents the pixel value of the R channel in the current frame to-be-detected region image, G represents the pixel value of the G channel in the current frame to-be-detected region image, B represents the pixel value of the B channel in the current frame to-be-detected region image, and GrayValue represents the gray value.
11. The vehicle detection and tracking method of claim 1, wherein the refining the matched feature descriptors comprises:
and purifying the matched feature descriptors by using consistency constraint of adjacent feature points.
12. The vehicle detection and tracking method according to any one of claims 1 to 11, characterized by further comprising:
determining the tone value of the normally visible area of the vehicle and the tone value of the easily sheltered area of the vehicle in the image of the vehicle to be tracked;
determining a preset hue value range corresponding to each area according to a range determination formula by using the hue value of the normally visible area of the vehicle and the hue value of the easily-sheltered area of the vehicle; wherein the range determination formula is:
D1∈[T1-10°,T1+10°],
D2∈[T2-10°,T2+10°];
where T1 denotes the hue value of the normally visible region of the vehicle, T2 denotes the hue value of the easily-occluded region of the vehicle, D1 denotes the preset hue value range of the normally visible region of the vehicle, and D2 denotes the preset hue value range of the easily-occluded region of the vehicle.
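The ±10° intervals of claim 12 can be checked with a single membership test. The modular distance used below is an implementation assumption not stated in the patent: it handles wrap-around on the hue circle (e.g. a reference hue of 5° should also match a measured hue of 358°), which a plain interval test would miss:

```python
def in_hue_range(hue, t, tol=10.0):
    """Return True when `hue` lies in [t - tol, t + tol] on the
    0-360 degree hue circle, i.e. the interval D in claim 12 with
    tol = 10 degrees by default. Distances are taken modulo 360 so
    that the interval wraps correctly across 0/360 degrees.
    """
    diff = abs(hue - t) % 360.0
    return min(diff, 360.0 - diff) <= tol
```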
13. A vehicle detection and tracking device, comprising:
the first extraction module, which is used for determining a vehicle image to be tracked and extracting a first feature descriptor of the vehicle image to be tracked;
the first detection module, which is used for acquiring to-be-detected region images arranged in time sequence and detecting all vehicle tracking objects in the current frame to-be-detected region image by using a trained vehicle detection model; wherein the trained vehicle detection model is a model containing the component features of the normally visible region of the vehicle and the component features of the easily-occluded region of the vehicle;
the second extraction module, which is used for extracting a second feature descriptor of the image region corresponding to a target vehicle tracking object among all the vehicle tracking objects;
the feature matching module, which is used for performing locality-sensitive hashing (LSH) matching on the first feature descriptor and the second feature descriptor and refining the matched feature descriptors;
the number judgment module, which is used for determining the number of refined feature descriptors and judging whether the number is greater than a preset threshold value;
the vehicle tracking module, which is used for, if the number is greater than the preset threshold value, tracking by using the position, direction and scale of the target vehicle tracking object when the region hue value in the image region corresponding to the target vehicle tracking object is within the preset hue value range;
and the second detection module, which is used for detecting all vehicle tracking objects in the next frame to-be-detected region image if the number is less than the preset threshold value.
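The module pipeline of claim 13 reduces to one per-frame branch on the refined-match count and the hue check. A sketch, with the names and the hue-failure fallback being illustrative assumptions (the claim itself only specifies the track branch for "greater than" and the re-detect branch for "less than"):

```python
def tracking_decision(refined_count, threshold, hue_in_range):
    """Per-frame decision of the number-judgment branch in claim 13.

    - enough refined feature matches AND the region hue check passes:
      keep tracking the current target ("track");
    - otherwise (too few matches, or hue outside the preset range,
      which suggests the easily-occluded region is blocked): fall back
      to full detection on the next frame ("detect").
    """
    if refined_count > threshold and hue_in_range:
        return "track"
    return "detect"
```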
14. A vehicle detection and tracking device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the vehicle detection and tracking method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium for storing a computer program which, when executed by a processor, performs the steps of the vehicle detection and tracking method according to any one of claims 1 to 12.
CN201811399560.3A 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium Active CN109543610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811399560.3A CN109543610B (en) 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811399560.3A CN109543610B (en) 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109543610A true CN109543610A (en) 2019-03-29
CN109543610B CN109543610B (en) 2022-11-11

Family

ID=65850184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811399560.3A Active CN109543610B (en) 2018-11-22 2018-11-22 Vehicle detection tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109543610B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784794A (en) * 2021-01-29 2021-05-11 深圳市捷顺科技实业股份有限公司 Vehicle parking state detection method and device, electronic equipment and storage medium
CN116758494A (en) * 2023-08-23 2023-09-15 深圳市科灵通科技有限公司 Intelligent monitoring method and system for vehicle-mounted video of internet-connected vehicle

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159063A (en) * 2007-11-13 2008-04-09 上海龙东光电子有限公司 Hyper complex crosscorrelation and target centre distance weighting combined tracking algorithm
JP2010039617A (en) * 2008-08-01 2010-02-18 Toyota Central R&D Labs Inc Object tracking device and program
CN101800890A (en) * 2010-04-08 2010-08-11 北京航空航天大学 Multiple vehicle video tracking method in expressway monitoring scene
US20110026770A1 (en) * 2009-07-31 2011-02-03 Jonathan David Brookshire Person Following Using Histograms of Oriented Gradients
CN102867411A (en) * 2012-09-21 2013-01-09 博康智能网络科技股份有限公司 Taxi dispatching method and taxi dispatching system on basis of video monitoring system
CN102867416A (en) * 2012-09-13 2013-01-09 中国科学院自动化研究所 Vehicle part feature-based vehicle detection and tracking method
JP2013171319A (en) * 2012-02-17 2013-09-02 Toshiba Corp Vehicle state detection device, vehicle behavior detection device and vehicle state detection method
CN103456030A (en) * 2013-09-08 2013-12-18 西安电子科技大学 Target tracking method based on scattering descriptor
CN104463238A (en) * 2014-12-19 2015-03-25 深圳市捷顺科技实业股份有限公司 License plate recognition method and system
US20150310624A1 (en) * 2014-04-24 2015-10-29 Xerox Corporation Method and system for partial occlusion handling in vehicle tracking using deformable parts model
CN105354857A (en) * 2015-12-07 2016-02-24 北京航空航天大学 Matching method for vehicle track shielded by overpass
CN105844669A (en) * 2016-03-28 2016-08-10 华中科技大学 Video target real-time tracking method based on partial Hash features
CN107066968A (en) * 2017-04-12 2017-08-18 湖南源信光电科技股份有限公司 The vehicle-mounted pedestrian detection method of convergence strategy based on target recognition and tracking
CN108765452A (en) * 2018-05-11 2018-11-06 西安天和防务技术股份有限公司 A kind of detection of mobile target in complex background and tracking


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LU WANG: "Combining Color Histogram and ORB Features for Robust Visual Tracking", 《2012 8TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION (ICNC 2012)》 *
MENG FANQING: "A Tracking Algorithm Based on ORB", 《2013 INTERNATIONAL CONFERENCE ON MECHATRONIC SCIENCES, ELECTRIC ENGINEERING AND COMPUTER (MEC)》 *
SENIT_CO: "图像特征描述子之ORB", 《HTTPS://BLOG.CSDN.NET/ZACHARY_CO/ARTICLE/DETAILS/78872012》 *
林春丽等: "改进的ORB算法在有遮挡的车辆跟踪上的应用", 《计算机工程与设计》 *
郝志成: "复杂背景下目标的快速提取与跟踪", 《吉林大学学报(工学版)》 *
高建哲等: "基于ORB特征点的多目标匹配", 《机电工程技术》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784794A (en) * 2021-01-29 2021-05-11 深圳市捷顺科技实业股份有限公司 Vehicle parking state detection method and device, electronic equipment and storage medium
CN112784794B (en) * 2021-01-29 2024-02-02 深圳市捷顺科技实业股份有限公司 Vehicle parking state detection method and device, electronic equipment and storage medium
CN116758494A (en) * 2023-08-23 2023-09-15 深圳市科灵通科技有限公司 Intelligent monitoring method and system for vehicle-mounted video of internet-connected vehicle
CN116758494B (en) * 2023-08-23 2023-12-22 深圳市科灵通科技有限公司 Intelligent monitoring method and system for vehicle-mounted video of internet-connected vehicle

Also Published As

Publication number Publication date
CN109543610B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN110807385B (en) Target detection method, target detection device, electronic equipment and storage medium
CN112085952B (en) Method and device for monitoring vehicle data, computer equipment and storage medium
EP3410351B1 (en) Learning program, learning method, and object detection device
CN108846835B (en) Image change detection method based on depth separable convolutional network
CN109033950B (en) Vehicle illegal parking detection method based on multi-feature fusion cascade depth model
CN108268867B (en) License plate positioning method and device
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110502982A (en) The method, apparatus and computer equipment of barrier in a kind of detection highway
CN110826484A (en) Vehicle weight recognition method and device, computer equipment and model training method
CN112116556B (en) Passenger flow volume statistics method and device and computer equipment
CN108710841B (en) Human face living body detection device and method based on MEMs infrared array sensor
CN112784724A (en) Vehicle lane change detection method, device, equipment and storage medium
CN109543647A (en) A kind of road abnormality recognition method, device, equipment and medium
CN109543610B (en) Vehicle detection tracking method, device, equipment and storage medium
CN111627057A (en) Distance measuring method and device and server
CN111753642B (en) Method and device for determining key frame
CN108304852B (en) Method and device for determining road section type, storage medium and electronic device
CN114005105B (en) Driving behavior detection method and device and electronic equipment
CN108710828A (en) The method, apparatus and storage medium and vehicle of identification object
CN111582278B (en) Portrait segmentation method and device and electronic equipment
CN110765940B (en) Target object statistical method and device
CN115083008A (en) Moving object detection method, device, equipment and storage medium
CN113888740A (en) Method and device for determining binding relationship between target license plate frame and target vehicle frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant