CN110807791A - Night vehicle target tracking method and device - Google Patents
- Publication number
- CN110807791A CN110807791A CN201911054028.2A CN201911054028A CN110807791A CN 110807791 A CN110807791 A CN 110807791A CN 201911054028 A CN201911054028 A CN 201911054028A CN 110807791 A CN110807791 A CN 110807791A
- Authority
- CN
- China
- Prior art keywords
- car light pair
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a method and a device for tracking a vehicle target at night. The method comprises: establishing a back projection surface at a preset position; acquiring the back projection map projected onto that surface; finding local maxima of the back projection map with a search window to determine candidate car light pair regions; determining the car light pair within a region through a preset car light pair Gaussian mixture model and marking it with an image frame; establishing a rectangular window whose centre is the determined car light pair coordinates shifted by a calculated preset offset between two adjacent frames, and whose size is the length and width of the image frame; and searching for the car light pair within the rectangular window, vehicle tracking being judged successful once the found pair is determined to be the same as the pair of the previous frame. The back projection surface recovers the three-dimensional spatial geometry of the car light pair, the preset Gaussian mixture model marks the pair accurately, and the pair is then tracked and judged according to the marking, thereby realizing the detection and tracking of vehicles at night.
Description
Technical Field
The embodiment of the invention relates to the technical field of vehicle identification, in particular to a method and a device for tracking a vehicle target at night.
Background
With continuing urbanization, intelligent transportation is developing rapidly. Target recognition is an important component of computer vision; it drives the development of vehicle detection and recognition systems and has important practical significance.
As the informatization level of intelligent traffic management improves, off-site law enforcement systems have become a research and development focus at home and abroad. Such a system must, based on the data recorded by technical monitoring equipment, lawfully penalize the parties responsible for illegal and over-limit transport vehicles. When identifying and detecting vehicles at night, accurately locating a vehicle and then tracking it is a key step.
At night the light is dim, the vehicle body has low visibility, and headlight halos make the vehicle easy to confuse with the background. How to detect and track vehicles at night is therefore a technical problem urgently awaiting a solution from those skilled in the art.
Disclosure of Invention
The embodiment of the invention provides a method and a device for tracking a night vehicle target, which realize the detection and tracking of vehicles at night.
In view of the above, a first aspect of the present invention provides a method for tracking a vehicle target at night, the method comprising:
establishing a back projection surface at a preset position;
acquiring an inverse projection image projected on the inverse projection surface;
searching a local maximum value of the inverse projection diagram by utilizing a search window principle, and determining a car light pair area;
determining the car light pairs in the car light pair area through a preset car light pair Gaussian mixture model, and marking by using an image frame;
establishing a rectangular window which takes the determined coordinates of the car light pair and the calculated preset offset of two adjacent frames as a center and takes the length and the width of the image frame as the size;
and searching the car lamp pair in the rectangular window, and determining that the vehicle tracking is successful after judging that the searched car lamp pair is the same as the car lamp pair in the previous frame.
Optionally, after determining the car light pair region, the method further includes:
preliminarily marking the positions of the car light pairs in the car light pair area using a flood fill algorithm.
Optionally, before the establishing the back projection plane at the preset position, the method further includes:
and carrying out statistical probability modeling on the acquired horizontal distance of the car light pairs and the height of the car light pairs relative to the road surface to obtain a preset car light pair Gaussian mixture model.
Optionally, the preset offset is specifically an upper limit of the road speed limit divided by the video frame rate.
Optionally, after determining that the searched headlight pair is the same as the headlight pair of the previous frame, determining that the vehicle tracking is successful specifically includes:
calculating a first average area and a first circularity of the searched car lamp pair;
calculating a second average area and a second circularity of the pair of vehicle lights of a previous frame;
and judging whether a first difference value of the first average area and the second average area is within a first preset range or not, and whether a second difference value of the first circularity and the second circularity is within a second preset range or not, if so, judging that the searched car light pair is the same as the car light pair of the previous frame, determining that the car is successfully tracked, otherwise, judging that the searched car light pair is different from the car light pair of the previous frame, and returning to reestablish the rectangular window.
A second aspect of the present invention provides a night vehicle target tracking apparatus, the apparatus comprising:
the establishing unit is used for establishing a back projection surface at a preset position;
the acquisition unit is used for acquiring an inverse projection image projected on the inverse projection surface;
the area positioning unit is used for searching the local maximum value of the inverse projection diagram by utilizing a search window principle and determining a car light pair area;
the car light pair positioning unit is used for determining the car light pair in the car light pair area through a preset car light pair Gaussian mixture model and marking it with an image frame;
the window establishing unit is used for establishing a rectangular window which takes the determined coordinates of the car light pair and the calculated preset offset of two adjacent frames as the center and takes the length and the width of the image frame as the size;
and the tracking unit is used for searching the car light pair in the rectangular window and determining that the vehicle tracking is successful after the searched car light pair is judged to be the same as the car light pair of the previous frame.
Optionally, the method further includes:
and the preliminary positioning unit is used for preliminarily marking the positions of the car light pairs in the car light pair area by using a flood filling algorithm.
Optionally, the method further includes:
and the modeling unit is used for carrying out statistical probability modeling on the acquired horizontal distance of the car light pairs and the height of the car light pairs relative to the road surface to obtain a preset car light pair Gaussian mixture model.
Optionally, the preset offset is specifically an upper limit of the road speed limit divided by the video frame rate.
Optionally, the tracking unit is specifically configured to:
calculating a first average area and a first circularity of the searched car lamp pair;
calculating a second average area and a second circularity of the pair of vehicle lights of a previous frame;
and judging whether a first difference value of the first average area and the second average area is within a first preset range or not, and whether a second difference value of the first circularity and the second circularity is within a second preset range or not, if so, judging that the searched car light pair is the same as the car light pair of the previous frame, determining that the car is successfully tracked, otherwise, judging that the searched car light pair is different from the car light pair of the previous frame, and returning to reestablish the rectangular window.
According to the technical scheme, the embodiment of the invention has the following advantages:
the embodiment of the invention provides a night vehicle target tracking method, which is characterized in that three-dimensional space geometric information of a vehicle lamp pair is recovered by using a back projection plane, a preset vehicle lamp pair Gaussian mixture model is used for realizing accurate marking of the vehicle lamp pair, and finally, the vehicle lamp pair is tracked and judged according to the marking condition, so that the detection and tracking of night vehicles are realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of a method for night vehicle target tracking in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of another method of night vehicle target tracking in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a night vehicle target tracking device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention designs a night vehicle target tracking method and device, which realize the detection and tracking of vehicles at night.
For convenience of understanding, please refer to fig. 1, in which fig. 1 is a flowchart illustrating a method for tracking a target of a vehicle at night according to an embodiment of the present invention, as shown in fig. 1, specifically, the method includes:
101. establishing a back projection surface at a preset position;
It should be noted that a back projection surface must first be determined in a calibrated traffic scene; the designed surface is chosen according to the characteristics and spatial position of the car light pairs to be detected at night. Depending on the scene, the back projection surface can be set parallel to the road surface, perpendicular to it, or at an angle to it.
102. Acquiring an inverse projection image projected on the inverse projection surface;
It should be noted that each pixel of the back projection map copies the information of one grid cell of the back projection surface. Because each grid cell on the surface corresponds to a real size in three-dimensional space, each pixel of the two-dimensional back projection map likewise carries real-size, metric information.
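The sampling described above can be sketched in Python. This is a minimal illustration, not the patent's implementation: it assumes the plane-to-image mapping is given as a 3x3 homography `H`, uses nearest-neighbour sampling, and all names and parameters are hypothetical.

```python
import numpy as np

def inverse_projection_map(frame, H, x_range, y_range, cell=0.1):
    """Sample `frame` on a metric grid of the back projection plane.

    H maps plane coordinates (metres) to image pixels. Each cell of the
    output corresponds to a `cell` x `cell` patch of the real plane, so
    pixel distances in the map are metric distances.
    """
    xs = np.arange(x_range[0], x_range[1], cell)
    ys = np.arange(y_range[0], y_range[1], cell)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx, gy, np.ones_like(gx)], axis=-1) @ H.T  # project grid
    px = (pts[..., 0] / pts[..., 2]).round().astype(int)
    py = (pts[..., 1] / pts[..., 2]).round().astype(int)
    h, w = frame.shape[:2]
    valid = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    out = np.zeros(gx.shape, dtype=frame.dtype)
    out[valid] = frame[py[valid], px[valid]]  # copy grid-cell information
    return out
```

With an identity homography the map simply crops the frame, which makes the cell-to-pixel copying easy to verify.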
103. Searching a local maximum value of the inverse projection diagram by utilizing a search window principle, and determining a car light pair area;
It should be noted that, because the light at night is too weak to segment the vehicle body reliably, the vehicle is roughly positioned using the distinctive features of its headlights. Since at night a headlight pixel takes the maximum value relative to the other pixels in its neighbourhood, the local maxima of the back projection map are found with a search window, which determines the candidate car light pair regions. It should be understood that a located region may contain not only the position of the car light pair but also reflection regions with relatively large brightness values or other interfering non-lamp areas.
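The local-maximum search can be sketched as a brute-force sliding window. The window size, brightness threshold, and function name below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def local_maxima(gray, win=7, thresh=200):
    """Return (row, col) candidates where a pixel is the maximum of its
    win x win neighbourhood and brighter than `thresh` (headlights are
    assumed to be the brightest blobs in the night-time map)."""
    r = win // 2
    pad = np.pad(gray.astype(int), r, mode='constant', constant_values=-1)
    peaks = []
    rows, cols = gray.shape
    for i in range(rows):
        for j in range(cols):
            patch = pad[i:i + win, j:j + win]   # neighbourhood around (i, j)
            if gray[i, j] >= thresh and gray[i, j] == patch.max():
                peaks.append((i, j))
    return peaks
```

A dimmer blob inside the window of a brighter one is suppressed, which is the behaviour wanted for halo-ringed headlights.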
104. Determining the car light pairs in the car light pair area through a preset car light pair Gaussian mixture model, and marking by using an image frame;
After the car light pair region is determined, the spatial geometry of a vehicle's lamp pair provides some obvious features, such as height, width, the horizontal and vertical spacing of the two lamps, and area similarity. The preset car light pair Gaussian mixture model is used to search the determined region for a car light pair target, which is then marked with an image frame.
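The role of the preset mixture model can be illustrated with a density check over a (horizontal distance, height) feature pair. This is a hedged sketch with a hand-written diagonal-covariance mixture; the threshold and all names are hypothetical:

```python
import numpy as np

def mixture_density(x, weights, means, covs):
    """Density of a diagonal-covariance Gaussian mixture at point x."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for w, mu, var in zip(weights, means, covs):
        mu, var = np.asarray(mu, float), np.asarray(var, float)
        norm = 1.0 / np.sqrt((2 * np.pi) ** len(x) * np.prod(var))
        total += w * norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var))
    return total

def is_lamp_pair(distance_m, height_m, gmm, thresh=1e-3):
    """Accept a candidate pair when the preset mixture assigns its
    (horizontal distance, height) feature enough density."""
    return mixture_density([distance_m, height_m], *gmm) >= thresh
```

A single component centred near typical headlight geometry (e.g. 1.6 m apart, 0.7 m high) accepts plausible pairs and rejects implausible ones.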
105. Establishing a rectangular window which takes the determined coordinates of the car light pair and the calculated preset offset of two adjacent frames as a center and takes the length and the width of the image frame as the size;
It should be noted that, after the car light pair is determined, it is used to track the vehicle: a rectangular window is established whose centre is the determined car light pair coordinates shifted by the calculated preset offset between two adjacent frames, and whose size is the length and width of the image frame.
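The window placement, together with the preset offset defined earlier (road speed limit divided by video frame rate), can be sketched as follows. Because pixels of the back projection map are metric, a per-frame travel in metres converts directly to map cells; the assumed motion direction (+y, along the lane) and all names are illustrative assumptions:

```python
def search_window(lamp_xy, speed_limit_kmh, fps, box_w, box_h, cell=0.1):
    """Centre of the next-frame search window: previous lamp-pair position
    shifted by the maximum per-frame travel (speed limit / frame rate),
    expressed in map cells of `cell` metres. Returns (x0, y0, x1, y1)."""
    max_step_m = (speed_limit_kmh / 3.6) / fps   # metres travelled per frame
    step_px = max_step_m / cell                  # map cells per frame
    cx, cy = lamp_xy[0], lamp_xy[1] + step_px    # assume motion along +y
    return (cx - box_w / 2, cy - box_h / 2, cx + box_w / 2, cy + box_h / 2)
```

For example, a 72 km/h limit at 20 fps gives 1 m (10 cells at 0.1 m/cell) of allowed motion per frame.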
106. Searching the car light pair in the rectangular window, and determining that the vehicle tracking is successful after judging that the searched car light pair is the same as the car light pair in the previous frame;
It should be noted that, for target tracking of a car light pair, a same-identity judgment must be made between consecutive video frames. When a car light pair is found within the rectangular window of the next frame, vehicle tracking can be determined successful only after the found pair is judged to be the same as the pair of the previous frame; otherwise the target is lost.
The embodiment of the invention provides a night vehicle target tracking method, which is characterized in that three-dimensional space geometric information of a vehicle lamp pair is recovered by using a back projection plane, a preset vehicle lamp pair Gaussian mixture model is used for realizing accurate marking of the vehicle lamp pair, and finally, the vehicle lamp pair is tracked and judged according to the marking condition, so that the detection and tracking of night vehicles are realized.
Referring to fig. 2, fig. 2 is another flowchart of a method for tracking a target of a vehicle at night according to an embodiment of the present invention, and as shown in fig. 2, the method specifically includes:
201. carrying out statistical probability modeling on the acquired horizontal distance of the car light pairs and the height of the car light pairs relative to the road surface to obtain a preset car light pair Gaussian mixture model;
It should be noted that, from a large number of collected car light pair samples, statistical probability modeling is performed on the horizontal distance between the two lamps of a pair and on their height relative to the road surface, which yields the preset car light pair Gaussian mixture model. The probability that the detected feature variables of a candidate pair fit the constructed mixture model is taken as the basis for judging that the two lamps belong to the same vehicle; only candidate pairs whose probability exceeds a preset threshold are marked as a car light pair.
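A minimal sketch of building such a model from samples, simplified to a single diagonal Gaussian component — real multi-class data (cars, trucks, buses) would need a properly EM-fitted mixture, and the function name is hypothetical:

```python
import numpy as np

def fit_lamp_pair_model(samples):
    """Fit one diagonal Gaussian to (distance, height) samples — a
    simplified one-component stand-in for the patent's mixture model."""
    s = np.asarray(samples, dtype=float)
    mean = s.mean(axis=0)
    var = s.var(axis=0) + 1e-6          # regularize to avoid zero variance
    return [1.0], [mean], [var]         # (weights, means, diagonal covs)
```

The returned triple has the same shape as the preset model used when validating candidate pairs.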
202. Establishing a back projection surface at a preset position;
It should be noted that a back projection surface must first be determined in a calibrated traffic scene; the designed surface is chosen according to the characteristics and spatial position of the car light pairs to be detected at night. Depending on the scene, the back projection surface can be set parallel to the road surface, perpendicular to it, or at an angle to it.
203. Acquiring an inverse projection image projected on the inverse projection surface;
It should be noted that each pixel of the back projection map copies the information of one grid cell of the back projection surface. Because each grid cell on the surface corresponds to a real size in three-dimensional space, each pixel of the two-dimensional back projection map likewise carries real-size, metric information.
204. Searching a local maximum value of the inverse projection diagram by utilizing a search window principle, and determining a car light pair area;
It should be noted that, because the light at night is too weak to segment the vehicle body reliably, the vehicle is roughly positioned using the distinctive features of its headlights. Since at night a headlight pixel takes the maximum value relative to the other pixels in its neighbourhood, the local maxima of the back projection map are found with a search window, which determines the candidate car light pair regions. It should be understood that a located region may contain not only the position of the car light pair but also reflection regions with relatively large brightness values or other interfering non-lamp areas.
205. Preliminarily marking the positions of the car light pairs in the car light pair area by using a flood filling algorithm;
It should be noted that, after the car light pair area is determined, the located area contains not only the position of the car light pair but also reflection regions with relatively large brightness values and other interfering non-lamp areas, so the positions of the car light pairs are first preliminarily marked with a flood fill algorithm.
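The flood-fill marking step can be sketched as a BFS labelling of the thresholded map — a generic 4-connected flood fill, not the patent's exact algorithm, with hypothetical names:

```python
import numpy as np
from collections import deque

def flood_label(binary):
    """Label 4-connected bright regions of a binary map via BFS flood
    fill; returns a label image (0 = background) and the region count."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    rows, cols = binary.shape
    for si in range(rows):
        for sj in range(cols):
            if binary[si, sj] and labels[si, sj] == 0:
                current += 1                      # start a new region
                q = deque([(si, sj)])
                labels[si, sj] = current
                while q:
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and binary[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = current
                            q.append((ni, nj))
    return labels, current
```

Each resulting label marks one connected bright blob, a candidate lamp (or reflection) whose geometry is then tested.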
206. Determining the car light pairs in the car light pair area through a preset car light pair Gaussian mixture model, and marking by using an image frame;
After the car light pair region is determined, the spatial geometry of a vehicle's lamp pair provides some obvious features, such as height, width, the horizontal and vertical spacing of the two lamps, and area similarity. The preset car light pair Gaussian mixture model is used to search the determined region for a car light pair target, which is then marked with an image frame.
207. Establishing a rectangular window which takes the determined coordinates of the car light pair and the calculated preset offset of two adjacent frames as a center and takes the length and the width of the image frame as the size;
It should be noted that, after the car light pair is determined, it is used to track the vehicle: a rectangular window is established whose centre is the determined car light pair coordinates shifted by the calculated preset offset between two adjacent frames, and whose size is the length and width of the image frame.
208. Searching the car lamp pair in the rectangular window;
209. calculating a first average area and a first circularity of the searched car lamp pair;
210. calculating a second average area and a second circularity of the pair of vehicle lights of a previous frame;
It should be noted that the average area of a car light pair can be represented by the number of pixels in the car light pair region, and its circularity is calculated from the number of pixels on the region boundary.
211. Judging whether a first difference value between the first average area and the second average area is within a first preset range or not, and whether a second difference value between the first circularity and the second circularity is within a second preset range or not, if so, judging that the searched car light pair is the same as the car light pair of the previous frame, determining that the car tracking is successful, otherwise, judging that the searched car light pair is not the same as the car light pair of the previous frame, and returning to the step 207 to reestablish the rectangular window;
It should be noted that only front and rear frame car light pairs satisfying both conditions simultaneously can be determined as a successful track. Otherwise, the vehicle may have left the scene, or frames may have been lost for external reasons; the vehicle can subsequently be re-detected with the method above.
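Steps 209-211 can be sketched as follows. The patent only states that circularity is computed from boundary pixel counts; the specific formula 4·π·area/perimeter² and the tolerance values are illustrative assumptions:

```python
import numpy as np

def area_and_circularity(mask):
    """Area = number of lamp pixels; boundary = lamp pixels with at least
    one 4-neighbour outside the blob; circularity = 4*pi*area/perimeter^2
    (1.0 for an ideal continuous disc)."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    pad = np.pad(mask, 1)
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    perim = int((mask & ~interior).sum())
    return area, 4 * np.pi * area / max(perim, 1) ** 2

def same_pair(feat_now, feat_prev, area_tol=0.2, circ_tol=0.2):
    """Declare the pairs identical when both the area difference and the
    circularity difference stay inside their preset ranges."""
    (a1, c1), (a2, c2) = feat_now, feat_prev
    return abs(a1 - a2) <= area_tol * max(a2, 1) and abs(c1 - c2) <= circ_tol
```

If `same_pair` is false, the window is re-established for the next frame, matching the return branch of step 211.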
Referring to fig. 3, fig. 3 is a schematic structural diagram of a night vehicle target tracking device according to an embodiment of the present invention, and as shown in fig. 3, the structure specifically includes:
an establishing unit 301, configured to establish a back projection plane at a preset position;
an obtaining unit 302, configured to obtain an inverse projection map projected on the inverse projection plane;
the region positioning unit 303 is configured to find a local maximum value of the inverse projection map by using a search window principle, and determine a vehicle lamp pair region;
the car light pair positioning unit 304 is used for determining the car light pair in the car light pair area through a preset car light pair Gaussian mixture model and marking it with an image frame;
a window establishing unit 305, configured to establish a rectangular window whose centre is the determined car light pair coordinates shifted by the calculated preset offset of the two adjacent frames, and whose size is the length and width of the image frame;
and the tracking unit 306 is configured to search the car light pair in the rectangular window, and determine that the vehicle tracking is successful after determining that the searched car light pair is the same as the car light pair in the previous frame.
Optionally, the method further includes:
a preliminary positioning unit 307 for preliminarily marking the positions of the pairs of vehicle lamps in the pair of vehicle lamps area using a flood filling algorithm.
Optionally, the method further includes:
and the modeling unit 308 is configured to perform statistical probability modeling on the acquired horizontal distance between the car light pairs and the height of the car light pairs relative to the road surface to obtain a preset car light pair gaussian mixture model.
Optionally, the preset offset is specifically an upper limit of the road speed limit divided by the video frame rate.
Optionally, the tracking unit 306 is specifically configured to:
calculating a first average area and a first circularity of the searched car lamp pair;
calculating a second average area and a second circularity of the pair of vehicle lights of a previous frame;
and judging whether a first difference value of the first average area and the second average area is within a first preset range or not, and whether a second difference value of the first circularity and the second circularity is within a second preset range or not, if so, judging that the searched car light pair is the same as the car light pair of the previous frame, determining that the car is successfully tracked, otherwise, judging that the searched car light pair is different from the car light pair of the previous frame, and returning to reestablish the rectangular window.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The foregoing description of the embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same elements or features may also vary in many respects. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Example embodiments are provided so that this disclosure will be thorough and will fully convey the scope to those skilled in the art. Numerous details are set forth, such as examples of specific parts, devices, and methods, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In certain example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and "comprising" are intended to be inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed and illustrated, unless explicitly indicated as an order of performance. It should also be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being "on", "engaged with", "connected to" or "coupled to" another element or layer, it can be directly on, engaged with, connected to or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element or layer is referred to as being "directly on", "directly engaged with", "directly connected to" or "directly coupled to" another element or layer, no intervening elements or layers are present. Other words used to describe the relationship of elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.). As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region or section from another. Unless clearly indicated by the context, use of terms such as "first" and "second" and other numerical terms herein does not imply a sequence or order. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as "inner", "outer", "below", "beneath", "lower", "above", "upper" and the like, may be used herein for ease of description to describe the relationship of one element or feature to one or more other elements or features as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the example term "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
Claims (10)
1. A night vehicle target tracking method, comprising:
establishing an inverse projection surface at a preset position;
acquiring an inverse projection map projected onto the inverse projection surface;
searching for local maxima of the inverse projection map using a search window, and determining a car light pair region;
determining the car light pair in the car light pair region through a preset car light pair Gaussian mixture model, and marking it with an image frame;
establishing a rectangular window whose center is the determined coordinates of the car light pair shifted by a preset offset calculated between two adjacent frames, and whose size is the length and width of the image frame;
and searching for the car light pair within the rectangular window, and determining that vehicle tracking is successful after judging that the searched car light pair is the same as the car light pair of the previous frame.
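The search-window step of claim 1 can be sketched in Python as follows. This is only an illustrative reading of the claim, not the patented implementation; the window size `win`, the brightness threshold `thresh`, and the non-overlapping window layout are all assumed values:

```python
def find_light_pair_regions(ipm, win=15, thresh=50):
    """Slide a non-overlapping search window over the inverse projection
    map (a 2-D list of brightness values) and keep each window's local
    maximum as a candidate car light pair region when it is bright enough."""
    h, w = len(ipm), len(ipm[0])
    regions = []
    for y0 in range(0, h - win + 1, win):
        for x0 in range(0, w - win + 1, win):
            # locate the local maximum inside the current window
            best = max(
                ((ipm[y][x], y, x)
                 for y in range(y0, y0 + win)
                 for x in range(x0, x0 + win)),
                key=lambda t: t[0],
            )
            if best[0] >= thresh:               # keep only bright maxima
                regions.append((best[1], best[2]))
    return regions
```

On a dark map with a single bright spot, the sketch returns the coordinates of that spot as the only candidate region.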
2. The night vehicle target tracking method of claim 1, further comprising, after determining the car light pair region:
preliminarily marking the positions of the car light pairs in the car light pair region using a flood fill algorithm.
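The preliminary marking step can be illustrated with a minimal 4-connected flood fill. The claim names only the algorithm, so the binary-mask input, the 4-connectivity, and the blob-labelling output are assumptions of this sketch:

```python
from collections import deque

def flood_fill_label(mask):
    """Label each 4-connected blob of bright pixels (candidate car
    lights) in a binary mask via breadth-first flood fill; returns the
    label grid and the number of blobs found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                count += 1                      # start a new blob
                labels[sy][sx] = count
                queue = deque([(sy, sx)])
                while queue:                    # fill outward from the seed
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Each labelled blob then yields one preliminary light position (e.g. its centroid).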
3. The night vehicle target tracking method of claim 2, further comprising, before establishing the inverse projection surface at the preset position:
performing statistical probability modeling on the acquired horizontal distance of car light pairs and the height of car light pairs relative to the road surface, to obtain the preset car light pair Gaussian mixture model.
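The statistical modeling step can be illustrated with a deliberately simplified sketch: a single Gaussian per feature (horizontal distance and height above the road) stands in for the claim's Gaussian mixture model, and all sample values below are hypothetical:

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, variance) to measured values, e.g. the
    horizontal distance between paired lights or their height above the
    road surface."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, var

def pair_likelihood(distance, height, dist_model, height_model):
    """Score a candidate light pair as the product of two independent
    Gaussian densities, one per geometric feature."""
    def pdf(x, model):
        mu, var = model
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return pdf(distance, dist_model) * pdf(height, height_model)
```

Candidates whose geometry matches the training statistics score high; implausible pairings score near zero. A full mixture model would sum several such weighted components per feature.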
4. The night vehicle target tracking method of claim 3, wherein the preset offset is the upper limit of the road speed limit divided by the video frame rate.
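The preset offset of claim 4 is a one-line computation. The sketch below assumes the speed limit is given in km/h and the result is wanted in metres of travel per frame; the claim does not specify units:

```python
def max_offset_per_frame(speed_limit_kmh, frame_rate_hz):
    """Upper bound on vehicle displacement between two adjacent frames:
    the road speed limit divided by the video frame rate."""
    metres_per_second = speed_limit_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return metres_per_second / frame_rate_hz               # m per frame
```

For example, at a 120 km/h limit and 25 fps the bound is about 1.33 m per frame, which sets how far the tracking window's center is shifted.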
5. The night vehicle target tracking method of claim 4, wherein determining that vehicle tracking is successful after judging that the searched car light pair is the same as the car light pair of the previous frame comprises:
calculating a first average area and a first circularity of the searched car light pair;
calculating a second average area and a second circularity of the car light pair of the previous frame;
and judging whether a first difference between the first average area and the second average area is within a first preset range and whether a second difference between the first circularity and the second circularity is within a second preset range; if so, judging that the searched car light pair is the same as the car light pair of the previous frame and determining that vehicle tracking is successful; otherwise, judging that the searched car light pair is different from the car light pair of the previous frame and returning to re-establish the rectangular window.
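The area-and-circularity comparison of claim 5 can be sketched as follows. The circularity definition (4πA/P², a common choice) and the tolerance values are assumptions, since the claim only speaks of "preset ranges":

```python
import math

def circularity(area, perimeter):
    """Circularity 4*pi*A / P**2: equals 1.0 for a perfect circle and
    decreases for elongated blobs."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def same_light_pair(area1, circ1, area2, circ2, area_tol=0.2, circ_tol=0.1):
    """Match two light pairs across frames: both the relative area
    difference and the absolute circularity difference must fall inside
    their preset ranges."""
    return (abs(area1 - area2) <= area_tol * max(area1, area2)
            and abs(circ1 - circ2) <= circ_tol)
```

A match confirms the track; a mismatch triggers re-establishing the rectangular window, as the claim describes.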
6. A night vehicle target tracking device, comprising:
an establishing unit, configured to establish an inverse projection surface at a preset position;
an acquisition unit, configured to acquire an inverse projection map projected onto the inverse projection surface;
a region positioning unit, configured to search for local maxima of the inverse projection map using a search window and determine a car light pair region;
a car light pair positioning unit, configured to determine the car light pair in the car light pair region through a preset car light pair Gaussian mixture model and mark it with an image frame;
a window establishing unit, configured to establish a rectangular window whose center is the determined coordinates of the car light pair shifted by a preset offset calculated between two adjacent frames, and whose size is the length and width of the image frame;
and a tracking unit, configured to search for the car light pair within the rectangular window and determine that vehicle tracking is successful after judging that the searched car light pair is the same as the car light pair of the previous frame.
7. The night vehicle target tracking device of claim 6, further comprising:
a preliminary positioning unit, configured to preliminarily mark the positions of the car light pairs in the car light pair region using a flood fill algorithm.
8. The night vehicle target tracking device of claim 7, further comprising:
a modeling unit, configured to perform statistical probability modeling on the acquired horizontal distance of car light pairs and the height of car light pairs relative to the road surface, to obtain the preset car light pair Gaussian mixture model.
9. The night vehicle target tracking device of claim 8, wherein the preset offset is the upper limit of the road speed limit divided by the video frame rate.
10. The night vehicle target tracking device of claim 9, wherein the tracking unit is specifically configured to:
calculate a first average area and a first circularity of the searched car light pair;
calculate a second average area and a second circularity of the car light pair of the previous frame;
and judge whether a first difference between the first average area and the second average area is within a first preset range and whether a second difference between the first circularity and the second circularity is within a second preset range; if so, judge that the searched car light pair is the same as the car light pair of the previous frame and determine that vehicle tracking is successful; otherwise, judge that the searched car light pair is different from the car light pair of the previous frame and return to re-establish the rectangular window.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911054028.2A CN110807791A (en) | 2019-10-31 | 2019-10-31 | Night vehicle target tracking method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110807791A true CN110807791A (en) | 2020-02-18 |
Family
ID=69489932
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009069924A (en) * | 2007-09-11 | 2009-04-02 | Hitachi Ltd | Traffic situation prediction device and traffic situation prediction method |
CN103150898A (en) * | 2013-01-25 | 2013-06-12 | 大唐移动通信设备有限公司 | Method and device for detection of vehicle at night and method and device for tracking of vehicle at night |
CN105303160A (en) * | 2015-09-21 | 2016-02-03 | 中电海康集团有限公司 | Method for detecting and tracking vehicles at night |
CN105718923A (en) * | 2016-03-07 | 2016-06-29 | 长安大学 | Method for vehicle detection and counting at night based on inverse projection drawings |
CN106295528A (en) * | 2016-08-01 | 2017-01-04 | 长安大学 | A kind of vehicle checking method based on multi-part spatial relation GMM modeling |
Non-Patent Citations (2)
Title |
---|
LIU Changyuan et al., "Research on video detection of vehicles at night based on headlights", Computer Engineering and Applications * |
CHEN Yan et al., "Night vehicle detection based on Gaussian mixture model and AdaBoost", Journal of Computer Applications * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
USRE48106E1 (en) | Detection of obstacles at night by analysis of shadows | |
US9292750B2 (en) | Method and apparatus for detecting traffic monitoring video | |
CN110210280B (en) | Beyond-visual-range sensing method, beyond-visual-range sensing system, terminal and storage medium | |
US9443154B2 (en) | Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications | |
US9489586B2 (en) | Traffic sign recognizing apparatus and operating method thereof | |
CN110443225B (en) | Virtual and real lane line identification method and device based on feature pixel statistics | |
Jazayeri et al. | Vehicle detection and tracking in car video based on motion model | |
CN104776849B (en) | Vehicle positioning device and method | |
EP3159828B1 (en) | Adaptive calibration using visible car details | |
CN108021856B (en) | Vehicle tail lamp identification method and device and vehicle | |
CN101470806B (en) | Vehicle lamp detection method and apparatus, interested region splitting method and apparatus | |
WO2019135246A1 (en) | A multi-spectral system for providing pre-collision alerts | |
CN108460968A (en) | A kind of method and device obtaining traffic information based on car networking | |
JP2002083297A (en) | Object recognition method and object recognition device | |
Lin et al. | Lane departure and front collision warning using a single camera | |
CN110619674B (en) | Three-dimensional augmented reality equipment and method for accident and alarm scene restoration | |
CN113029185B (en) | Road marking change detection method and system in crowdsourcing type high-precision map updating | |
CN108154146A (en) | A kind of car tracing method based on image identification | |
Wu et al. | A Real‐Time Embedded Blind Spot Safety Assistance System | |
Ponsa et al. | On-board image-based vehicle detection and tracking | |
CN106570487A (en) | Method and device for predicting collision between objects | |
CN107506753B (en) | Multi-vehicle tracking method for dynamic video monitoring | |
CN112885108A (en) | Vehicle change detection method and system on parking space based on deep learning algorithm | |
JP2003281700A (en) | Cutting-in vehicle detecting device and method | |
Ashraf et al. | HVD-net: a hybrid vehicle detection network for vision-based vehicle tracking and speed estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200218 |