CN109859240B - Video object tracking method and device and vehicle - Google Patents
- Publication number
- CN109859240B (application CN201711243752.0A)
- Authority
- CN
- China
- Prior art keywords
- tracking
- identification
- frame image
- video
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a video object tracking method, a video object tracking device and a vehicle. The method comprises the following steps: acquiring video information of a current scene in real time, and performing object identification on each frame of image in the video information to obtain an identification result; calculating the coincidence degree between the identified objects of two adjacent frames according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image; judging the coincidence degree, and updating a tracking value according to the judgment result; judging the tracking value; and when the tracking value is greater than a first tracking threshold value, judging that the tracking of the identified object is valid. By comparing the coincidence degree of the identified objects in two consecutive video frames, the tracking method of the invention can be applied to object identification and tracking in detection scenes with a large depth range.
Description
Technical Field
The invention relates to the technical field of object tracking, in particular to a video object tracking method, a video object tracking device and a vehicle with the video object tracking device.
Background
Object recognition is an important component of video image processing, and object tracking is built on top of it. The result of object tracking can in turn be fed back to object recognition to improve the confidence of the recognition.
In the related art, one object tracking method works as follows. For the previous frame of a video, the method checks whether the region of each moving target recorded in a moving-target tracking database overlaps with the region of each moving target in the current frame recorded in a moving-target reference database. When such an overlap exists, the area of the overlapping part is calculated, and the method judges whether this area is larger than a first preset threshold value. If it is, the tracking particle information of the moving target is updated, and the region of the moving target in the current frame is recalculated from the updated tracking particle information, thereby tracking the moving target.
However, the above method relies on whether the overlap area exceeds a fixed first preset threshold. This is feasible for motion detection in a surveillance scene, where the detection range is small and the threshold is easy to choose. It is not sufficient for a vehicle detection scene: the detection range spans from less than 1 m to nearly 100 m, so the area a target object (vehicle) occupies in the video image varies greatly. Although the same object overlaps across consecutive frames, the absolute overlap area varies widely, since nearby objects occupy a large image area and distant objects a small one. A judgment threshold chosen for the overlap area of a nearby object is therefore unsuitable for tracking a distant object, and a threshold chosen for a distant object is unsuitable for tracking a nearby one.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the problems in the related art. Therefore, a first object of the present invention is to provide a video object tracking method which, by comparing the coincidence degree of the identified objects in two consecutive video frames, can adapt to object identification and tracking in detection scenes with a large depth range.
A second object of the invention is to propose a non-transitory computer-readable storage medium.
A third object of the present invention is to provide a video object tracking apparatus.
A fourth object of the invention is to propose a vehicle.
In order to achieve the above object, a video object tracking method according to an embodiment of the first aspect of the present invention includes the following steps: acquiring video information of a current scene in real time, and performing object identification on each frame of image in the video information to obtain an identification result; calculating the coincidence degree between the identified objects of two adjacent frames according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image, and updating a tracking value according to the coincidence degree; judging the tracking value; and when the tracking value is greater than a first tracking threshold value, judging that the tracking of the identified object is valid.
According to the video object tracking method of the embodiment of the invention, video information of the current scene is acquired in real time, and object identification is performed on each frame of image in the video information to obtain an identification result. The coincidence degree between the identified objects of two adjacent frames is calculated according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image, the tracking value is updated according to the coincidence degree, and the tracking value is judged; when the tracking value is greater than the first tracking threshold value, the tracking of the identified object is judged to be valid. By comparing the coincidence degree of the identified objects in two consecutive video frames, the method can be applied to object identification and tracking in detection scenes with a large depth range.
In addition, the video object tracking method proposed according to the above embodiment of the present invention may further have the following additional technical features:
according to an embodiment of the present invention, after the tracking of the identified object is judged to be valid, the motion direction and motion trajectory of the identified object are further obtained from the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image; the disappearance of the identified object is prejudged from its motion direction and motion trajectory, and the identified object is judged to have disappeared when the prejudgment holds and the tracking value is smaller than a second tracking threshold value.
According to one embodiment of the invention, the recognition result is represented by a rectangular box.
According to one embodiment of the invention, the degree of coincidence is calculated according to the following formula:
coincidence degree = α × S3 / S1 + (1 − α) × S3 / S2,
where S1 is the area of the rectangular frame corresponding to the previous frame image, S2 is the area of the rectangular frame corresponding to the current frame image, S3 is the area of the overlapping part of the two rectangular frames, and α is a constant with 0 ≤ α ≤ 1.
According to one embodiment of the invention, updating the tracking value according to the coincidence degree comprises: judging the coincidence degree, and searching a preset tracking object list for the identified object. If the coincidence degree is greater than a coincidence degree threshold value and the identified object is in the tracking object list, the tracking value is incremented from its current value until it reaches a preset limit value. If the coincidence degree is less than or equal to the coincidence degree threshold value and the identified object is in the tracking object list, the tracking value is decremented from its current value; when it reaches zero, the entry corresponding to the identified object is deleted from the tracking object list. If the coincidence degree is greater than the coincidence degree threshold value and the identified object is not in the tracking object list, a new entry is added to the tracking object list and its tracking value is set to one.
According to an embodiment of the present invention, obtaining the motion direction and motion trajectory of the identified object from the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image comprises: connecting the center of the rectangular frame corresponding to the previous frame image with the center of the rectangular frame corresponding to the current frame image to obtain the motion trajectory of the identified object. When the included angle between this center line and the front-view direction of the video collector is larger than a first preset angle, and the two rectangular frames are distributed left and right, the motion direction of the identified object is lateral. When the included angle between the center line and the front-view direction of the video collector is smaller than a second preset angle, the motion direction of the identified object is longitudinal, wherein the second preset angle is smaller than the first preset angle.
According to an embodiment of the present invention, prejudging the disappearance of the identified object from its motion direction and motion trajectory comprises: when the motion direction of the identified object is lateral, the prejudgment holds if its motion trajectory has moved into a preset video edge area; when the motion direction of the identified object is longitudinal, the prejudgment holds if its motion trajectory has moved to a preset vanishing line and the rectangular frame corresponding to the current frame image has the preset minimum identification frame size; and when the motion direction of the identified object is longitudinal, the prejudgment also holds if the rectangular frame corresponding to the current frame image has the preset maximum identification frame size.
To achieve the above object, a second aspect of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the above video object tracking method.
The non-transitory computer-readable storage medium of the embodiment of the present invention, through the stored computer program, compares the coincidence degree of the identified objects in two consecutive video frames and can thus adapt to object identification and tracking in detection scenes with a large depth range.
In order to achieve the above object, a third aspect of the present invention provides a video object tracking apparatus, including a memory, a processor, and a video object tracking program stored in the memory and executable on the processor, wherein the video object tracking program, when executed by the processor, implements the steps of the above video object tracking method.
The video object tracking device of the embodiment of the invention, by comparing the coincidence degree of the identified objects in two consecutive video frames, can be applied to object identification and tracking in detection scenes with a large depth range.
In order to achieve the above object, a fourth aspect of the present invention provides a vehicle including the above video object tracking apparatus.
According to the vehicle of the embodiment of the invention, the video object tracking device compares the coincidence degree of the identified objects in two consecutive video frames, and can thus realize object identification and tracking in detection scenes with a large depth range.
Drawings
FIG. 1 is a flow diagram of a video object tracking method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the size of a rectangular box of a first frame during close range recognition in the video object tracking method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the size of a rectangular box of a second frame during close range recognition in the video object tracking method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the size of a rectangular frame of an nth frame during long-distance recognition in the video object tracking method according to an embodiment of the present invention;
fig. 5 is a schematic size diagram of a rectangular frame of the (n + 1) th frame during long-distance recognition in the video object tracking method according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A video object tracking method, a video object tracking apparatus, and a vehicle having the video object tracking apparatus according to embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video object tracking method according to an embodiment of the present invention.
As shown in fig. 1, a video object tracking method according to an embodiment of the present invention may include the following steps:
and S1, acquiring the video information of the current scene in real time, and performing object recognition on each frame of image in the video information to obtain a recognition result. Wherein the recognition result is represented by a rectangular frame.
Specifically, the process of performing object recognition on each frame of image in the video information may include: image preprocessing, feature extraction and object identification. The captured image may have noise due to weather, illumination, and surrounding environment, and the image of the target vehicle (the recognition object) needs to be preprocessed (including image grayscale, image enhancement, median filtering, histogram equalization, and the like) to eliminate most of noise interference. Then, feature extraction is performed to extract key points or key lines with obvious features on the vehicle so as to obtain a recognition result (for example, recognition of a vehicle type). The specific identification method in the prior art can be adopted, and is not described in detail here.
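Of the preprocessing steps named above, median filtering can be illustrated with a minimal sketch. This pure-Python version assumes an 8-bit grayscale image stored as a list of lists; the 3×3 window size and border clamping are illustrative choices, not fixed by the text.

```python
def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood.

    img is a 2D grayscale image as a list of lists; borders are handled
    by clamping coordinates into the image, an illustrative choice.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp row index
                    xx = min(max(x + dx, 0), w - 1)  # clamp column index
                    vals.append(img[yy][xx])
            vals.sort()
            out[y][x] = vals[len(vals) // 2]  # median of the window
    return out
```

A single salt-noise pixel in an otherwise uniform region is removed in one pass, which is why median filtering suppresses impulse noise from weather or lighting; a production system would typically use an optimized library routine instead.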
In practical applications, since the main contour shape of a rigid object does not change, a bounding rectangle is fitted to the key points of the extracted vehicle contour to facilitate identification, so as to obtain a rectangular frame for each identified object, as shown in fig. 2 to 5, where fig. 2 to 5 take 2 tracked objects as an example.
And S2, calculating the coincidence degree between the identified objects of two adjacent frames according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image, and updating a tracking value according to the coincidence degree.
S3, the tracking value is determined.
And S4, when the tracking value is greater than the first tracking threshold value, judging that the tracking of the identified object is valid. The first tracking threshold value can be calibrated according to actual conditions.
According to one embodiment of the present invention, the degree of coincidence can be calculated according to the following formula (1):
coincidence degree = α × S3 / S1 + (1 − α) × S3 / S2 (1)
where S1 is the area of the rectangular frame corresponding to the previous frame image, S2 is the area of the rectangular frame corresponding to the current frame image, S3 is the area of the overlapping part of the two rectangular frames, and α is a constant with 0 ≤ α ≤ 1.
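Formula (1) can be sketched directly. This snippet assumes axis-aligned rectangles given as (x1, y1, x2, y2) corner coordinates; the default α = 0.5 is only one example of a constant in [0, 1].

```python
def rect_area(r):
    # r = (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    return (r[2] - r[0]) * (r[3] - r[1])

def overlap_area(a, b):
    # width/height of the intersection; non-positive means no overlap
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def coincidence_degree(prev_rect, cur_rect, alpha=0.5):
    """coincidence degree = alpha*S3/S1 + (1 - alpha)*S3/S2, per formula (1)."""
    s1, s2 = rect_area(prev_rect), rect_area(cur_rect)
    s3 = overlap_area(prev_rect, cur_rect)
    return alpha * s3 / s1 + (1 - alpha) * s3 / s2
```

Because both terms are ratios, a nearby pair of large frames and a distant pair of small frames with the same relative overlap yield the same coincidence degree, which is the scale invariance the method relies on.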
Further, according to an embodiment of the present invention, updating the tracking value according to the coincidence degree includes: judging the coincidence degree, and searching the preset tracking object list for the identified object. If the coincidence degree is greater than the coincidence degree threshold value and the identified object is in the tracking object list, the tracking value is incremented until it reaches a preset limit value. If the coincidence degree is less than or equal to the coincidence degree threshold value and the identified object is in the tracking object list, the tracking value is decremented; when it reaches zero, the corresponding entry is deleted from the tracking object list. If the coincidence degree is greater than the coincidence degree threshold value and the identified object is not in the tracking object list, a new entry is added to the list with its tracking value set to one. The coincidence degree threshold value and the preset limit value can be calibrated according to actual conditions.
Specifically, when object tracking starts, the tracking object list, the previous-frame object list and the current-frame object list are initialized. Object identification is then performed on the current frame image, and the identification result is recorded in the current-frame object list. The coincidence degree between the identified objects of the two most recent adjacent frames is calculated from the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image. The coincidence degree is a weighted average of the ratios of the overlapping area of the two rectangles to each rectangle's own area, and can be calculated according to formula (1).
After the coincidence degree is calculated, it is judged: if the coincidence degree of the two rectangular frames is greater than the coincidence degree threshold value, the identification results contained in the two rectangular frames are regarded as the same object, and the object is trackable. For example, when the target vehicle (identified object) is close to the host vehicle, as shown in fig. 2 and 3, the rectangular frame corresponding to the first frame image is S101 and the rectangular frame corresponding to the second frame image is S201; their coincidence degree is very high and can serve as a tracking basis. Likewise, for the frames S102 (previous frame image) and S202 (current frame image), the coincidence degree is very high and can serve as a tracking basis. As another example, when the target vehicle (identified object) is far from the host vehicle, as shown in fig. 4 and 5, the rectangular frame corresponding to the nth frame image is S301 and the rectangular frame corresponding to the (n+1)th frame image is S401. Although the overlapping area is much smaller than that of S101 and S201 or of S102 and S202, the coincidence degree between the two adjacent frames (the nth and (n+1)th frames) is still large and can serve as a tracking basis.
The preset tracking object list is then searched for the identified object. If the identified object is in the tracking object list, its tracking value is incremented; if the tracking value would exceed the preset limit value, it is set to the preset limit value. If the identified object is not in the tracking object list, a new entry is added with its tracking value set to 1. If the coincidence degree of the two rectangular frames is less than or equal to the coincidence degree threshold value and the identified object is in the tracking object list, the tracking value is decremented, and the entry is deleted when the tracking value reaches 0. For example, after the target vehicle (identified object) is occluded, the occluded vehicle has no identification frame in the video image, so the two most recent frames yield no match for it; each time a new frame is acquired, the corresponding tracking value in the tracking list is decremented by 1, and the entry is deleted when the tracking value drops to 0.
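The list-update rules above can be condensed into a small state machine. The coincidence threshold of 0.4 and limit of 10 below are hypothetical calibration values (the text leaves both to be calibrated), and matching an identification result to a list entry is abstracted into a caller-supplied `obj_id`.

```python
def update_tracking(tracked, obj_id, coincidence, threshold=0.4, limit=10):
    """Apply one frame's update to the tracking-value table.

    tracked maps an object id to its current tracking value;
    threshold and limit are hypothetical calibration constants.
    """
    if coincidence > threshold:
        if obj_id in tracked:
            # matched again: accumulate, capped at the preset limit value
            tracked[obj_id] = min(tracked[obj_id] + 1, limit)
        else:
            # first sighting: new entry with tracking value one
            tracked[obj_id] = 1
    elif obj_id in tracked:
        # no match this frame: decrement, and drop the entry at zero
        tracked[obj_id] -= 1
        if tracked[obj_id] == 0:
            del tracked[obj_id]
    return tracked
```

With a first tracking threshold of, say, 3, an object must be matched over several consecutive frames before tracking is declared valid, filtering out spurious single-frame detections, while a briefly occluded object survives a few missed frames before its entry is deleted.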
After the tracking value is updated according to the coincidence degree, it is judged; when the tracking value is greater than the first tracking threshold value, the tracking of the identified object is valid. By comparing a relative coincidence degree, the method adapts to identified objects of different sizes and to detection scenes with a large depth range. It thereby avoids the dilemma of comparing absolute overlap areas, where a single judgment threshold must cover both near and far objects: a threshold suited to far objects is too small for tracking near objects, and one suited to near objects is too large for tracking far objects.
In order to improve the accuracy of tracking the identified object, according to an embodiment of the present invention, after the tracking of the identified object is judged to be valid, the motion direction and motion trajectory of the identified object are further obtained from the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image. The disappearance of the identified object is prejudged from its motion direction and motion trajectory, and the identified object is judged to have disappeared when the prejudgment holds and the tracking value is smaller than the second tracking threshold value. The second tracking threshold value can be calibrated according to actual conditions.
According to an embodiment of the present invention, obtaining the motion direction and motion trajectory of the identified object from the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image includes: connecting the center of the rectangular frame corresponding to the previous frame image with the center of the rectangular frame corresponding to the current frame image to obtain the motion trajectory of the identified object. When the included angle between this center line and the front-view direction of the video collector is larger than the first preset angle, and the two rectangular frames are distributed left and right, the motion direction of the identified object is lateral. When the included angle between the center line and the front-view direction of the video collector is smaller than the second preset angle, the motion direction of the identified object is longitudinal. The second preset angle is smaller than the first preset angle, and both preset angles can be calibrated according to actual conditions.
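The angle test above can be sketched as follows. This assumes the video collector's front-view direction projects onto the image's vertical axis, and the 60° and 30° preset angles are hypothetical calibration values.

```python
import math

def motion_direction(prev_center, cur_center,
                     first_angle=60.0, second_angle=30.0):
    """Classify motion as 'lateral', 'longitudinal', or None (indeterminate).

    Centers are (x, y) pixel coordinates; the front-view direction is
    assumed to be the image's vertical axis, and the preset angles are
    hypothetical calibration values.
    """
    dx = cur_center[0] - prev_center[0]
    dy = cur_center[1] - prev_center[1]
    # angle between the center line and the forward (vertical) direction
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))
    if angle > first_angle and dx != 0:
        return "lateral"       # frames distributed left and right
    if angle < second_angle:
        return "longitudinal"
    return None                # between the two preset angles
```

Keeping the second preset angle below the first leaves a dead band between them, so near-diagonal motion is classified as neither lateral nor longitudinal rather than flickering between the two.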
Further, according to an embodiment of the present invention, prejudging the disappearance of the identified object from its motion direction and motion trajectory includes: when the motion direction of the identified object is lateral, the prejudgment holds if its motion trajectory has moved into the preset video edge area; when the motion direction of the identified object is longitudinal, the prejudgment holds if its motion trajectory has moved to the preset vanishing line and the rectangular frame corresponding to the current frame image has the preset minimum identification frame size; and when the motion direction of the identified object is longitudinal, the prejudgment also holds if the rectangular frame corresponding to the current frame image has the preset maximum identification frame size.
Specifically, after the tracking of the identified object is judged to be valid, its motion direction and motion trajectory can be obtained, and its disappearance can be prejudged from them, thereby increasing the confidence of object tracking.
As shown in fig. 2 to 5, take the first and second frame images, and the nth and (n+1)th frame images, as examples. Connecting the center of the rectangular frame corresponding to the first frame image with the center of the rectangular frame corresponding to the second frame image gives the motion trajectory of the identified object. When the included angle between the center line of the two rectangular frames and the front-view direction of the video collector is relatively large (for example, larger than the first preset angle) and the two rectangular frames are distributed left and right (that is, the frame of the first frame image is on the left and that of the second on the right, or the reverse), the motion direction of the identified object is lateral.
When the included angle between the center line of the two rectangular frames and the front-view direction of the video collector is relatively small (for example, smaller than the second preset angle), the motion direction of the identified object is longitudinal; for example, as shown in fig. 2 and 3, the frame pairs S101/S201 and S102/S202, and, as shown in fig. 4 and 5, the frame pairs S301/S401 and S302/S402.
After the motion direction and motion trajectory of the tracked vehicle (identified object) are obtained, this information is recorded in the tracking object list (relative to the host vehicle) to improve tracking accuracy and provide a basis for prejudging the direction in which the tracked vehicle will disappear. When the motion direction of the identified object is lateral and its motion trajectory has moved into the preset video edge area, it is judged that the identified object may turn right, turn left, or make a U-turn. When the motion direction of the identified object is longitudinal, the prejudgment holds if its motion trajectory has moved to the preset vanishing line and the rectangular frame corresponding to the current frame image has the preset minimum identification frame size; for example, the target vehicle (identified object) speeds up and approaches the preset vanishing line. The prejudgment also holds if the rectangular frame corresponding to the current frame has the preset maximum identification frame size; for example, the host vehicle speeds up and overtakes the target vehicle (identified object).
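The three prejudgment branches described above can be sketched together. Every numeric threshold here (edge margin, vanishing-line row, minimum and maximum frame sizes) is a hypothetical calibration value, since the text leaves them all preset.

```python
def disappearance_prejudged(direction, center, box, frame_w,
                            edge_margin=20, vanish_y=60,
                            min_size=8, max_size=300):
    """Return True when the disappearance prejudgment holds.

    box is (x1, y1, x2, y2) for the current frame; all thresholds are
    hypothetical calibration values. Smaller y is taken as farther up
    the image, toward the vanishing line (an assumption of this sketch).
    """
    w, h = box[2] - box[0], box[3] - box[1]
    if direction == "lateral":
        # trajectory has reached the preset video edge area
        return center[0] < edge_margin or center[0] > frame_w - edge_margin
    if direction == "longitudinal":
        if center[1] < vanish_y and w <= min_size and h <= min_size:
            return True  # shrunk to the minimum frame near the vanishing line
        if w >= max_size and h >= max_size:
            return True  # grown to the maximum frame: host vehicle overtakes
    return False
```

In a full tracker these checks would be combined with the tracking value: when the prejudgment holds and the tracking value falls below the second tracking threshold, the object is declared to have disappeared and its entry removed.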
When the prejudgment holds, the tracking value is judged: if it is smaller than the second tracking threshold value, the identified object is judged to have disappeared, and the corresponding entry is deleted from the tracking object list. For an occluded identified object, disappearance is judged when its tracking value decreases to 0. Thus, by recording the motion trajectory and motion direction of the identified object (relative to the front-view direction of the video collector) and prejudging its motion from them, the confidence of object tracking is increased.
Therefore, the video object tracking method of the embodiment of the invention, by avoiding the need to select separate overlap thresholds for objects at different distances, can be applied to object identification and tracking in detection scenes with a large depth range, and by recording the motion trajectories and motion directions of different tracked objects it improves tracking accuracy.
In summary, according to the video object tracking method of the embodiment of the present invention, video information of the current scene is acquired in real time, object identification is performed on each frame of image in the video information to obtain an identification result, the coincidence degree between the identified objects of two adjacent frames is calculated from the identification results corresponding to the previous and current frame images, the tracking value is updated according to the coincidence degree, and the tracking value is judged; when the tracking value is greater than the first tracking threshold value, the tracking of the identified object is judged to be valid. By comparing the coincidence degree of the identified objects in two consecutive video frames, the method can be applied to object identification and tracking in detection scenes with a large depth range.
In correspondence with the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned video object tracking method.
By executing the stored computer program, the non-transitory computer-readable storage medium of the embodiment of the present invention compares the coincidence degree of the identified objects in two consecutive video frames, and can thus be applied to object recognition and tracking over a large detection range.
In addition, the invention also provides a video object tracking device, which comprises a memory, a processor and a video object tracking program which is stored on the memory and can run on the processor, wherein the control program of the video object tracking device realizes the steps of the video object tracking method when being executed by the processor.
The video object tracking device provided by the embodiment of the invention can be applied to object identification and tracking of a detection scene with a large range by comparing the coincidence degree of the identified objects of two continuous frames of videos.
Corresponding to the above embodiment, the present invention further provides a vehicle, which includes the above video object tracking apparatus.
According to the vehicle provided by the embodiment of the invention, the video object tracking device compares the coincidence degree of the identified objects in two consecutive video frames, so that object identification and tracking over a large detection range can be realized.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the invention.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature "on" or "under" a second feature may be in direct contact with the second feature, or in indirect contact with it through an intermediate medium. Also, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
Claims (8)
1. A video object tracking method, comprising the steps of:
acquiring video information of a current scene in real time, and performing object identification on each frame of image in the video information to obtain an identification result;
calculating the coincidence degree between the identification objects of two adjacent frames according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image, and updating a tracking value according to the coincidence degree, wherein the identification result is represented by an identification frame, and the coincidence degree is calculated according to the following formula:

coincidence degree = α × S3/S1 + (1 − α) × S3/S2,

wherein S1 is the area of the identification frame corresponding to the previous frame image, S2 is the area of the identification frame corresponding to the current frame image, S3 is the area of the overlapped portion of the identification frame corresponding to the previous frame image and the identification frame corresponding to the current frame image, and α is a constant with 0 ≤ α ≤ 1;
Judging the tracking value;
when the tracking value is larger than a first tracking threshold value, judging that the tracking of the identification object is effective;
wherein updating the tracking value according to the degree of coincidence comprises:
judging the coincidence degree, and searching whether the identification object exists in a preset tracking object list;
if the coincidence degree is greater than a coincidence degree threshold and the identification object is in the tracking object list, incrementing the tracking value from its current value until it reaches a preset limit value;
if the coincidence degree is less than or equal to the coincidence degree threshold and the identification object is in the tracking object list, decrementing the tracking value from its current value until it equals zero, and deleting the entry corresponding to the identification object in the tracking object list;
and if the coincidence degree is greater than the coincidence degree threshold and the identification object is not in the tracking object list, adding a new entry in the tracking object list and setting the tracking value of the new entry to one.
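The tracking-value update rules of claim 1 can be sketched as follows. This is a hypothetical illustration: the dictionary-based tracking object list, the coincidence threshold of 0.5, and the limit value of 5 are assumed stand-ins for the preset values, which the claim does not fix.

```python
def update_tracking(tracked, obj_id, coincidence, threshold=0.5, limit=5):
    """Update the tracking value for one identified object.

    `tracked` maps object id -> tracking value (the tracking object
    list). The coincidence threshold and limit value are illustrative.
    """
    if coincidence > threshold:
        if obj_id in tracked:
            # Increment, capped at the preset limit value
            tracked[obj_id] = min(tracked[obj_id] + 1, limit)
        else:
            # A newly tracked object starts with a tracking value of one
            tracked[obj_id] = 1
    elif obj_id in tracked:
        # Decrement; delete the entry once the value reaches zero
        tracked[obj_id] -= 1
        if tracked[obj_id] <= 0:
            del tracked[obj_id]
    return tracked
```

Under these rules an object must be matched across several consecutive frames before its tracking value exceeds the first tracking threshold, and a briefly occluded object is not dropped until its value has decayed back to zero.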
2. The video object tracking method according to claim 1, wherein, after tracking of the identification object becomes effective, the motion direction and motion trajectory of the identification object are further obtained according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image, the disappearance of the identification object is prejudged according to its motion direction and motion trajectory, and the identification object is judged to have disappeared when the prejudgment holds and the tracking value is smaller than a second tracking threshold.
3. The video object tracking method of claim 2, wherein the recognition result is represented by a rectangular box.
4. The video object tracking method according to claim 3, wherein obtaining the motion direction and the motion trajectory of the identified object according to the identification result corresponding to the previous frame image and the identification result corresponding to the current frame image comprises:
connecting the center of the rectangular frame corresponding to the previous frame image with the center of the rectangular frame corresponding to the current frame image to obtain the motion track of the identification object;
when an included angle between a central connecting line of the rectangular frame corresponding to the previous frame image and the rectangular frame corresponding to the current frame image and the front view direction of the video collector is larger than a first preset angle, and the rectangular frame corresponding to the previous frame image and the rectangular frame corresponding to the current frame image are distributed left and right, the motion direction of the identification object is transverse;
when an included angle between a central connecting line of the rectangular frame corresponding to the previous frame image and the rectangular frame corresponding to the current frame image and the front view direction of the video collector is smaller than a second preset angle, the motion direction of the identification object is longitudinal, wherein the second preset angle is smaller than the first preset angle.
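The direction classification of claim 4 can be sketched as follows. This is illustrative only: the front-view direction of the video collector is assumed to coincide with the image's vertical axis, and the preset angles of 60° and 30° are hypothetical values, not values given in the claim.

```python
import math

def motion_direction(prev_center, curr_center,
                     transverse_angle=60.0, longitudinal_angle=30.0):
    """Classify motion as 'transverse' or 'longitudinal' from the angle
    between the line joining the two rectangular-frame centers and the
    video collector's front-view axis (assumed here to be vertical).
    """
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    # Angle between the center connecting line and the front-view axis
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))
    if angle > transverse_angle and dx != 0:
        return "transverse"    # frames are distributed left and right
    if angle < longitudinal_angle:
        return "longitudinal"  # moving along the viewing direction
    return "indeterminate"
```

Because the second preset angle is smaller than the first, center lines falling between the two angles are classified as neither direction, matching the claim's two-threshold structure.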
5. The video object tracking method of claim 4, wherein predicting a direction of disappearance of the identified object based on the direction and trajectory of motion of the identified object comprises:
when the motion direction of the identification object is transverse, if the motion trajectory of the identification object has moved into a preset video edge area, the prejudgment is true;
when the motion direction of the identification object is longitudinal, if the motion trajectory of the identification object has moved to a preset vanishing line and the size of the rectangular frame corresponding to the current frame image is the preset minimum identification frame, the prejudgment is true;
and when the motion direction of the identification object is longitudinal, if the size of the rectangular frame corresponding to the current frame image is the preset maximum identification frame, the prejudgment is true.
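The disappearance prejudgment of claim 5 can be sketched as follows. All thresholds here (the edge margin, the vanishing-line position, and the minimum/maximum frame sizes) are hypothetical stand-ins for the preset values named in the claim, and the image coordinate convention (origin at top-left, y increasing downward) is an assumption.

```python
def predict_disappearance(direction, center, box, frame_w,
                          edge_margin=20, vanishing_y=50,
                          min_size=8, max_size=400):
    """Prejudge whether the identified object is about to disappear.

    `center` is the center of the current rectangular frame,
    `box` is its (x1, y1, x2, y2) rectangle, `frame_w` the image width.
    """
    w = box[2] - box[0]
    h = box[3] - box[1]
    if direction == "transverse":
        # Trajectory has reached the preset video edge area (left/right)
        return center[0] < edge_margin or center[0] > frame_w - edge_margin
    if direction == "longitudinal":
        # Reached the vanishing line at the minimum identification frame
        # (object receding into the distance)...
        if center[1] < vanishing_y and w <= min_size and h <= min_size:
            return True
        # ...or grew to the maximum identification frame (object so close
        # it is about to leave the field of view)
        if w >= max_size and h >= max_size:
            return True
    return False
```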
6. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the video object tracking method according to any one of claims 1 to 5.
7. A video object tracking apparatus comprising a memory, a processor, and a video object tracking program stored on the memory and executable on the processor, wherein the video object tracking program, when executed by the processor, implements the steps of the video object tracking method as claimed in any one of claims 1 to 5.
8. A vehicle comprising the video object tracking apparatus of claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711243752.0A CN109859240B (en) | 2017-11-30 | 2017-11-30 | Video object tracking method and device and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109859240A CN109859240A (en) | 2019-06-07 |
CN109859240B true CN109859240B (en) | 2021-06-18 |
Family
ID=66888813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711243752.0A Active CN109859240B (en) | 2017-11-30 | 2017-11-30 | Video object tracking method and device and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109859240B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348460B (en) * | 2019-07-04 | 2021-10-22 | 成都旷视金智科技有限公司 | Angle-based target detection training method, target detection method and device |
CN110969647B (en) * | 2019-12-04 | 2023-06-30 | 苏州智加科技有限公司 | Method for integrating identification tracking and car lamp detection of vehicle |
CN111046820A (en) * | 2019-12-17 | 2020-04-21 | 上海船舶研究设计院(中国船舶工业集团公司第六0四研究院) | Statistical method and device for vehicles in automobile roll-on-roll-off ship and intelligent terminal |
CN111079675A (en) * | 2019-12-23 | 2020-04-28 | 武汉唯理科技有限公司 | Driving behavior analysis method based on target detection and target tracking |
CN111259754A (en) * | 2020-01-10 | 2020-06-09 | 中国海洋大学 | End-to-end plankton database construction system and method |
CN112037257B (en) * | 2020-08-20 | 2023-09-29 | 浙江大华技术股份有限公司 | Target tracking method, terminal and computer readable storage medium thereof |
CN112017319B (en) * | 2020-08-21 | 2022-03-25 | 中建二局第一建筑工程有限公司 | Intelligent patrol security method, device and system and storage medium |
CN114582140B (en) * | 2022-01-17 | 2023-04-18 | 浙江银江智慧交通工程技术研究院有限公司 | Method, system, device and medium for identifying traffic flow of urban road intersection |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103259962A (en) * | 2013-04-17 | 2013-08-21 | 深圳市捷顺科技实业股份有限公司 | Target tracking method and related device |
CN104217417A (en) * | 2013-05-31 | 2014-12-17 | 张伟伟 | A video multiple-target tracking method and device |
CN105551063A (en) * | 2016-01-29 | 2016-05-04 | 中国农业大学 | Method and device for tracking moving object in video |
CN106682619A (en) * | 2016-12-28 | 2017-05-17 | 上海木爷机器人技术有限公司 | Object tracking method and device |
CN106846374A (en) * | 2016-12-21 | 2017-06-13 | 大连海事大学 | The track calculating method of vehicle under multi-cam scene |
CN107292297A (en) * | 2017-08-09 | 2017-10-24 | 电子科技大学 | A kind of video car flow quantity measuring method tracked based on deep learning and Duplication |
CN107316322A (en) * | 2017-06-27 | 2017-11-03 | 上海智臻智能网络科技股份有限公司 | Video tracing method and device and object identifying method and device |
CN107358621A (en) * | 2016-05-10 | 2017-11-17 | 腾讯科技(深圳)有限公司 | Method for tracing object and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5106271B2 (en) * | 2008-06-27 | 2012-12-26 | キヤノン株式会社 | Image processing apparatus, image processing method, and computer program |
JP5959951B2 (en) * | 2012-06-15 | 2016-08-02 | キヤノン株式会社 | Video processing apparatus, video processing method, and program |
US9760791B2 (en) * | 2015-09-01 | 2017-09-12 | Sony Corporation | Method and system for object tracking |
Non-Patent Citations (5)
Title |
---|
A comparison of tracking algorithm performance for objects in wide area imagery; Rohit C. Philip et al.; 2014 Southwest Symposium on Image Analysis and Interpretation; 2014-05-01; pp. 109-112 * |
Homography based multiple camera detection and tracking of people in a dense crowd; Ran Eshel et al.; 2008 IEEE Conference on Computer Vision and Pattern Recognition; 2008-08-05; pp. 1-8 * |
A moving-vehicle tracking method applied in traffic environments; Gan Ling et al.; Journal of Chongqing University of Posts and Telecommunications; June 2013; Vol. 25, No. 3; pp. 408-411 * |
Research on long-term tracking technology for non-specific targets based on online learning; Fu Miao; China Masters' Theses Full-text Database, Information Science and Technology; 2017-03-15; Vol. 2017, No. 3; Section 4.1 * |
Detection and early-warning technology for moving targets in intelligent traffic monitoring systems; Huang Debao; China Masters' Theses Full-text Database, Information Science and Technology; 2013-02-15; Vol. 2013, No. 2; I138-1779 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109859240B (en) | Video object tracking method and device and vehicle | |
US11500101B2 (en) | Curb detection by analysis of reflection images | |
US9818301B2 (en) | Lane correction system, lane correction apparatus and method of correcting lane | |
JP6670071B2 (en) | Vehicle image recognition system and corresponding method | |
KR101609303B1 (en) | Method to calibrate camera and apparatus therefor | |
JP6362442B2 (en) | Lane boundary line extraction device, lane boundary line extraction method, and program | |
CN109001757B (en) | Parking space intelligent detection method based on 2D laser radar | |
US9747507B2 (en) | Ground plane detection | |
JP2012512446A (en) | Method and apparatus for identifying obstacles in an image | |
US20120257056A1 (en) | Image processing apparatus, image processing method, and image processing program | |
KR102069843B1 (en) | Apparatus amd method for tracking vehicle | |
WO2014002692A1 (en) | Stereo camera | |
CN111047615A (en) | Image-based line detection method and device and electronic equipment | |
CN111028169B (en) | Image correction method, device, terminal equipment and storage medium | |
EP3584763A1 (en) | Vehicle-mounted environment recognition device | |
CN110537206B (en) | Railway track recognition device, program, and railway track recognition method | |
CN109492454B (en) | Object identification method and device | |
JP5189556B2 (en) | Lane detection device | |
JP2013069045A (en) | Image recognition device, image recognition method, and image recognition program | |
JP2019218022A (en) | Rail track detection device | |
CN107255470B (en) | Obstacle detection device | |
JP3226415B2 (en) | Automotive mobile object recognition device | |
JP4151631B2 (en) | Object detection device | |
JP7064400B2 (en) | Object detection device | |
WO2020036039A1 (en) | Stereo camera device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||