CN114299109A - Multi-target object track generation method, system, electronic equipment and storage medium - Google Patents

Multi-target object track generation method, system, electronic equipment and storage medium

Info

Publication number
CN114299109A
CN114299109A
Authority
CN
China
Prior art keywords
target object
tracked
video
video data
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111468098.XA
Other languages
Chinese (zh)
Inventor
詹瑾
岳振猛
赵慧民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN202111468098.XA priority Critical patent/CN114299109A/en
Publication of CN114299109A publication Critical patent/CN114299109A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of video tracking, in particular to a multi-target object track generation method, system, electronic equipment and storage medium. The method comprises the following steps: acquiring video data containing multiple target objects; performing framing processing on the acquired video data, selecting a blank video frame picture as the background, determining the target objects to be tracked in the framed video frame pictures, and generating a position frame for each target object to be tracked; labeling the corresponding position frames according to the tracking feature points of the target objects to be tracked, and calculating the central point of each position frame; and acquiring the central point coordinates corresponding to the multiple target objects in consecutive video frames, mapping the central point coordinates into the corresponding video data in time order, and generating a multi-target object movement track result in the video data. The method solves the problem that it is difficult to generate motion tracks when tracking multiple targets, generates continuous central point tracks in the video, and avoids tracking loss and track errors for the multiple target objects.

Description

Multi-target object track generation method, system, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of video tracking, in particular to a multi-target object track generation method, a multi-target object track generation system, electronic equipment and a storage medium.
Background
With the continuous advance of smart-city construction and the continuous upgrading of security monitoring, a nationwide large-scale video surveillance network has taken initial shape. One of the most direct consequences of large-scale surveillance video is the generation of huge amounts of video data, and this massive video data poses a great challenge to traditional intelligent video analysis techniques.
When analyzing and processing video collected by a surveillance camera, one of the most important tasks is to capture and track multiple targets in the dynamically captured footage. The existing approach tracks each target with a single-target tracking technique; the method is simple, but as the number of targets grows, each target must be processed separately, which increases the processing difficulty. In particular, when corresponding movement tracks need to be generated for multiple target objects, existing processing algorithms cannot meet practical application requirements.
Disclosure of Invention
In order to solve the problem of tracking and generating corresponding movement tracks for multiple target objects, the invention provides a method, a system, electronic equipment and a storage medium for generating the multiple target object tracks.
In order to achieve the above purpose, the embodiment of the present invention provides the following technical solutions:
in a first aspect, in an embodiment provided by the present invention, a multi-target object trajectory generation method is provided, including:
acquiring video data containing multiple target objects;
performing framing processing on the acquired video data, selecting a blank video frame picture as a background, determining a target object to be tracked in the framed video frame picture, and generating a position frame of the target object to be tracked;
labeling corresponding position frames according to the tracking characteristic points of the target object to be tracked, and calculating the central point of each position frame;
and acquiring center point coordinates corresponding to the multiple target objects in the continuous video frames, mapping the center point coordinates to corresponding video data according to the time sequence, and generating a multiple target object movement track result in the video data.
In some embodiments provided herein, determining a target object to be tracked includes:
selecting blank video frames as backgrounds, and comparing the blank video frames with the video frames one by one;
identifying a foreground area in a current video frame by using a background subtraction method to obtain a foreground image block in the current video frame;
and dividing the foreground image block to determine a plurality of targets to be tracked.
In some embodiments provided herein, determining a plurality of targets to be tracked includes:
obtaining a foreground image block in a current video frame;
identifying local feature points in the foreground image block by adopting an image identification algorithm;
and acquiring a difference result of global features in the foreground image block according to the local feature points, and dividing the foreground image block according to the difference result to obtain a plurality of targets to be tracked.
In some embodiments provided by the present invention, the image recognition algorithms for the local feature points include a spot detection algorithm and a corner detection algorithm. The spot detection algorithm includes Laplacian-of-Gaussian (LoG) detection and detection based on the pixel-point Hessian matrix and its determinant value (DoH); the corner detection algorithm includes Harris corner feature extraction and FAST corner feature extraction.
In some embodiments provided by the present invention, generating the position frame of the target object to be tracked means generating, from the two types of local feature points (spots and corners), the outline of the foreground image block of the target object to be tracked that differs in color and gray level from its surroundings; the generated outline forms the position frame.
In some embodiments provided herein, further comprising:
dividing the obtained foreground image blocks of a plurality of targets to be tracked respectively, dividing the foreground image blocks into a plurality of small blocks, and calculating the histogram numerical value and the LBP characteristic numerical value of each small block to form a high-dimensional vector representing the small blocks;
and capturing the target to be tracked according to the high-dimensional vectors of the small blocks.
In some embodiments provided by the present invention, the position frames for generating the contour are labeled according to different high-dimensional vectors corresponding to the image blocks where the multiple target objects are located, and the labels of the position frames corresponding to the same target object in consecutive video frames are the same.
In some embodiments provided herein, a method for calculating a center point of each location box includes:
generating the outline of the target object to be tracked according to the spots and the angular points;
and selecting scattered points from the generated contour, drawing diagonal lines of a scattered point graph, and determining the central point of the position frame corresponding to the target object to be tracked.
In a second aspect, in another embodiment provided by the present invention, a multi-target object trajectory generation system is provided, which generates a multi-target object movement trajectory in video data by using the multi-target object trajectory generation method; the multi-target object track generation system comprises a target object to be tracked identification module, a framing module, a central point calculation module and a track generation module.
The target object to be tracked identification module is used for framing the acquired video data, selecting a blank video frame picture as a background, and determining the target object to be tracked in the framed video frame picture by adopting a background subtraction method;
the framing module is used for generating a contour of the target object to be tracked according to the local feature points, and the generated contour forms a position frame to obtain the position frame of the target object to be tracked;
the central point calculation module is used for labeling the tracking characteristic points of the target object to be tracked corresponding to the position frames, selecting scattered points on the outline to form a geometric figure, and calculating the central points of the geometric figures corresponding to the position frames; and
and the track generation module is used for mapping the center point coordinates corresponding to the multi-target objects in the obtained continuous video frames to the corresponding video data according to the time sequence to generate a multi-target object moving track result in the video data.
In a third aspect, in yet another embodiment provided by the present invention, an electronic device is provided, which includes a memory storing a computer program and a processor, wherein the processor implements the steps of the multi-target object trajectory generation method when the computer program is loaded and executed.
In a fourth aspect, in a further embodiment provided by the present invention, there is provided a storage medium storing a computer program which is loaded into and executed by a processor to implement the steps of the multi-target object trajectory generation method.
The technical scheme provided by the invention has the following beneficial effects:
the multi-target object track generation method, the system, the electronic equipment and the storage medium provided by the invention have the advantages that the multi-target object of each video frame is determined and the position frame is configured according to the specified blank video frame as the reference frame aiming at providing video data, the central point and the coordinates of the multi-target object are determined and mapped to the video data to form the continuous frame central point mark, the multi-target object moving track result is formed in the video data, the problem that the multi-target object is difficult to track and generate a motion track can be effectively solved, the continuous central point track can be generated in the video after the multi-target object is identified and tracked, the situation that the multi-target object is lost or the track is wrong is avoided, and the accuracy of the multi-target object track generation is improved.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention. In the drawings:
fig. 1 is a flowchart of a multi-target object trajectory generation method according to an embodiment of the present invention.
Fig. 2 is a flowchart of determining a target object to be tracked in a multi-target object trajectory generation method according to an embodiment of the present invention.
Fig. 3 is a flowchart of determining a plurality of targets to be tracked in a multi-target object trajectory generation method according to an embodiment of the present invention.
Fig. 4 is a flowchart of calculating a center point of a position frame in the multi-target object trajectory generation method according to the embodiment of the present invention.
Fig. 5 is a schematic diagram of determining a geometric center point of a target object in a multi-target object trajectory generation method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of determining a geometric center point of another target object in the multi-target object trajectory generation method according to the embodiment of the present invention.
FIG. 7 is a system block diagram of a multi-target object trajectory generation system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the exemplary embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the exemplary embodiments of the present invention, and it is apparent that the described exemplary embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Because the prior art for capturing and tracking targets in video usually tracks a single target, the method is simple, but as targets are added each must be processed separately, which increases the processing difficulty; consequently, corresponding movement tracks cannot be generated for multiple target objects in the video.
In view of the above problems, the present invention provides a method, a system, an electronic device and a storage medium for generating a multi-target object trajectory, which perform video tracking in a plurality of identified target images and output a multi-target object movement trajectory result.
Specifically, the embodiments of the present application will be further explained below with reference to the drawings.
As shown in fig. 1, an embodiment of the present invention provides a multi-target object trajectory generation method, including the following steps:
s1, video data containing the multi-target object is obtained, framing processing is carried out on the obtained video data, a blank video frame picture is selected as a background, the target object to be tracked in the framed video frame picture is determined, and a position frame of the target object to be tracked is generated.
And S2, labeling the corresponding position frames according to the tracking characteristic points of the target object to be tracked, and calculating the central points of the position frames.
And S3, acquiring center point coordinates corresponding to the multiple target objects in the continuous video frames, mapping the center point coordinates to corresponding video data according to the time sequence, and generating a multiple target object movement track result in the video data.
In the embodiment, continuous frames of video data are acquired in a frame-dividing manner, and a video frame which does not contain any target object in a shot picture is taken as a background and is compared with other video frames to determine a target object to be tracked in other video frames; then, the central point of each target object to be tracked is determined, the central point coordinates of each target object to be tracked in the continuous video frames are mapped into the video data, and the respective movement track results of the multiple target objects are formed in the video data.
During processing, after a blank video frame picture is selected, in the process of video data acquisition, comparison is carried out on an acquired current video frame, each target object to be tracked in the current video frame is determined, the position and the coordinates of a central point are calculated, and a moving track result of a continuous frame is formed in video data together with the central point of a continuous frame picture before the current video frame.
In step S1 of the present invention, referring to fig. 2, the method for determining the target object to be tracked includes:
s101, selecting blank video frames as backgrounds and comparing the blank video frames with the video frames one by one;
s102, identifying a foreground region in the current video frame by using a background subtraction method to obtain a foreground image block in the current video frame;
s103, dividing the foreground image blocks to determine a plurality of targets to be tracked.
In this embodiment, when the foreground region is identified by using a background subtraction method, a blank video frame is selected as a reference frame, and the reference frame is compared with other video frames one by one to distinguish the foreground and the background of the video frame image. In the embodiment of the invention, a background subtraction method is used for carrying out foreground region detection on the image and carrying out morphological processing to obtain a foreground image block.
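The background-subtraction step described above can be sketched in pure Python (a minimal illustration, not the patent's implementation: frames are modeled as 2-D lists of gray values, and the threshold and 4-connectivity choices are assumptions):

```python
def background_subtract(blank, frame, thresh=25):
    """Compare a frame against the blank reference frame pixel by pixel
    and return a binary foreground mask (1 = foreground)."""
    h, w = len(blank), len(blank[0])
    return [[1 if abs(frame[y][x] - blank[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]

def foreground_blocks(mask):
    """Group foreground pixels into 4-connected blocks (flood fill);
    each block is a candidate foreground image block of a target."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blocks = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, block = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    block.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blocks.append(block)
    return blocks
```

Two well-separated bright regions in an otherwise blank frame would come out as two distinct foreground blocks, i.e. two candidate targets to be tracked.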
Preferably, as an implementation manner, referring to fig. 3, the determining a plurality of targets to be tracked in step S103 includes the following steps:
s1031, obtaining a foreground image block in the current video frame;
s1032, identifying local feature points in the foreground image block by adopting an image identification algorithm;
and S1033, obtaining a difference result of the global features in the foreground image block according to the local feature points, and dividing the foreground image block according to the difference result to obtain a plurality of targets to be tracked.
In this embodiment, local feature points in the foreground image blocks are identified, and the foreground image blocks are divided according to the differences among the local feature points to obtain multiple targets to be tracked. Specifically, local feature point identification is performed using spots and corners. The method has good stability and is not easily disturbed by the external environment when identifying the feature points of the current frame.
In this embodiment of the present invention, the local features may include color features, texture features, shape features, local feature points, and the like; two types of local feature points are used, namely spots and corners. A spot generally refers to a region that differs in color and gray level from its surroundings, such as a pedestrian or a vehicle on a road. Because it is a region, it is more robust to noise and more stable than a corner. A corner, by contrast, is the corner of an object in the image or an intersection between lines.
When performing local feature point identification of spots and corners, the image identification algorithms comprise a spot detection algorithm and a corner detection algorithm, wherein the spot detection algorithm comprises Laplacian-of-Gaussian (LoG) detection and pixel-point Hessian matrix and determinant value (DoH) detection; the corner detection algorithm comprises Harris corner feature extraction and FAST corner feature extraction.
Preferably, as an implementation, the LoG and DoH algorithms are used for spot detection, which mainly comprises a method using the Laplacian of Gaussian (LoG) operator and a method using the Hessian matrix (second-order differentials) of the pixel points and its determinant value (DoH, determinant of Hessian). The LoG method detects image spots with the Laplacian-of-Gaussian operator. Convolving an image with a two-dimensional function actually measures the similarity between the image and that function; likewise, convolving the image with the Laplacian-of-Gaussian function measures the similarity between the image and that function. The Laplacian response of the image is maximized when the spot size in the image closely matches the shape of the Laplacian-of-Gaussian function. Since the Laplacian-of-Gaussian kernel closely resembles a spot, the spot structures in the image can be found by convolution. The DoH method uses the second-order-differential Hessian matrix of each image point and its determinant value, which likewise reflects local structural information of the image. Compared with LoG, DoH better suppresses elongated spot structures in the image.
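As an illustration of the LoG operator discussed above, the kernel can be sampled directly from its closed form, LoG(x, y) = -(1/(pi * sigma^4)) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)); the grid size and sigma below are illustrative choices, not values from the patent:

```python
import math

def log_kernel(size, sigma):
    """Sample the Laplacian-of-Gaussian operator on a size x size grid
    centred at the origin. Convolving an image with this kernel gives
    the strongest response where the spot scale matches sigma."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            row.append(-(1.0 / (math.pi * sigma ** 4))
                       * (1 - r2 / (2 * sigma ** 2))
                       * math.exp(-r2 / (2 * sigma ** 2)))
        kernel.append(row)
    return kernel
```

The sampled kernel has the characteristic "Mexican hat" shape: a negative centre surrounded by a ring of opposite sign, which is what makes its convolution response peak on spot-like regions.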
Preferably, as an implementation, generating the position frame of the target object to be tracked means generating, from the two types of local feature points (spots and corners), the outline of the foreground image block of the target object to be tracked that differs in color and gray level from its surroundings; the generated outline forms the position frame.
As an implementable embodiment of the present invention, the multi-target object trajectory generation method further includes:
dividing the obtained foreground image blocks of a plurality of targets to be tracked respectively, dividing the foreground image blocks into a plurality of small blocks, and calculating the histogram numerical value and the LBP characteristic numerical value of each small block to form a high-dimensional vector representing the small blocks;
and capturing the target to be tracked according to the high-dimensional vectors of the small blocks.
In this embodiment, after the respective foreground image blocks of the multiple targets to be tracked are separated, each foreground image block is continuously divided into multiple small blocks, and the histogram value and the LBP feature value of each small block are calculated to form a high-dimensional vector representing the small blocks, so as to improve the tracking accuracy and the identification efficiency for identifying the multiple targets to be tracked.
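The block descriptor described above (a gray-level histogram and an LBP histogram concatenated into one high-dimensional vector per small block) can be sketched as follows; the bin counts and the basic 8-neighbour LBP variant are illustrative assumptions:

```python
def lbp_value(patch, y, x):
    """8-neighbour Local Binary Pattern code of pixel (y, x):
    each neighbour >= the centre contributes one bit."""
    c = patch[y][x]
    nbrs = [patch[y-1][x-1], patch[y-1][x], patch[y-1][x+1],
            patch[y][x+1], patch[y+1][x+1], patch[y+1][x],
            patch[y+1][x-1], patch[y][x-1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

def block_vector(patch, bins=8):
    """Concatenate a gray-level histogram and an LBP-code histogram
    into one high-dimensional descriptor for a small block."""
    h, w = len(patch), len(patch[0])
    grey_hist = [0] * bins
    for row in patch:
        for v in row:
            grey_hist[min(v * bins // 256, bins - 1)] += 1
    lbp_hist = [0] * bins
    for y in range(1, h - 1):           # LBP is defined on interior pixels
        for x in range(1, w - 1):
            lbp_hist[lbp_value(patch, y, x) * bins // 256] += 1
    return grey_hist + lbp_hist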
Preferably, as an implementation manner, in step S2, referring to fig. 4, the method for calculating the center point of each position frame includes the following steps:
s201, generating a contour of a target object to be tracked according to the spots and the corner points;
s201, selecting scattered points from the generated contour, drawing diagonal lines of scattered point patterns, and determining the central point of a position frame corresponding to the target object to be tracked.
In this embodiment, a set number of scatter points are selected from the generated contour with equal step lengths between adjacent scatter points, forming a figure along the contour, and the geometric center point and coordinates of the scatter-point figure are determined. The contour length of each target object is obtained and the scatter points are marked at equal step lengths; the scatter points then serve as the vertices of a geometric figure. Optionally, as an implementation, as shown in fig. 5 and fig. 6, the set number of scatter points is 4: the four selected scatter points form a quadrilateral as its vertices, the diagonally opposite scatter points are connected, the intersection of the diagonals is taken as the geometric center point of the target object, and the coordinates corresponding to the geometric center point are recorded.
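The four-scatter-point construction can be sketched as follows: pick four contour points at equal steps, treat them as the vertices of a quadrilateral, and intersect its diagonals (a minimal sketch; function names are illustrative, and degenerate quadrilaterals with parallel diagonals are not handled):

```python
def pick_scatter_points(contour, n=4):
    """Take n points at an equal step length along the contour,
    given as an ordered list of (x, y) points."""
    step = len(contour) / n
    return [contour[int(i * step)] for i in range(n)]

def diagonal_center(pts):
    """Centre of a quadrilateral given its 4 vertices in contour order:
    the intersection of the two diagonals p0-p2 and p1-p3."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = pts
    dx1, dy1 = x3 - x1, y3 - y1          # direction of diagonal p0 -> p2
    dx2, dy2 = x4 - x2, y4 - y2          # direction of diagonal p1 -> p3
    denom = dx1 * dy2 - dy1 * dx2
    # Solve p0 + t*(p2 - p0) == p1 + s*(p3 - p1) for t (Cramer's rule).
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / denom
    return (x1 + t * dx1, y1 + t * dy1)
```

For an axis-aligned square traversed in contour order, the diagonal intersection lands exactly at the square's centre, matching the construction in fig. 5 and fig. 6.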
In step S3, the geometric center point obtained from each video frame is mapped into the video data according to its coordinates, and the center points in the consecutive frames form a moving trajectory result corresponding to each target object in the video. Multiple targets may be tracked, distinguished by the label of the location box.
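The mapping step can be sketched as a small grouping routine: per-frame center points carrying the position-frame label are sorted by time and collected into one track per label (a minimal sketch; the detection-tuple format is an assumption, not from the patent):

```python
def build_trajectories(detections):
    """Map per-frame centre points into per-target trajectories.
    `detections` is a list of (frame_index, label, (x, y)) tuples;
    labels are the position-frame labels kept constant for the same
    target across consecutive frames."""
    tracks = {}
    for _, label, pt in sorted(detections):   # sort puts frames in time order
        tracks.setdefault(label, []).append(pt)
    return tracks
```

Because each target keeps the same label across consecutive frames, the resulting per-label point lists are exactly the continuous centre-point tracks overlaid on the video.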
The method aims at providing video data, determines the multi-target object of each video frame and configures a position frame according to a specified blank video frame as a reference frame, determines the central point and the coordinates of the multi-target object, maps the central point and the coordinates to the video data to form a continuous frame central point mark, and forms a multi-target object movement track result in the video data, thereby effectively solving the problem that the multi-target object is difficult to track and generate a movement track, generating a continuous central point track in the video after the multi-target object is identified and tracked, avoiding the situation of multi-target object tracking loss or track error, and improving the accuracy of the multi-target object track generation.
In an embodiment of the present invention, referring to fig. 7, the present invention further discloses a multi-target object track generation system, which generates multi-target object movement tracks in video data by using the multi-target object track generation method described above; the multi-target object track generation system comprises a target object to be tracked identification module 100, a framing module 200, a central point calculation module 300 and a track generation module 400.
The target object to be tracked identification module 100 is configured to perform framing processing on the acquired video data, select a blank video frame image as a background, and determine a target object to be tracked in the framed video frame image by using a background subtraction method.
After selecting the blank video frame picture, the method can compare the current video frame in the video data acquisition process, determine each target object to be tracked in the current video frame, calculate the position and the coordinates of a central point, and form a moving track result of a continuous frame in the video data together with the central point of the continuous frame picture before the current video frame.
In this embodiment, when the target object to be tracked is determined by the target object to be tracked identification module 100, blank video frames are selected as a background and are compared with the video frames one by one; identifying a foreground area in a current video frame by using a background subtraction method to obtain a foreground image block in the current video frame; and dividing the foreground image block to determine a plurality of targets to be tracked.
In this embodiment, when the target object to be tracked identification module 100 determines a plurality of targets to be tracked, a foreground image block in a current video frame is obtained; identifying local feature points in the foreground image block by adopting an image identification algorithm; and acquiring a difference result of global features in the foreground image block according to the local feature points, and dividing the foreground image block according to the difference result to obtain a plurality of targets to be tracked.
And identifying local feature points in the foreground image blocks, and dividing the foreground image blocks according to the difference of the local feature points to obtain a plurality of targets to be tracked. The principle is specifically that local feature point identification is carried out through spots and corner points. The method has good stability, and is not easily interfered by external environment when the characteristic points of the current frame are identified.
The framing module 200 is configured to generate a contour of the target object to be tracked according to the local feature points, where the generated contour forms a position frame, and a position frame of the target object to be tracked is obtained.
Spots and corners are used as the local feature points, and the local feature points are identified with image identification algorithms comprising a spot detection algorithm and a corner detection algorithm, wherein the spot detection algorithm comprises Laplacian-of-Gaussian (LoG) detection and pixel-point Hessian matrix and determinant value (DoH) detection; the corner detection algorithm comprises Harris corner feature extraction and FAST corner feature extraction.
The LoG and DoH algorithms are adopted for spot detection, which mainly comprises a method using the Laplacian-of-Gaussian operator (LoG) and a method using the Hessian matrix (second-order differentials) of the pixel points and its determinant value (DoH). The LoG method detects image spots with the Laplacian of Gaussian (LoG) operator.
To generate the position frame of the target object to be tracked, the framing module 200 generates, from the two types of local feature points (spots and corners), the outline of the foreground image block of the target object to be tracked that differs in color and gray level from its surroundings; the generated outline forms the position frame.
The central point calculating module 300 is configured to label the tracking feature points of the target object to be tracked with corresponding position frames, select scattered points on the contour to form a geometric figure, and calculate a central point of the geometric figure corresponding to each position frame.
In this embodiment, when the central point calculating module 300 calculates the central point of each position frame, the contour of the target object to be tracked is generated from the blobs and corners; scatter points are then selected on the generated contour, diagonals of the scatter-point figure are drawn, and the central point of the position frame corresponding to the target object to be tracked is determined.
Optionally, a set number of scatter points are selected on the generated contour with equal step lengths between adjacent scatter points, forming a scatter-point figure along the contour, and the geometric central point of the figure and its coordinates are determined.
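The scatter-point selection and centre computation above can be sketched in a few lines of pure Python. Representing the contour as an ordered list of (x, y) points and taking the centre as the mean of the sampled coordinates is an assumption for illustration (for a convex figure sampled symmetrically, the mean coincides with the intersection of the diagonals mentioned above):

```python
def sample_scatter_points(contour, count):
    """Pick `count` scatter points from an ordered contour with equal index steps."""
    step = max(1, len(contour) // count)
    return contour[::step][:count]

def center_point(points):
    """Geometric centre of the scatter-point figure: mean of the coordinates."""
    n = float(len(points))
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)
```

For example, sampling four equally spaced points from the contour of a square yields its four corners, whose mean is the square's centre.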
The track generating module 400 is configured to map the coordinates of the central point corresponding to the multiple target objects in the acquired continuous video frames to corresponding video data according to a time sequence, and generate a result of the moving track of the multiple target objects in the video data.
In this embodiment, the geometric central point obtained from each video frame is mapped into the video data according to its coordinates, and the central points in consecutive frames form the moving trajectory result of each target object in the video. Multiple targets can be tracked simultaneously, distinguished by the labels of their position frames.
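Assembling per-frame centre points into per-target trajectories in time order can be sketched as follows. The input representation (a list of timestamped dictionaries keyed by position-frame label) is an assumption made for illustration; the patent only specifies that centre points are mapped into the video data in time sequence:

```python
def build_trajectories(frames):
    """Map per-frame centre points into per-target trajectories, in time order.

    `frames` is a list of (timestamp, {position_frame_label: (cx, cy)}) pairs;
    the result maps each label to its time-ordered list of (timestamp, centre).
    """
    trajectories = {}
    for timestamp, centers in sorted(frames, key=lambda f: f[0]):
        for label, center in centers.items():
            trajectories.setdefault(label, []).append((timestamp, center))
    return trajectories
```

Sorting by timestamp before appending guarantees that each target's trajectory is continuous in time even if frames arrive out of order, which matches the time-sequence mapping described above.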
It should be noted that the multi-target object trajectory generation system adopts the steps of the multi-target object trajectory generation method described above when executing; therefore, the operation process of the multi-target object trajectory generation system is not described again in this embodiment.
In one embodiment, an electronic device is further provided, comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor; when executed by the at least one processor, the instructions cause the at least one processor to perform the multi-target object trajectory generation method, implementing the steps of the method embodiments:
acquiring video data containing multiple target objects;
performing framing processing on the acquired video data, selecting a blank video frame picture as a background, determining a target object to be tracked in the framed video frame picture, and generating a position frame of the target object to be tracked;
labeling corresponding position frames according to the tracking characteristic points of the target object to be tracked, and calculating the central point of each position frame;
and acquiring center point coordinates corresponding to the multiple target objects in the continuous video frames, mapping the center point coordinates to corresponding video data according to the time sequence, and generating a multiple target object movement track result in the video data.
In an embodiment of the present invention, an electronic device is further provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the foregoing method embodiments when executing the computer program:
acquiring video data containing multiple target objects;
performing framing processing on the acquired video data, selecting a blank video frame picture as a background, determining a target object to be tracked in the framed video frame picture, and generating a position frame of the target object to be tracked;
labeling corresponding position frames according to the tracking characteristic points of the target object to be tracked, and calculating the central point of each position frame;
and acquiring center point coordinates corresponding to the multiple target objects in the continuous video frames, mapping the center point coordinates to corresponding video data according to the time sequence, and generating a multiple target object movement track result in the video data.
In an embodiment of the present invention, a storage medium is also provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory.
In summary, the multi-target object trajectory generation method, system, electronic device and storage medium provided by the present invention determine, for given video data, the multiple target objects of each video frame and configure position frames using a specified blank video frame as the reference; determine the central point and coordinates of each target object; and map them into the video data to form central-point marks across consecutive frames, yielding the moving trajectory result of the multiple target objects in the video data. This effectively solves the difficulty of generating moving trajectories in multi-target tracking, generates a continuous central-point trajectory in the video after the multiple target objects are identified and tracked, avoids tracking loss or trajectory errors, and improves the accuracy of multi-target object trajectory generation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A multi-target object trajectory generation method includes:
acquiring video data containing multiple target objects;
performing framing processing on the acquired video data, selecting a blank video frame picture as a background, determining a target object to be tracked in the framed video frame picture, and generating a position frame of the target object to be tracked;
labeling corresponding position frames according to the tracking characteristic points of the target object to be tracked, and calculating the central point of each position frame;
and acquiring center point coordinates corresponding to the multiple target objects in the continuous video frames, mapping the center point coordinates to corresponding video data according to the time sequence, and generating a multiple target object movement track result in the video data.
2. The multi-target object trajectory generation method of claim 1, wherein: determining a target object to be tracked, comprising:
selecting blank video frames as backgrounds, and comparing the blank video frames with the video frames one by one;
identifying a foreground area in a current video frame by using a background subtraction method to obtain a foreground image block in the current video frame;
and dividing the foreground image block to determine a plurality of targets to be tracked.
3. The multi-target object trajectory generation method of claim 2, wherein: determining a plurality of targets to be tracked, comprising:
obtaining a foreground image block in a current video frame;
identifying local feature points in the foreground image block by adopting an image identification algorithm;
and acquiring a difference result of global features in the foreground image block according to the local feature points, and dividing the foreground image block according to the difference result to obtain a plurality of targets to be tracked.
4. The multi-target object trajectory generation method of claim 3, wherein: the image recognition algorithm for the local feature points comprises: a blob detection algorithm and a corner detection algorithm, wherein the blob detection algorithm comprises Laplacian of Gaussian operator detection and detection of the Hessian matrix of pixel points and its determinant value; the corner detection algorithm comprises Harris corner feature extraction and FAST corner feature extraction.
5. The multi-target object trajectory generation method of claim 4, wherein: the position frame of the target object to be tracked is formed by generating, according to the two types of local feature points (blobs and corners), a contour where the foreground image block of the target object to be tracked has color and gray-level differences from its surroundings.
6. The multi-target object trajectory generation method of claim 5, wherein: further comprising:
dividing the obtained foreground image blocks of the plurality of targets to be tracked into a plurality of small blocks, and calculating the histogram value and the LBP feature value of each small block to form a high-dimensional vector representing that small block;
and capturing the target to be tracked according to the high-dimensional vectors of the small blocks.
7. A multi-target object trajectory generation system, characterized by: the multi-target object trajectory generation system generates a multi-target object movement trajectory in video data by using the multi-target object trajectory generation method of any one of claims 1 to 6; the multi-target object trajectory generation system includes:
the target object to be tracked identification module is used for framing the acquired video data, selecting a blank video frame picture as a background, and determining the target object to be tracked in the framed video frame picture by adopting a background subtraction method;
the framing module is used for generating a contour of the target object to be tracked according to the local feature points, and the generated contour forms a position frame to obtain the position frame of the target object to be tracked;
the central point calculation module is used for labeling the tracking characteristic points of the target object to be tracked corresponding to the position frames, selecting scattered points on the outline to form a geometric figure, and calculating the central points of the geometric figures corresponding to the position frames; and
and the track generation module is used for mapping the center point coordinates corresponding to the multi-target objects in the obtained continuous video frames to the corresponding video data according to the time sequence to generate a multi-target object moving track result in the video data.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the steps of the method of any one of claims 1 to 6 are implemented when the computer program is loaded and executed by the processor.
9. A storage medium storing a computer program, characterized in that the computer program, when loaded and executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202111468098.XA 2021-12-03 2021-12-03 Multi-target object track generation method, system, electronic equipment and storage medium Pending CN114299109A (en)

Publications (1)

Publication Number Publication Date
CN114299109A 2022-04-08


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination