CN112991396A - Target tracking method and device based on monitoring camera - Google Patents

Target tracking method and device based on monitoring camera

Publication number
CN112991396A
CN112991396A
Authority
CN
China
Prior art keywords
tracked target
target
real-time monitoring
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110514081.7A
Other languages
Chinese (zh)
Other versions
CN112991396B (en)
Inventor
吴中山 (Wu Zhongshan)
Current Assignee
Shenzhen Dimension Data Technology Co Ltd
Original Assignee
Shenzhen Dimension Data Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Dimension Data Technology Co Ltd
Priority claimed from CN202110514081.7A
Publication of CN112991396A
Application granted
Publication of CN112991396B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Abstract

The invention discloses a target tracking method and device based on a monitoring camera. The method comprises the following steps: framing the real-time monitoring video information to obtain real-time monitoring video frame sequence information; removing redundancy from the frame sequence information to obtain real-time monitoring video frame information; performing region extraction of the tracked target on the video frame information to obtain a tracked target region image; performing occlusion-region prediction and correction on the tracked target region image to obtain a corrected tracked target region image; binarizing the corrected image and constructing a binarization matrix of the tracked target from the result; and matching the binarization matrix of the tracked target one by one against binarization matrices of a preset tracking target at multiple angles, confirming from the matching results whether the tracked target is the preset tracking target. In embodiments of the invention, a target can be identified and tracked against the complex backgrounds of video monitoring with high accuracy.

Description

Target tracking method and device based on monitoring camera
Technical Field
The invention relates to the technical field of video target tracking, in particular to a target tracking method and device based on a monitoring camera.
Background
Video monitoring technology is now mature and is applied in many scenes, such as daily household monitoring, shopping-mall monitoring, and the monitoring of places with heavy pedestrian traffic. However, in places with complex backgrounds and dense crowds, the algorithms used in general video monitoring struggle to capture a user-specified monitoring target and to track it, so tracking accuracy is low. Meanwhile, the high-precision algorithms that can track targets in complex scenes have relatively high computational complexity and hardware cost, forcing users to bear a higher application cost.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a target tracking method and device based on a monitoring camera that can identify and track a target against the complex background of video monitoring with high accuracy, low computational complexity, and modest hardware requirements.
In order to solve the above technical problem, an embodiment of the present invention provides a target tracking method based on a monitoring camera, where the method includes:
acquiring real-time monitoring video information based on monitoring camera equipment, and performing framing processing on the real-time monitoring video information to acquire real-time monitoring video frame sequence information;
performing redundancy elimination processing on the real-time monitoring video frame sequence information based on preset interval time to obtain real-time monitoring video frame information;
performing region extraction processing on the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a region image of the tracked target;
performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region correction image;
carrying out binarization processing on the tracked target area correction image, and constructing a binarization matrix of the tracked target based on a binarization processing result;
and matching the binarization matrix of the tracked target with the binarization matrices of a plurality of angles of a preset tracked target one by one, and confirming whether the tracked target is the preset tracked target or not based on the matching result.
Optionally, the performing redundancy elimination processing on the real-time monitoring video frame sequence information based on the preset interval time to obtain the real-time monitoring video frame information includes:
judging whether all frames in the real-time monitoring video frame sequence information have tracked targets or not, and removing the real-time monitoring video frames without the tracked targets to form the initial redundancy removed real-time monitoring video frame sequence information;
and extracting the real-time monitoring video frame information from the real-time monitoring video frame sequence information with the removed initial redundancy according to a preset interval time to obtain the real-time monitoring video frame information.
Optionally, the performing, based on an image contour coding algorithm, region extraction processing on the tracked target on the real-time monitoring video frame information to obtain a tracked target region image includes:
performing target boundary extraction processing on the tracked target in the real-time monitoring video frame information based on an image analysis model to obtain boundary coordinate information of the tracked target;
constructing a boundary coding matrix based on the boundary coordinate information of the tracked target, and obtaining the boundary coding matrix of the tracked target;
carrying out binarization processing on the boundary coding matrix of the tracked target based on a preset threshold value to obtain a binarized boundary coding matrix;
arranging and coding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary;
and obtaining a tracked target area image based on the coding chain corresponding to the boundary.
Optionally, the constructing a boundary coding matrix based on the boundary coordinate information of the tracked target to obtain the boundary coding matrix of the tracked target includes:
processing the boundary coordinate information of the tracked target to be normalized to [1,255] to obtain a normalization result;
constructing a boundary matrix in a uint8 format based on the normalization result, and obtaining a boundary matrix of the tracked target;
and carrying out graying processing on the boundary matrix of the tracked target, and converting based on a graying processing result to obtain a boundary coding matrix of the tracked target.
Optionally, the arranging and encoding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary includes:
converting the binarized boundary coding matrix into a sequence row by row, each row in order; converting it column by column, each column in order; converting each diagonal, taken from the upper right corner to the lower left corner, in turn; and concatenating the results to obtain the coding chain corresponding to the boundary.
Optionally, the performing, based on a jigsaw algorithm, occlusion region prediction correction processing on the tracked target region image to obtain a tracked target region corrected image includes:
setting an example segmentation result, and extracting the outline of a segmentation mask region for the tracked target region image based on the example segmentation result;
fitting the contour based on a least square method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour, and solving the local maximum points of this distance;
taking, from the maximum-distance points between the pixel points on the outer contour segment of the fitting result and its center, the two maximum points with the largest distances;
decomposing the contour into two sections of contours by using the first two maximum points with the largest distance, and fitting the two sections of contours by using a least square method to obtain a second fitting result;
repeating the steps to obtain the target shape approximate fitting result of all the segmentation mask areas;
and obtaining a corrected image of the tracked target area based on the target shape approximate fitting result of all the segmentation mask areas.
Optionally, the obtaining a corrected image of the tracked target region based on the target shape approximate fitting result of all the segmentation mask regions includes:
and calculating the overlapping area of each fitting result in the target shape approximate fitting results of all the segmentation mask areas and all the segmentation mask areas, and merging the segmentation mask areas when the segmentation mask areas are contained in the corresponding fitting results to obtain a corrected image of the tracked target area.
Optionally, the performing binarization processing on the tracked target region corrected image, and constructing a binarization matrix of the tracked target based on a binarization processing result includes:
determining an image binarization algorithm based on the gray average value and the standard difference value of the tracked target area correction image;
calculating a threshold value based on the image binarization algorithm to obtain a calculated weighted threshold value;
carrying out binarization processing on the tracked target area correction image based on the weighting threshold value to obtain a binarization processing result;
and constructing a binarization matrix of the tracked target based on the binarization processing result.
Optionally, the matching of the binarization matrix of the tracked target one by one against the binarization matrices of a preset tracking target at multiple angles, and the confirming, based on the matching result, of whether the tracked target is the preset tracking target, include:
matching each element in the binarized matrix of the tracked target with each element in the binarized matrix of a plurality of angles of a preset tracked target one by one to obtain a matching result of the binarized matrix of each angle of the preset tracked target;
confirming whether the tracked target is a preset tracking target or not based on a matching result of the binarization matrix of each angle of the preset tracking target;
and if any matching result in the matching results of the binarization matrix of each angle with the preset tracking target is greater than the preset matching probability, confirming that the tracked target is the preset tracking target.
In addition, an embodiment of the present invention further provides a target tracking device based on a surveillance camera, where the device includes:
a video framing module: used for acquiring real-time monitoring video information from the monitoring camera device, and framing the real-time monitoring video information to obtain real-time monitoring video frame sequence information;
a redundancy processing module: used for performing redundancy removal on the real-time monitoring video frame sequence information based on a preset interval time to obtain real-time monitoring video frame information;
a region extraction module: used for performing region extraction of the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a tracked target region image;
a prediction correction module: used for performing occlusion-region prediction and correction on the tracked target region image based on a jigsaw algorithm to obtain a corrected tracked target region image;
a matrix construction module: used for binarizing the corrected tracked target region image and constructing a binarization matrix of the tracked target from the binarization result;
a matching module: used for matching the binarization matrix of the tracked target one by one against the binarization matrices of a preset tracking target at multiple angles, and confirming, based on the matching results, whether the tracked target is the preset tracking target.
In the embodiments of the invention, the method can identify and track a target against the complex background of video monitoring with high accuracy, while keeping computational complexity and hardware requirements low, thereby reducing the user's cost of use.
Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a target tracking method based on a monitoring camera in an embodiment of the present invention;
fig. 2 is a schematic structural composition diagram of a target tracking device based on a monitoring camera in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a target tracking method based on a monitoring camera according to an embodiment of the present invention.
As shown in fig. 1, a target tracking method based on a surveillance camera includes:
s11: acquiring real-time monitoring video information based on monitoring camera equipment, and performing framing processing on the real-time monitoring video information to acquire real-time monitoring video frame sequence information;
in a specific implementation of the invention, tracking a target requires a monitoring camera together with a computer or computing server that performs the tracking computation. The monitoring camera device captures video in real time to obtain the real-time monitoring video information; this video is then split into frames according to the frame rate at which the camera captures video, and each frame is tagged in capture order, yielding the real-time monitoring video frame sequence information.
Before target tracking begins, a preset tracking target must be entered on the computer or computing server: target images of the preset tracking target at multiple angles are input, and a binarization matrix is constructed for each angle.
S12: performing redundancy elimination processing on the real-time monitoring video frame sequence information based on preset interval time to obtain real-time monitoring video frame information;
in a specific implementation process of the present invention, the performing redundancy elimination processing on the real-time monitoring video frame sequence information based on the preset interval time to obtain the real-time monitoring video frame information includes: judging whether all frames in the real-time monitoring video frame sequence information have tracked targets or not, and removing the real-time monitoring video frames without the tracked targets to form the initial redundancy removed real-time monitoring video frame sequence information; and extracting the real-time monitoring video frame information from the real-time monitoring video frame sequence information with the removed initial redundancy according to a preset interval time to obtain the real-time monitoring video frame information.
Specifically, foreground extraction is performed on all video frames in the real-time monitoring video frame sequence information, and the extracted foreground is used to judge whether each frame contains the tracked target; frames that do not contain the tracked target are removed, forming the monitoring video frame sequence information with initial redundancy removed. Real-time monitoring video frames are then extracted from this sequence at the preset interval time, yielding the real-time monitoring video frame information.
This processing first removes the frames without the tracked target, reducing the amount of subsequent computation and improving efficiency; extracting the remaining frames at the preset interval then further reduces computation while preserving tracking precision.
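As a minimal sketch (not the patent's own code), the two-stage redundancy removal described above can be written as follows; `has_target` stands in for the foreground-extraction check, and all names here are assumptions:

```python
def remove_redundancy(frames, has_target, interval):
    """Stage 1: drop frames without the tracked target (initial redundancy removal).
    Stage 2: keep every `interval`-th remaining frame (preset interval time)."""
    with_target = [f for f in frames if has_target(f)]
    return with_target[::interval]

# Toy usage: frames are integers; "target present" means the value is even.
kept = remove_redundancy(list(range(20)), lambda f: f % 2 == 0, interval=3)
# kept == [0, 6, 12, 18]
```

In a real deployment the predicate would wrap a background-subtraction or detection step, but the two-stage structure stays the same.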
S13: performing region extraction processing on the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a region image of the tracked target;
in a specific implementation process of the present invention, the performing, based on an image contour coding algorithm, a region extraction process of a tracked target on the real-time monitoring video frame information to obtain a region image of the tracked target includes: performing target boundary extraction processing on the tracked target in the real-time monitoring video frame information based on an image analysis model to obtain boundary coordinate information of the tracked target; constructing a boundary coding matrix based on the boundary coordinate information of the tracked target, and obtaining the boundary coding matrix of the tracked target; carrying out binarization processing on the boundary coding matrix of the tracked target based on a preset threshold value to obtain a binarized boundary coding matrix; arranging and coding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary; and obtaining a tracked target area image based on the coding chain corresponding to the boundary.
Further, the constructing a boundary coding matrix based on the boundary coordinate information of the tracked target to obtain the boundary coding matrix of the tracked target includes: processing the boundary coordinate information of the tracked target to be normalized to [1,255] to obtain a normalization result; constructing a boundary matrix in a uint8 format based on the normalization result, and obtaining a boundary matrix of the tracked target; and carrying out graying processing on the boundary matrix of the tracked target, and converting based on a graying processing result to obtain a boundary coding matrix of the tracked target.
Further, the arranging and encoding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary includes: and sequentially converting the binary boundary coding matrix into a row according to the sequence of each row, converting each column into a row according to the sequence of each column, and sequentially converting each diagonal line data from the upper right corner to the lower left corner into a row for processing to obtain a coding chain corresponding to the boundary.
Specifically, target boundary extraction is performed on the tracked target in the real-time monitoring video frame information by an image analysis model, yielding the boundary coordinate information of the tracked target as a series of (x, y) coordinates. A boundary coding matrix is then constructed from these boundary coordinates, giving the boundary coding matrix of the tracked target; this matrix is binarized against a preset threshold to obtain the binarized boundary coding matrix. Finally, the binarized boundary coding matrix is permutation-coded into the coding chain corresponding to the boundary, from which the tracked target region image is obtained.
When constructing the boundary coding matrix from the boundary coordinates of the tracked target, the coordinates are first normalized into [1,255]; this normalization mainly removes the influence of differing absolute pixel positions across images. A boundary matrix in uint8 format is then constructed from the normalization result, which simplifies subsequent computation. After the boundary matrix of the tracked target is obtained, it is grayed, and the graying result is converted into the boundary coding matrix of the tracked target.
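One way to realize the normalization to [1,255] and the uint8 boundary matrix is sketched below (NumPy; the per-axis min-max scaling is an assumption, since the patent does not give the exact formula):

```python
import numpy as np

def boundary_matrix(coords):
    """Normalize (x, y) boundary coordinates into [1, 255] per axis and
    build a uint8 boundary matrix from them."""
    pts = np.asarray(coords, dtype=np.float64)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against a degenerate axis
    norm = 1.0 + (pts - lo) / span * 254.0   # min-max scale each axis into [1, 255]
    return np.rint(norm).astype(np.uint8)
```

Mapping onto [1,255] rather than [0,255] keeps 0 free to mean "no boundary", consistent with the uint8 format the text calls for.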
The permutation coding comprises row-wise, column-wise, and diagonal coding. Row-wise coding converts each row of the binarized coding matrix into one row in order; column-wise coding converts each column into one row in order; diagonal coding converts each diagonal of the coding matrix, taken from the upper right corner to the lower left corner, into one row.
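Assuming NumPy and the three scan orders just described, the permutation codings and their concatenation into a single coding chain might look like this (a sketch, not the patented implementation):

```python
import numpy as np

def permutation_chain(m):
    """Row-wise, column-wise, and diagonal scans of a binarized coding
    matrix, concatenated into one coding chain."""
    rows = m.reshape(-1)        # each row in order, flattened into one sequence
    cols = m.T.reshape(-1)      # each column in order
    n_rows, n_cols = m.shape
    # Diagonals from the upper-right corner down to the lower-left corner.
    diags = np.concatenate([m.diagonal(k) for k in range(n_cols - 1, -n_rows, -1)])
    return np.concatenate([rows, cols, diags])

# For [[1, 0], [0, 1]] the chain is rows (1,0,0,1), columns (1,0,0,1),
# then diagonals (0), (1,1), (0).
```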
S14: performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region correction image;
in a specific implementation process of the present invention, the performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region corrected image includes: setting an example segmentation result, and extracting the outline of a segmentation mask region for the tracked target region image based on the example segmentation result; fitting the contour based on a least square method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour and solving a local maximum value point of the fitting result; solving maximum value points from pixel points on the fitting result outer contour segment to the center of the fitting result, and taking the first two maximum value points with the largest distance; decomposing the contour into two sections of contours by using the first two maximum points with the largest distance, and fitting the two sections of contours by using a least square method to obtain a second fitting result; repeating the steps to obtain the target shape approximate fitting result of all the segmentation mask areas; and obtaining a corrected image of the tracked target area based on the target shape approximate fitting result of all the segmentation mask areas.
Further, the obtaining a corrected image of the tracked target region based on the target shape approximate fitting result of all the segmentation mask regions includes: and calculating the overlapping area of each fitting result in the target shape approximate fitting results of all the segmentation mask areas and all the segmentation mask areas, and merging the segmentation mask areas when the segmentation mask areas are contained in the corresponding fitting results to obtain a corrected image of the tracked target area.
Specifically, firstly, an example segmentation result is set, and then the contour of a segmentation mask area is extracted from the tracked target area image according to the example segmentation result, so as to obtain the contour of the segmentation mask area; fitting the contour by using a least square method so as to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour and solving a local maximum value point of the distance; then, solving maximum value points from pixel points on the outer contour segment of the fitting result to the center of the fitting result, and taking the first two maximum value points with the largest distance; then, decomposing the contour into two sections of contours by utilizing the first two maximum points with the largest distance, and fitting the two sections of contours by adopting a least square method to obtain a second fitting result; repeating the above process until the target shape approximate fitting result of all the segmentation mask areas is obtained; and then obtaining a corrected image of the tracked target area according to the target shape approximate fitting result of all the segmentation mask areas.
And calculating the overlapping area of each fitting result in the target shape approximate fitting results of all the segmentation mask areas and all the segmentation mask areas, and merging the segmentation mask areas when the segmentation mask areas are contained in the corresponding fitting results to obtain a corrected image of the tracked target area.
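The least-squares fitting and contour-splitting steps above can be illustrated with an algebraic circle fit (the Kasa method); the patent does not specify the fitted shape, so the circle model and all names here are assumptions:

```python
import numpy as np

def fit_circle(pts):
    """Least-squares (Kasa) circle fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2,
    where c = r^2 - cx^2 - cy^2, for the center (cx, cy) and radius r."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), np.sqrt(c + cx ** 2 + cy ** 2)

def split_at_farthest(pts):
    """Split a contour at the two points farthest from the fitted center,
    giving the two contour segments that are then refitted."""
    center, _ = fit_circle(pts)
    d = np.linalg.norm(pts - center, axis=1)
    i, j = sorted(np.argsort(d)[-2:])          # the two largest-distance points
    return pts[i:j + 1], np.vstack([pts[j:], pts[:i + 1]])
```

Repeating fit and split on each new segment yields the successive approximate fitting results the text describes.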
S15: carrying out binarization processing on the tracked target area correction image, and constructing a binarization matrix of the tracked target based on a binarization processing result;
in a specific implementation process of the present invention, the performing binarization processing on the corrected image of the tracked target region, and constructing a binarization matrix of the tracked target based on a binarization processing result includes: determining an image binarization algorithm based on the gray average value and the standard difference value of the tracked target area correction image; calculating a threshold value based on the image binarization algorithm to obtain a calculated weighted threshold value; carrying out binarization processing on the tracked target area correction image based on the weighting threshold value to obtain a binarization processing result; and constructing a binarization matrix of the tracked target based on the binarization processing result.
Specifically, the corrected tracked target region image comprises an original image part and a correction-supplemented image part. The gray mean and standard deviation of the corrected image are computed, each region is classified as original or correction-supplemented according to these statistics, and the image binarization algorithm is determined accordingly. The pixels of the original image part are assigned a global threshold, while the pixels of the correction-supplemented part are assigned a weighted combination of a global threshold and a local threshold, i.e., a weighted threshold obtained by weighting the two. The corrected tracked target image is then binarized with the relevant threshold to obtain the binarization result, and the binarization matrix of the tracked target is constructed from that result.
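A simplified sketch of this weighted-threshold binarization follows; the gray mean as the global threshold and the corrected region's mean as the local threshold are stand-ins, since the patent does not fix the threshold formulas:

```python
import numpy as np

def weighted_binarize(img, corrected_mask, w=0.5):
    """Binarize: original pixels against a global threshold; corrected
    (supplemented) pixels against a weighted global/local threshold."""
    global_t = img.mean()                        # global threshold stand-in
    out = np.zeros(img.shape, dtype=np.uint8)
    orig = ~corrected_mask
    out[orig] = img[orig] > global_t             # original image part
    if corrected_mask.any():
        local_t = img[corrected_mask].mean()     # local threshold stand-in
        t = w * global_t + (1 - w) * local_t     # weighted threshold
        out[corrected_mask] = img[corrected_mask] > t
    return out
```

In practice the global threshold would come from the chosen binarization algorithm (e.g. an Otsu-style method) rather than the plain mean used here for brevity.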
S16: and matching the binarization matrix of the tracked target with the binarization matrices of a plurality of angles of a preset tracked target one by one, and confirming whether the tracked target is the preset tracked target or not based on the matching result.
In a specific implementation process of the present invention, the matching of the binarization matrix of the tracked target one by one with the binarization matrices of a plurality of angles of a preset tracking target, and the confirming of whether the tracked target is the preset tracking target based on the matching results, include: matching each element in the binarization matrix of the tracked target with the corresponding element in the binarization matrix of each angle of the preset tracking target, so as to obtain a matching result for the binarization matrix of each angle; confirming whether the tracked target is the preset tracking target based on these per-angle matching results; and confirming the tracked target as the preset tracking target if any one of the matching results is greater than a preset matching probability.
Specifically, each element in the binarization matrix of the tracked target is compared one by one with the corresponding element in the binarization matrix of each angle of the preset tracking target, yielding a matching result for each angle; whether the tracked target is the preset tracking target is then determined from these results, and if any matching result exceeds the preset matching probability, the tracked target is determined to be the preset tracking target.
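As a concrete illustration, the per-angle matching described above can be sketched as an element-wise comparison. The matrices are assumed here to have been normalized to a common size beforehand, and `match_prob` stands in for the preset matching probability (the patent fixes neither):

```python
import numpy as np

def match_against_angles(target_bin, angle_templates, match_prob=0.85):
    # Element-wise agreement ratio against each preset-angle binarization matrix;
    # the target is confirmed if any single angle exceeds the preset probability.
    results = [float(np.mean(target_bin == tpl)) for tpl in angle_templates]
    confirmed = any(r > match_prob for r in results)
    return confirmed, results
```

A perfect template yields a ratio of 1.0, the inverted template 0.0, so a single well-matching angle suffices to confirm the target.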
In the embodiment of the invention, the method can identify and track a target against the complex background of video monitoring with high accuracy, while keeping both the computational complexity and the hardware requirements low, thereby reducing the cost of use for the user.
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a target tracking device based on a monitoring camera according to an embodiment of the present invention.
As shown in fig. 2, an object tracking apparatus based on a surveillance camera includes:
the video framing module 21: used for acquiring real-time monitoring video information based on a monitoring camera device, and performing framing processing on the real-time monitoring video information to obtain real-time monitoring video frame sequence information;
In the specific implementation process of the invention, when a target needs to be tracked, a monitoring camera is required, together with a computer device or computing server that performs the tracking computation with it. Real-time video is captured by the monitoring camera device to obtain the real-time monitoring video information; the video is then divided into frames according to the exposure rate of the camera, and each video frame is labeled in acquisition order, thereby obtaining the real-time monitoring video frame sequence information.
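As an illustration, the framing-and-labeling step described above can be sketched as follows. The frames themselves would come from a decoder such as `cv2.VideoCapture` (omitted here), and `fps` stands for the camera's capture (exposure) rate:

```python
def to_frame_sequence(frames, fps):
    # Label each captured frame with its acquisition index and timestamp,
    # producing the real-time monitoring video frame sequence information.
    return [{"index": i, "time": i / fps, "frame": f}
            for i, f in enumerate(frames)]
```

The acquisition-order index is what later stages use to refer back to individual frames of the sequence.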
When target tracking monitoring is performed, a preset tracking target needs to be entered on the computer or computing server: target images of the preset tracking target from a plurality of angles are input, and a binarization matrix is constructed for each of these angles.
the redundancy processing module 22: used for performing redundancy elimination processing on the real-time monitoring video frame sequence information based on a preset interval time to obtain real-time monitoring video frame information;
In a specific implementation process of the present invention, the performing of redundancy elimination processing on the real-time monitoring video frame sequence information based on the preset interval time to obtain the real-time monitoring video frame information includes: judging whether a tracked target is present in each frame of the real-time monitoring video frame sequence information, and removing the real-time monitoring video frames in which no tracked target is present, so as to form the real-time monitoring video frame sequence information with initial redundancy removed; and extracting frames from this sequence according to the preset interval time to obtain the real-time monitoring video frame information.
Specifically, foreground extraction is performed on every video frame in the real-time monitoring video frame sequence information, and the extracted foreground is used to judge whether a tracked target is present in each frame; frames containing no tracked target are removed, forming the monitoring video frame sequence information with initial redundancy removed. Real-time monitoring video frames are then extracted from this sequence according to the preset interval time, thereby obtaining the real-time monitoring video frame information.
Through this processing, the video frames containing no tracked target are first removed, which reduces the amount of subsequent computation and improves computational efficiency; extracting the remaining real-time monitoring video frames at the preset interval time then further reduces the subsequent computation while preserving the tracking accuracy.
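The two-stage redundancy removal above can be sketched as follows. Here `has_target` stands in for the foreground-extraction test, and `interval` for the preset interval, expressed as a frame stride rather than a wall-clock time (an assumption for the sketch):

```python
def remove_redundancy(sequence, has_target, interval):
    # Stage 1: drop frames in which no tracked target is present.
    kept = [item for item in sequence if has_target(item)]
    # Stage 2: sample the remaining frames at the preset interval.
    return kept[::interval]
```

Stage 1 performs the initial redundancy removal; stage 2 is the interval-based extraction that yields the real-time monitoring video frame information.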
the region extraction module 23: used for performing region extraction of the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a tracked target region image;
In a specific implementation process of the present invention, the performing of region extraction of the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a tracked target region image includes: performing target boundary extraction processing on the tracked target in the real-time monitoring video frame information based on an image analysis model to obtain boundary coordinate information of the tracked target; constructing a boundary coding matrix based on the boundary coordinate information of the tracked target to obtain the boundary coding matrix of the tracked target; performing binarization processing on the boundary coding matrix of the tracked target based on a preset threshold value to obtain a binarized boundary coding matrix; arranging and encoding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary; and obtaining the tracked target region image based on the coding chain corresponding to the boundary.
Further, the constructing of a boundary coding matrix based on the boundary coordinate information of the tracked target to obtain the boundary coding matrix of the tracked target includes: normalizing the boundary coordinate information of the tracked target to the range [1, 255] to obtain a normalization result; constructing a boundary matrix in uint8 format based on the normalization result to obtain the boundary matrix of the tracked target; and performing graying processing on the boundary matrix of the tracked target and converting the graying result to obtain the boundary coding matrix of the tracked target.
Further, the arranging and encoding of the binarized boundary coding matrix to obtain the coding chain corresponding to the boundary includes: converting the binarized boundary coding matrix into a single row row by row in sequence, converting it into a row column by column in sequence, and converting each diagonal line of data from the upper right corner to the lower left corner into a row in sequence, thereby obtaining the coding chain corresponding to the boundary.
Specifically, target boundary extraction processing is performed on the tracked target in the real-time monitoring video frame information by an image analysis model, yielding the boundary coordinate information of the tracked target as a series of (x, y) coordinates. After the boundary coordinate information is obtained, the boundary coding matrix is constructed from it, giving the boundary coding matrix of the tracked target; this matrix is binarized according to a preset threshold value to obtain the binarized boundary coding matrix; and finally the tracked target region image is obtained from the coding chain corresponding to the boundary of the binarized boundary coding matrix.
When the boundary coding matrix is constructed from the boundary coordinate information of the tracked target, the boundary coordinates first need to be normalized to the range [1, 255]; this normalization mainly eliminates the influence on the image of differing absolute pixel positions, and produces the normalization result. A boundary matrix in uint8 format is then constructed from the normalization result, which facilitates subsequent computation. After the boundary matrix of the tracked target is obtained, graying processing is performed on it, and the graying result is converted to obtain the boundary coding matrix of the tracked target.
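A minimal sketch of the normalization and uint8 boundary-matrix construction might look as follows. Rasterizing the normalized coordinates into a 256x256 matrix is an assumption of the sketch, since the patent does not fix the matrix dimensions:

```python
import numpy as np

def boundary_matrix(coords):
    # Normalize (x, y) boundary coordinates to [1, 255] so that differing
    # absolute pixel positions no longer matter, then rasterize into uint8.
    pts = np.asarray(coords, dtype=float)
    mins = pts.min(axis=0)
    spans = np.maximum(pts.max(axis=0) - mins, 1e-9)  # avoid division by zero
    norm = (1 + np.round((pts - mins) / spans * 254)).astype(int)  # in [1, 255]
    mat = np.zeros((256, 256), dtype=np.uint8)
    mat[norm[:, 1], norm[:, 0]] = 255  # row = y, column = x
    return norm, mat
```

The resulting matrix is already single-channel, so the "graying" step of the source would reduce to a format conversion under this reading.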
The permutation coding comprises a row-wise coding mode, a column-wise coding mode and a diagonal coding mode. Row-wise coding converts each row of the binarized coding matrix into a single row in sequence; column-wise coding converts each column of the coding matrix into a row in sequence; diagonal coding converts each diagonal line of data of the coding matrix, from the upper right corner to the lower left corner, into a row.
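The three permutation-coding modes can be sketched as follows; concatenating the traversals of the binarized matrix yields the coding chain corresponding to the boundary:

```python
import numpy as np

def coding_chains(mat):
    # Row-wise: each row in order; column-wise: each column in order;
    # diagonal: each diagonal line from the upper-right to the lower-left corner.
    h, w = mat.shape
    row_chain = mat.flatten().tolist()
    col_chain = mat.T.flatten().tolist()
    diag_chain = []
    for k in range(w - 1, -h, -1):  # offsets sweep from top-right to bottom-left
        diag_chain.extend(np.diagonal(mat, offset=k).tolist())
    return row_chain, col_chain, diag_chain
```

For a 2x2 matrix [[1, 2], [3, 4]], the diagonal chain starts with the single top-right element 2, then the main diagonal 1, 4, then the bottom-left element 3.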
the prediction correction module 24: used for performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region corrected image;
In a specific implementation process of the present invention, the performing of occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain the tracked target region corrected image includes: setting an instance segmentation result, and extracting the contour of each segmentation mask region from the tracked target region image based on the instance segmentation result; fitting the contour by the least squares method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour, and solving the local maximum points of this distance; among the pixel points on the outer contour segment of the fitting result, taking the first two maximum points farthest from the center of the fitting result; decomposing the contour into two contour sections at these two points, and fitting the two sections by the least squares method to obtain a second fitting result; repeating the above steps until approximate shape fitting results are obtained for all segmentation mask regions; and obtaining the tracked target region corrected image based on the approximate shape fitting results of all the segmentation mask regions.
Further, the obtaining of the tracked target region corrected image based on the approximate shape fitting results of all the segmentation mask regions includes: calculating the overlap area between each fitting result and every segmentation mask region, and merging a segmentation mask region into the corresponding fitting result when that region is contained within it, thereby obtaining the tracked target region corrected image.
Specifically, an instance segmentation result is first set, and the contour of each segmentation mask region is extracted from the tracked target region image according to this result. The contour is fitted by the least squares method to obtain a temporary rough fitting result; each coordinate point on the contour is traversed, the distance from the center of the fitting result to each point on the contour is calculated, and the local maximum points of this distance are solved. The maximum points from the pixel points on the outer contour segment of the fitting result to the center of the fitting result are then found, and the first two maximum points with the largest distance are taken; the contour is decomposed into two sections at these two points, and each section is fitted by the least squares method to obtain a second fitting result. The above process is repeated until approximate shape fitting results are obtained for all segmentation mask regions, from which the tracked target region corrected image is obtained.
The overlap area between each of these fitting results and every segmentation mask region is calculated, and a segmentation mask region is merged into the corresponding fitting result when it is contained within that result, yielding the tracked target region corrected image.
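The least-squares fitting loop can be illustrated with a simple algebraic circle fit; the patent does not name the fitted shape, so a circle is assumed here as the simplest least-squares model. `split_and_refit` performs one round of cutting the contour at its two most distant points and refitting each section:

```python
import numpy as np

def fit_circle(pts):
    # Algebraic (Kasa) least-squares circle fit: solve x^2 + y^2 + D x + E y + F = 0.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    D, E, F = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

def split_and_refit(contour):
    # One correction round: fit the full contour, locate the two contour points
    # farthest from the fitted center, cut the contour there, refit each section.
    (cx, cy), _ = fit_circle(contour)
    d = np.hypot(contour[:, 0] - cx, contour[:, 1] - cy)
    i, j = sorted(np.argsort(d)[-2:])
    seg1 = contour[i:j + 1]
    seg2 = np.vstack([contour[j:], contour[:i + 1]])
    return fit_circle(seg1), fit_circle(seg2)
```

In the source's loop this splitting is repeated per segmentation mask region until every region has an approximate shape fit; an ellipse or other model could be substituted for `fit_circle` without changing the structure.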
the matrix construction module 25: used for performing binarization processing on the tracked target region corrected image and constructing a binarization matrix of the tracked target based on the binarization processing result;
In a specific implementation process of the present invention, the performing of binarization processing on the tracked target region corrected image and the constructing of a binarization matrix of the tracked target based on the binarization processing result include: determining an image binarization algorithm based on the gray mean value and the standard deviation of the tracked target region corrected image; calculating thresholds based on the image binarization algorithm to obtain the weighted threshold; performing binarization processing on the tracked target region corrected image based on the weighted threshold to obtain a binarization processing result; and constructing the binarization matrix of the tracked target based on the binarization processing result.
Specifically, the tracked target region corrected image comprises an original image part and a corrected supplementary image part. The gray mean value and the standard deviation of the corrected image are first calculated, each region is classified as original image or corrected supplement according to the gray mean value and the standard deviation, and the image binarization algorithm is determined accordingly: the binarization thresholds of the pixels of the original image part are given by a global threshold, while the binarization thresholds of the pixels of the corrected supplementary image part are given by a weighted threshold, i.e., a threshold obtained by weighting the global threshold and a local threshold. The tracked target corrected image is then binarized with the corresponding thresholds to obtain a binarization processing result, and the binarization matrix of the tracked target is constructed from this result.
the matching module 26: used for matching the binarization matrix of the tracked target one by one with the binarization matrices of a plurality of angles of a preset tracking target, and confirming whether the tracked target is the preset tracking target based on the matching results.
In a specific implementation process of the present invention, the matching of the binarization matrix of the tracked target one by one with the binarization matrices of a plurality of angles of a preset tracking target, and the confirming of whether the tracked target is the preset tracking target based on the matching results, include: matching each element in the binarization matrix of the tracked target with the corresponding element in the binarization matrix of each angle of the preset tracking target, so as to obtain a matching result for the binarization matrix of each angle; confirming whether the tracked target is the preset tracking target based on these per-angle matching results; and confirming the tracked target as the preset tracking target if any one of the matching results is greater than a preset matching probability.
Specifically, each element in the binarization matrix of the tracked target is compared one by one with the corresponding element in the binarization matrix of each angle of the preset tracking target, yielding a matching result for each angle; whether the tracked target is the preset tracking target is then determined from these results, and if any matching result exceeds the preset matching probability, the tracked target is determined to be the preset tracking target.
In the embodiment of the invention, the method can identify and track a target against the complex background of video monitoring with high accuracy, while keeping both the computational complexity and the hardware requirements low, thereby reducing the cost of use for the user.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable storage medium; the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The target tracking method and device based on a monitoring camera provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A target tracking method based on a monitoring camera is characterized by comprising the following steps:
acquiring real-time monitoring video information based on monitoring camera equipment, and performing framing processing on the real-time monitoring video information to acquire real-time monitoring video frame sequence information;
performing redundancy elimination processing on the real-time monitoring video frame sequence information based on preset interval time to obtain real-time monitoring video frame information;
performing region extraction processing on the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a region image of the tracked target;
performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region correction image;
carrying out binarization processing on the tracked target area correction image, and constructing a binarization matrix of the tracked target based on a binarization processing result;
and matching the binarization matrix of the tracked target with the binarization matrices of a plurality of angles of a preset tracked target one by one, and confirming whether the tracked target is the preset tracked target or not based on the matching result.
2. The target tracking method according to claim 1, wherein the performing redundancy elimination processing on the real-time monitoring video frame sequence information based on the preset interval time to obtain real-time monitoring video frame information comprises:
judging whether all frames in the real-time monitoring video frame sequence information have tracked targets or not, and removing the real-time monitoring video frames without the tracked targets to form the initial redundancy removed real-time monitoring video frame sequence information;
and extracting the real-time monitoring video frame information from the real-time monitoring video frame sequence information with the removed initial redundancy according to a preset interval time to obtain the real-time monitoring video frame information.
3. The target tracking method according to claim 1, wherein the image contour coding algorithm-based region extraction processing of the tracked target is performed on the real-time monitoring video frame information to obtain a tracked target region image, and the method comprises:
performing target boundary extraction processing on the tracked target in the real-time monitoring video frame information based on an image analysis model to obtain boundary coordinate information of the tracked target;
constructing a boundary coding matrix based on the boundary coordinate information of the tracked target, and obtaining the boundary coding matrix of the tracked target;
carrying out binarization processing on the boundary coding matrix of the tracked target based on a preset threshold value to obtain a binarized boundary coding matrix;
arranging and coding the binarized boundary coding matrix to obtain a coding chain corresponding to the boundary;
and obtaining a tracked target area image based on the coding chain corresponding to the boundary.
4. The target tracking method of claim 3, wherein the constructing a boundary coding matrix based on the boundary coordinate information of the tracked target to obtain the boundary coding matrix of the tracked target comprises:
normalizing the boundary coordinate information of the tracked target to the range [1,255] to obtain a normalization result;
constructing a boundary matrix in a uint8 format based on the normalization result, and obtaining a boundary matrix of the tracked target;
and carrying out graying processing on the boundary matrix of the tracked target, and converting based on a graying processing result to obtain a boundary coding matrix of the tracked target.
5. The target tracking method according to claim 3, wherein the arranging and encoding the binarized boundary coding matrix to obtain a coding chain corresponding to a boundary comprises:
converting the binarized boundary coding matrix into a single row row by row in sequence, converting it into a row column by column in sequence, and converting each diagonal line of data from the upper right corner to the lower left corner into a row in sequence, to obtain the coding chain corresponding to the boundary.
6. The target tracking method according to claim 1, wherein the performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region corrected image comprises:
setting an instance segmentation result, and extracting the contour of a segmentation mask region from the tracked target region image based on the instance segmentation result;
fitting the contour by a least squares method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour, and solving the local maximum points of the distance;
solving maximum value points from pixel points on the fitting result outer contour segment to the center of the fitting result, and taking the first two maximum value points with the largest distance;
decomposing the contour into two sections of contours by using the first two maximum points with the largest distance, and fitting the two sections of contours by using a least square method to obtain a second fitting result;
repeating the steps to obtain the target shape approximate fitting result of all the segmentation mask areas;
and obtaining a corrected image of the tracked target area based on the target shape approximate fitting result of all the segmentation mask areas.
7. The target tracking method of claim 6, wherein said obtaining a corrected image of the tracked target region based on the target shape approximate fit of all the segmentation mask regions comprises:
and calculating the overlapping area of each fitting result in the target shape approximate fitting results of all the segmentation mask areas and all the segmentation mask areas, and merging the segmentation mask areas when the segmentation mask areas are contained in the corresponding fitting results to obtain a corrected image of the tracked target area.
8. The target tracking method according to claim 1, wherein the binarizing processing the tracked target region corrected image and constructing a binarized matrix of the tracked target based on a result of the binarizing processing includes:
determining an image binarization algorithm based on the gray mean value and the standard deviation of the tracked target region corrected image;
calculating a threshold value based on the image binarization algorithm to obtain a calculated weighted threshold value;
carrying out binarization processing on the tracked target area correction image based on the weighting threshold value to obtain a binarization processing result;
and constructing a binarization matrix of the tracked target based on the binarization processing result.
9. The target tracking method according to claim 1, wherein the matching the binarization matrix of the tracked target one by one with the binarization matrices of a plurality of angles of a preset tracking target, and the confirming whether the tracked target is the preset tracking target based on the matching result, comprise:
matching each element in the binarized matrix of the tracked target with each element in the binarized matrix of a plurality of angles of a preset tracked target one by one to obtain a matching result of the binarized matrix of each angle of the preset tracked target;
confirming whether the tracked target is a preset tracking target or not based on a matching result of the binarization matrix of each angle of the preset tracking target;
and if any matching result in the matching results of the binarization matrix of each angle with the preset tracking target is greater than the preset matching probability, confirming that the tracked target is the preset tracking target.
10. An object tracking device based on a surveillance camera, the device comprising:
a video framing module: used for acquiring real-time monitoring video information based on a monitoring camera device, and performing framing processing on the real-time monitoring video information to obtain real-time monitoring video frame sequence information;
a redundancy processing module: used for performing redundancy elimination processing on the real-time monitoring video frame sequence information based on a preset interval time to obtain real-time monitoring video frame information;
a region extraction module: used for performing region extraction of the tracked target on the real-time monitoring video frame information based on an image contour coding algorithm to obtain a tracked target region image;
a prediction correction module: used for performing occlusion region prediction correction processing on the tracked target region image based on a jigsaw algorithm to obtain a tracked target region corrected image;
a matrix construction module: used for performing binarization processing on the tracked target region corrected image and constructing a binarization matrix of the tracked target based on the binarization processing result;
a matching module: used for matching the binarization matrix of the tracked target one by one with the binarization matrices of a plurality of angles of a preset tracking target, and confirming whether the tracked target is the preset tracking target based on the matching results.
CN202110514081.7A 2021-05-12 2021-05-12 Target tracking method and device based on monitoring camera Active CN112991396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110514081.7A CN112991396B (en) 2021-05-12 2021-05-12 Target tracking method and device based on monitoring camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110514081.7A CN112991396B (en) 2021-05-12 2021-05-12 Target tracking method and device based on monitoring camera

Publications (2)

Publication Number Publication Date
CN112991396A true CN112991396A (en) 2021-06-18
CN112991396B CN112991396B (en) 2021-08-27

Family

ID=76337583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110514081.7A Active CN112991396B (en) 2021-05-12 2021-05-12 Target tracking method and device based on monitoring camera

Country Status (1)

Country Link
CN (1) CN112991396B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584269A (en) * 2018-10-17 2019-04-05 龙马智芯(珠海横琴)科技有限公司 A kind of method for tracking target
CN111932583A (en) * 2020-06-05 2020-11-13 西安羚控电子科技有限公司 Space-time information integrated intelligent tracking method based on complex background
CN112489076A (en) * 2020-12-06 2021-03-12 北京工业大学 Multi-target tracking method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116338579A (en) * 2022-11-08 2023-06-27 杭州昊恒科技有限公司 Positioning deviation rectifying method for personnel management
CN116338579B (en) * 2022-11-08 2023-09-12 杭州昊恒科技有限公司 Positioning deviation rectifying method for personnel management

Also Published As

Publication number Publication date
CN112991396B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN106296725B (en) Moving target real-time detection and tracking method and target detection device
CN104063883B A surveillance video summary generation method combining objects and key frames
CN108564579B (en) Concrete crack detection method and detection device based on time-space correlation
CN109685045B (en) Moving target video tracking method and system
CN105678213B Dual-mode automatic detection method for masked-person events based on video feature statistics
CN111242128B (en) Object detection method, device, computer readable storage medium and computer equipment
CN112257669A (en) Pedestrian re-identification method and device and electronic equipment
CN109886159B (en) Face detection method under non-limited condition
CN112800825B (en) Key point-based association method, system and medium
CN110276769B (en) Live broadcast content positioning method in video picture-in-picture architecture
CN112614136A (en) Infrared small target real-time instance segmentation method and device
CN113723399A (en) License plate image correction method, license plate image correction device and storage medium
CN112991396B (en) Target tracking method and device based on monitoring camera
CN111160107B (en) Dynamic region detection method based on feature matching
CN110414430B (en) Pedestrian re-identification method and device based on multi-proportion fusion
CN114359333A (en) Moving object extraction method and device, computer equipment and storage medium
CN112528994B (en) Free angle license plate detection method, license plate recognition method and recognition system
CN112330618B (en) Image offset detection method, device and storage medium
CN113052139A (en) Deep learning double-flow network-based climbing behavior detection method and system
CN113228105A (en) Image processing method and device and electronic equipment
CN110135224B (en) Method and system for extracting foreground target of surveillance video, storage medium and terminal
US20220207261A1 (en) Method and apparatus for detecting associated objects
CN114612907A (en) License plate recognition method and device
CN114998930A Heavily-occluded image set generation and heavily-occluded human target model training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant