CN112857746A - Tracking method and device of lamplight detector, electronic equipment and storage medium


Publication number
CN112857746A
Authority
CN
China
Prior art keywords: light detector, frame image, current, image, detector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011599041.9A
Other languages
Chinese (zh)
Inventor
周佳敏
Current Assignee
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202011599041.9A
Publication of CN112857746A

Classifications

    • G01M11/02 Testing optical properties (G01M11/00 Testing of optical apparatus; testing structures by optical methods not otherwise provided for)
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames


Abstract

The invention discloses a tracking method and apparatus for a light detector, an electronic device, and a storage medium. The method comprises: inputting the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image into a video frame parameter model to obtain the second center position of the light detector in the current frame image and the calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; if the absolute value of the calculated offset is greater than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position; and if the actual offset is greater than or equal to a distance threshold, determining that the light detector in the current frame image has moved relative to the light detector in the previous frame image. According to the embodiments of the invention, tracking can continue from adjacent frames even when the tracked object is occluded, which reduces system resource consumption and lowers CPU requirements.

Description

Tracking method and device of lamplight detector, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a tracking method and device of a light detector, electronic equipment and a storage medium.
Background
In annual vehicle inspection, light video detection is an important inspection item: video is used to confirm whether the light detector moves, preventing an inspection from being passed even though the light detector never moved. In the prior art, the position of the light detector is located by a detection algorithm and then tracked with a conventional Kernelized Correlation Filter (KCF).
When multiple objects are tracked with the KCF algorithm, CPU and system resource consumption is excessive; the conventional KCF algorithm easily loses a tracked object that becomes occluded, and once tracking is lost, the object cannot be tracked further.
Disclosure of Invention
The invention provides a tracking method and apparatus for a light detector, an electronic device, and a storage medium, which can keep tracking from adjacent frames when the tracked object is occluded, while reducing system resource consumption and lowering CPU requirements.
In a first aspect, an embodiment of the present invention provides a tracking method for a light detector, where the method includes:
inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; if the absolute value of the calculated offset is larger than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position;
and if the actual offset is greater than or equal to a distance threshold, determining that the light detector in the current frame image has moved relative to the light detector in the previous frame image.
Further, after inputting the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image into the video frame parameter model, the method further includes:
extracting the matrix of the penultimate layer of the video frame parameter model as the current feature of the light detector in the current frame image;
and storing the current characteristic, the second central position and the actual offset into a light detector database.
Further, after extracting the matrix of the penultimate layer of the video frame parameter model as the current feature of the light detector in the current frame image, the method further includes:
if the light detector in the first central position is blocked and the second central position is not empty, matching the current features with features in the light detector database;
and if the current features are matched with the features in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector.
Further, if the current feature is matched with a feature in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector includes:
if the current feature is matched with the feature in the light detector database, calculating the similarity between the current feature and the matched feature in the light detector database;
and if the similarity is higher than a preset similarity threshold, the object corresponding to the second center position in the current image frame is a light detector.
Further, after calculating the similarity between the current feature and the matching feature in the light detector database, the method further includes:
if the similarity is lower than the preset similarity threshold, determining that there is no tracking target in the current image frame and setting the second center position to null.
Further, if the absolute value of the calculated offset is greater than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position, including:
determining a predicted third center position in the previous frame image according to the calculated offset and the second center position;
and determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the predicted third central position, the set threshold and the first central position.
Further, the training of the video frame parameter model includes:
acquiring a historical video frame sequence of a light detector;
marking a target detection frame in each frame of image in the historical video frame sequence, a second central position of the light detector in the current frame of image and a calculated offset of the second central position of the light detector in the current frame of image relative to a first central position of the light detector in the previous frame of image;
and training a deep learning network on the two labeled adjacent frame images in the historical video frame sequence, the target detection frame in each frame image, the second center position of the light detector in the current frame image, and the calculated offset of the second center position relative to the first center position of the light detector in the previous frame image, until the deep learning network converges, to obtain the video frame parameter model.
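The label construction described above can be sketched as follows. This is an illustrative sketch, not the patent's actual code: for each pair of adjacent frames, the training label is the detector's center in the current frame plus its offset from the previous frame's center. The function name and the per-frame `centers` list are assumptions for illustration.

```python
def build_training_samples(frames, centers):
    """frames: list of frame images (in time order);
    centers: per-frame (x, y) detector center positions."""
    samples = []
    for i in range(1, len(frames)):
        prev_center = centers[i - 1]
        curr_center = centers[i]
        # Label offset: current center minus previous center, per axis.
        offset = (curr_center[0] - prev_center[0],
                  curr_center[1] - prev_center[1])
        samples.append({
            "prev_frame": frames[i - 1],
            "curr_frame": frames[i],
            "prev_center": prev_center,
            "label_center": curr_center,
            "label_offset": offset,
        })
    return samples
```

Each sample then corresponds to one (previous frame, first center, current frame) input triple for the network.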
In a second aspect, an embodiment of the present invention further provides a tracking apparatus for a light detector, where the apparatus includes:
the parameter determining module is used for inputting the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image into the video frame parameter model to obtain the second center position of the light detector in the current frame image and the calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; the video frame parameter model is obtained by training a deep learning network on the center positions of the light detector in the historical frame images and the offsets of the light detector between adjacent frames;
the offset determining module is used for determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position if the absolute value of the calculated offset is greater than zero;
and the state determination module is used for determining that the light detector in the current frame image has moved relative to the light detector in the previous frame image if the actual offset is greater than or equal to the distance threshold.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the tracking methods described above.
In a fourth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements any of the tracking methods described above.
By inputting the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image into the video frame parameter model, the second center position of the light detector in the current frame image and the calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image are obtained; if the absolute value of the calculated offset is greater than zero, the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image is determined according to the second center position and the first center position; and if the actual offset is greater than or equal to the distance threshold, the light detector in the current frame image is determined to have moved relative to the light detector in the previous frame image. This solves the problems that tracking is easily lost with the KCF algorithm and that, once tracking is lost, the tracked object cannot be tracked further; tracking can still proceed from adjacent frames when the tracked object is occluded, while system resource consumption is reduced and CPU requirements are lowered.
Drawings
Fig. 1 is a flowchart of a tracking method of a light detector according to a first embodiment of the present invention;
fig. 2 is a flowchart of a tracking method of a light detector according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a tracking device of a light detector according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It is to be further noted that, for the convenience of description, only a part of the structure relating to the present invention is shown in the drawings, not the whole structure.
Example one
Fig. 1 is a flowchart of a tracking method of a light detector according to an embodiment of the present invention. The embodiment is applicable to light video detection in annual vehicle inspection. The method may be executed by a tracking apparatus of the light detector, which may be implemented in software and/or hardware and may specifically be integrated into an electronic device having the storage and computing capabilities needed to track the light detector.
As shown in fig. 1, a tracking method of a light detector is provided, which specifically includes the following steps:
step 110, inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image;
in the embodiment of the invention, the last frame of image of the light detector can be understood as the image frame with the prior time sequence in the two adjacent frames of images in the video frame sequence of the light detector recorded by the light station in the vehicle annual inspection light detection process. The current frame image can be understood as an image frame which is recorded by a light station and has a time sequence after the adjacent image frame of the previous frame image in the video sequence of the light detector in the light detection process of the annual inspection of the vehicle. The first central position of the light detector in the previous frame of image may be understood as the position of the central point of each detection frame in the previous frame of image, where each frame of image in the complete video frame sequence of the light detector includes at least two light detectors at different positions and the light detector is always at the central point of the detection frame. The video frame parameter model can be understood as a parameter extraction model obtained by training a deep learning network according to historical video frames and parameter labels. The second center position of the light detector in the current frame image can be understood as the position of the center point of the detection frame in the current frame image, that is, the position of the center point of each detection frame in the current frame image extracted according to the video frame parameter model. The calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image can be understood as the calculated difference of the coordinates of the second center position in the current frame image and the coordinates of the first center position in the previous frame image, that is, the offset predicted by the video frame parameter model.
In the embodiment of the invention, before the complete video frame sequence of light detection is input into the video frame parameter model, the first frame image of the sequence is input into the model, and the detection frame containing the light detector and the center position of that detection frame are extracted from the first frame image. The second frame image is then taken as the current frame image and the first frame image as the previous frame image; the first frame image, its first center position, and the second frame image are input into the video frame parameter model, which extracts the second center position of the light detector in the second frame image and the calculated offset of the light detector in the second frame image relative to the light detector in the first frame image.
In the embodiment of the invention, when each pair of adjacent frames in the complete video frame sequence of light detection is input into the video frame parameter model, the center position and detection frame of each light detector in each frame image can be calculated, but the first center position and the second center position belonging to the same light detector cannot yet be associated with each other.
Step 120, if the absolute value of the calculated offset is greater than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second central position and the first central position;
in the embodiment of the invention, the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image can be understood as the real deviation value of the second central position coordinate in the current frame image of the same light detector from the first central position coordinate in the previous frame image.
In the embodiment of the invention, the predicted first center coordinate of the light detector in the previous frame image is determined from the second center coordinate of any light detector in the current frame image and its calculated offset relative to the previous frame image. The actual first center coordinate of the light detector in the previous frame image is then determined from the predicted first center coordinate and a set threshold. Finally, the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image is determined from the second center coordinate and the actual first center coordinate in the previous frame image.
In the embodiment of the invention, the second center coordinate of any light detector in the current frame image and the calculated offset of that light detector relative to the previous frame image are added coordinate-wise on the same axes. For example, if the second center coordinate of the light detector is (x, y) and the calculated offset is (a, b), the predicted first center coordinate of the light detector in the previous frame image is (x + a, y + b). Taking this predicted first center coordinate as the center and the set threshold as the radius, a search is performed in the previous frame image; if the first center coordinate of some light detector is found within that radius, it is taken as the actual first center coordinate, and the light detector found in the previous frame image and the light detector in the current frame image are considered to be the same light detector. Subtracting the actual first center coordinate in the previous frame image from the second center coordinate on the same axes yields the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image.
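The association step above can be sketched in code. This is a minimal illustration under the assumptions stated in the text: the predicted previous-frame center is (x + a, y + b), and a detector found within the set-threshold radius of it is treated as the same detector. Function and parameter names are illustrative, not from the patent.

```python
import math

def actual_offset(second_center, calc_offset, prev_centers, radius):
    """Recover the actual offset of a detector between adjacent frames.

    second_center: (x, y) center in the current frame.
    calc_offset:   (a, b) model-predicted offset.
    prev_centers:  (x, y) centers of detectors found in the previous frame.
    radius:        search radius (the 'set threshold' in the text).
    Returns (dx, dy), or None if no detector lies within the radius.
    """
    # Predicted first center in the previous frame: (x + a, y + b).
    px = second_center[0] + calc_offset[0]
    py = second_center[1] + calc_offset[1]
    for cx, cy in prev_centers:
        if math.hypot(cx - px, cy - py) <= radius:
            # Same detector: actual offset is the per-axis difference between
            # the current center and the actual previous-frame center.
            return (second_center[0] - cx, second_center[1] - cy)
    return None
```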
Step 130, if the actual offset is greater than or equal to the distance threshold, the light detector in the current frame image moves relative to the light detector in the previous frame image.
In the embodiment of the present invention, the distance threshold may be an empirically determined boundary value, the minimum movement of the light detector in the light detection video frame sequence that counts as movement, used to determine whether the light detector has moved.
In the embodiment of the invention, the actual offset distance of the light detector relative to the light detector in the previous frame image is determined according to the actual offset of the light detector relative to the light detector in the previous frame image. If the actual offset distance of the light detector relative to the light detector in the previous frame of image is greater than or equal to the distance threshold, it indicates that the light detector in the current frame of image moves relative to the light detector in the previous frame of image. And if the actual offset distance of the light detector relative to the light detector in the previous frame of image is smaller than the distance threshold, the light detector in the current frame of image does not move relative to the light detector in the previous frame of image.
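The movement decision reduces to comparing the offset distance against the threshold. A minimal sketch, assuming the offset distance is the Euclidean norm of the actual offset (the patent does not name the distance metric):

```python
import math

def has_moved(offset, distance_threshold):
    """Return True if the detector moved between adjacent frames.

    offset: (dx, dy) actual offset of the detector's center.
    distance_threshold: minimum displacement that counts as movement.
    """
    distance = math.hypot(offset[0], offset[1])
    return distance >= distance_threshold
```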
Further, after the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image are input into the video frame parameter model, the method further includes:
extracting the matrix of the penultimate layer of the video frame parameter model as the current feature of the light detector in the current frame image;
and storing the current characteristic, the second central position and the actual offset into a light detector database.
In the embodiment of the invention, the matrix of the penultimate layer can be understood as the matrix obtained after an image matrix is input into the video frame parameter model and undergoes N-1 convolution operations, where N is the total number of convolution operations in the model. The current feature of the light detector in the current frame image is therefore the preset number of feature values obtained by running the current frame image through those N-1 convolution operations. The light detector database can be understood as a database in which the parameters corresponding to each video frame in the current video frame sequence are stored in association.
In the embodiment of the invention, the matrix of the penultimate layer of the video frame parameter model is extracted as the current feature value of the light detector in the current frame image and used as the basis for distinguishing each light detector. The penultimate layer captures factors such as the position of each light detector in the layout of the current frame image, the light itself, surrounding features, and pixel conditions, which can be regarded as the attribute features of each light detector in the image. The current feature, the second center position, and the actual offset are stored in the light detector database so that the parameters of each video frame image, or the video frame image corresponding to given parameters, can be looked up.
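The penultimate-layer extraction can be sketched generically: run the input through all but the last layer of the model and keep the result as the feature. This is a hypothetical illustration; the layer representation (a list of callables) and function name are assumptions, not the patent's implementation.

```python
def penultimate_features(frame, layers):
    """Apply the first N-1 of the model's N layers to `frame` and return
    the intermediate result as the detector's current feature."""
    x = frame
    for layer in layers[:-1]:  # stop before the final (N-th) layer
        x = layer(x)
    return x
```

With a real network, the same effect is usually obtained by registering a hook on, or truncating the model at, its second-to-last layer.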
Further, before storing the current feature, the second center position, and the actual offset into the light detector database, the method further includes:
if the light detector at the first center position is occluded and the light detector at the second center position is not occluded, matching the current feature against features in the light detector database;
and if the current features are matched with the features in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector.
In the embodiment of the present invention, when the light detector at the first center position is occluded, the video frame parameter model cannot determine, from the previous frame image and the first center position of the light detector in it, the second center position of that light detector in the current frame image, and therefore cannot extract the feature corresponding to the light detector at the first center position. That the light detector at the second center position is not occluded means the model can determine the current feature and the second center position in the current frame image from the previous frame image, the first center position, and the current frame image, but it cannot determine the calculated offset of the current frame image relative to the previous frame image from the first center position, the current feature, and the second center position.
In the embodiment of the invention, when one or more frame images in the light detection video frame sequence are occluded and the light detector then reappears in the current frame image, the movement of the light detector cannot be determined by inputting the previous frame image, its first center position, and the current frame image into the video frame parameter model; feature matching in the light detector database is needed instead. The current feature corresponding to the current frame image is compared, one entry at a time in reverse chronological order, against the features stored in the light detector database; if the current feature matches the feature corresponding to any frame image in the database, the object corresponding to the second center position in the current image frame is determined to be the light detector.
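The reverse-chronological search can be sketched as below. This is a hedged illustration: the database is modeled as a time-ordered list of (feature, second center, actual offset) records, and `match` is a placeholder predicate; in the patent, a candidate match is further verified with a similarity check.

```python
def find_in_database(current_feature, database, match):
    """Search stored records newest-first for a feature match.

    database: list of (feature, second_center, offset) tuples in time order.
    match:    predicate taking (current_feature, stored_feature).
    Returns the newest matching record, or None if nothing matches.
    """
    for feature, center, offset in reversed(database):
        if match(current_feature, feature):
            return feature, center, offset
    return None
```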
Further, if the current feature is matched with a feature in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector includes:
if the current feature is matched with the feature in the light detector database, calculating the similarity between the current feature and the matched feature in the light detector database;
and if the similarity is higher than a preset similarity threshold, the object corresponding to the second center position in the current image frame is a light detector.
In the embodiment of the present invention, the similarity of the matching feature in the light detector database can be understood as a similarity value calculated from the current feature and the matching feature corresponding to some frame image in the database. The preset similarity threshold can be understood as the similarity boundary, chosen from experience, above which the two features most probably belong to the same detector. The similarity value is calculated based on the following formula:

similarity = ( Σ_{i=1}^{n} A_i B_i ) / ( √(Σ_{i=1}^{n} A_i²) · √(Σ_{i=1}^{n} B_i²) )

where A is the feature corresponding to the first center coordinate of the light detector in the previous frame image, B is the feature corresponding to the second center coordinate of the light detector in the current frame image, and n is the preset number of feature values.
In the embodiment of the invention, if the current feature matches a feature in the light detector database, the match may rest on only partial information, so the similarity between the current feature and the matching feature must be calculated for further verification. If the similarity is higher than the preset similarity threshold, the light detector corresponding to the current feature and the light detector corresponding to the matching feature in the database are the same light detector. The image frame corresponding to the matching feature, the current frame image corresponding to the current feature, and the corresponding parameter values are then input into the video frame parameter model for parameter calculation, the movement of the light detector during the occlusion is tracked, and steps 110 to 130 are repeated.
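One plausible reading of the patent's similarity formula is the cosine similarity of the two feature vectors; the sketch below implements that reading. `A` is the stored previous-frame feature and `B` the current-frame feature, each with the same preset number of values.

```python
import math

def similarity(a, b):
    """Cosine similarity of two equal-length feature vectors:
    dot(a, b) / (||a|| * ||b||). Returns a value in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

A result above the preset similarity threshold confirms the two features belong to the same light detector; below it, the search continues through the database.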
Further, after calculating the similarity between the current feature and the feature matched in the light detector database if the current feature is matched with the feature in the light detector database, the method further includes:
if the similarity is lower than the preset similarity threshold, there is no tracking target in the current image frame, and the second center position is set to null.
In the embodiment of the invention, if the similarity is lower than the preset similarity threshold, the light detector corresponding to the current feature and the light detector corresponding to the matching feature in the light detector database are different light detectors. The search over the features in the light detector database then continues; if no feature in the light detector database is matched after the search is completed, the second center position is set to null, and there is no light detector to be tracked in the current frame image.
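The re-identification logic described above (match the current feature against the light detector database, and fall back to a null second center position when nothing clears the threshold) can be sketched as follows; the detector identifiers, the example threshold value and the use of cosine similarity are illustrative assumptions, not the patent's fixed choices:

```python
import math

def _cosine(a, b):
    """Cosine similarity helper between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_detector(current_feature, database, threshold=0.8):
    """Search the light detector database for the stored feature most
    similar to the current feature.

    Returns the id of the matched light detector, or None when no stored
    feature scores strictly higher than the preset similarity threshold
    (i.e. the second center position is set to null and there is no
    light detector to be tracked in the current frame image).
    """
    best_id, best_sim = None, threshold
    for detector_id, stored_feature in database.items():
        sim = _cosine(current_feature, stored_feature)
        if sim > best_sim:  # must exceed the preset threshold
            best_id, best_sim = detector_id, sim
    return best_id
```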
A previous frame image of the light detector, a first center position of the light detector in the previous frame image and a current frame image are input into a video frame parameter model to obtain a second center position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; if the absolute value of the calculated offset is greater than zero, the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image is determined according to the second center position and the first center position; and if the actual offset is greater than or equal to the distance threshold, the light detector in the current frame image has moved relative to the light detector in the previous frame image. This solves the problems that tracking is easily lost with the KCF algorithm and that, once tracking is lost, the tracked object cannot continue to be tracked; tracking across adjacent frames can still be performed when the tracked object is occluded, while system resource consumption is reduced and CPU requirements are lowered.
Example two
Fig. 2 is a flowchart of a tracking method of a light detector according to a second embodiment of the present invention. The technical solution of this embodiment is further refined on the basis of the above technical solution, and mainly includes the following steps:
step 210, inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image;
step 220, if the absolute value of the calculated offset is greater than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second central position and the first central position;
step 230, determining a predicted third center position in the previous frame of image according to the calculated offset and the second center position; and determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the predicted third center position, the set threshold and the first center position.
In the embodiment of the present invention, the set threshold may be understood as a distance value, determined from actual empirical values with maximum probability, within which a predicted position is considered to coincide with the actual first center point position. The predicted third center position in the previous frame image may be understood as the position obtained when the video frame parameter model projects the second center position of the current frame image back into the previous frame image, according to the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image.
In the embodiment of the invention, the previous frame image of the light detector, the first center position of the light detector in the previous frame image and the current frame image are input into the video frame parameter model to obtain the second center position in the current frame image, the current feature value and the calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image. The actual offset of the light detector in the current frame image relative to the light detector in the previous frame image is then calculated from the second center position of the current frame image, the current feature value and the calculated offset.
Step 240, if the actual offset is greater than or equal to the distance threshold, the light detector in the current frame image moves relative to the light detector in the previous frame image.
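Steps 220 to 240 can be sketched as a small decision routine. The exact role of the set threshold (here: accept the model's prediction only when the back-projected third center position lies within the set threshold of the first center position) is an assumption inferred from the description above, as are the function and parameter names:

```python
import math

def detector_moved(first_center, second_center, calc_offset,
                   set_threshold, distance_threshold):
    """Decide whether the light detector moved between adjacent frames.

    first_center:  (x, y) center of the detector in the previous frame
    second_center: (x, y) center of the detector in the current frame
    calc_offset:   (dx, dy) offset calculated by the video frame
                   parameter model
    """
    if calc_offset == (0.0, 0.0):
        return False  # |calculated offset| not greater than zero
    # Step 230: predicted third center position in the previous frame,
    # obtained by projecting the second center back by the offset.
    third_center = (second_center[0] - calc_offset[0],
                    second_center[1] - calc_offset[1])
    # Accept the prediction only if it lands near the first center
    # (within the set threshold) -- an assumed interpretation.
    if math.dist(third_center, first_center) > set_threshold:
        return False  # prediction inconsistent with the first center
    # Actual offset from the second and first center positions.
    actual_offset = math.dist(second_center, first_center)
    return actual_offset >= distance_threshold  # step 240
```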
Further, the video frame parameter model includes:
acquiring a historical video frame sequence of a light detector;
marking a target detection frame in each frame of image in the historical video frame sequence, a second central position of the light detector in the current frame of image and a calculated offset of the second central position of the light detector in the current frame of image relative to a first central position of the light detector in the previous frame of image;
and training a deep learning network according to the marked two adjacent frames of images in the historical video frame sequence, the target detection frame in each frame of image, the second central position of the light detector in the current frame of image and the calculated offset of the second central position of the light detector in the current frame of image relative to the first central position of the light detector in the previous frame of image until the deep learning network is converged to obtain the video parameter model.
In the embodiment of the present invention, the historical video frame sequence may be understood as video frame sequences of light detectors collected under different shooting angles, different illumination intensities and different monitoring stations. The deep learning network may be understood as a neural network to be trained, used for learning the mapping relation between data.
In the embodiment of the invention, according to the historical video frame sequence, the deep learning network is trained after marking the target detection frame in each image in the historical video frame sequence, the second center position of the light detector in the current frame image and the calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image, until the $L_{off}$ calculated using formula (2) approaches 0. The convergence level $L_{off}$ of the offset training is based on the following formula:
$L_{off} = \dfrac{1}{N}\sum_{i=1}^{N}\left\| \hat{o}_i - \left(p_i^{t} - p_i^{t-1}\right) \right\|^2 \quad (2)$
wherein N is the number of light detectors detected in the current frame image, $\hat{o}_i$ is the offset of each light detector in the current frame image predicted by the network relative to the corresponding light detector in the previous frame image, and $p_i^{t-1}$ and $p_i^{t}$ are the positions corresponding to the previous frame image and the current frame image respectively.
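A minimal sketch of computing this convergence criterion follows, assuming $L_{off}$ is the mean squared difference between each network-predicted offset and the true inter-frame displacement; since the patent's formula image is not recoverable, the exact norm and the function name are assumptions:

```python
def offset_loss(pred_offsets, prev_positions, curr_positions):
    """Convergence level L_off of the offset training.

    pred_offsets:   list of (dx, dy) offsets predicted by the network
                    for each detected light detector
    prev_positions: list of (x, y) positions in the previous frame image
    curr_positions: list of (x, y) positions in the current frame image
    Training continues until the returned value approaches 0.
    """
    n = len(pred_offsets)
    total = 0.0
    for o, p_prev, p_curr in zip(pred_offsets, prev_positions, curr_positions):
        # Error between predicted offset and the true displacement
        dx = o[0] - (p_curr[0] - p_prev[0])
        dy = o[1] - (p_curr[1] - p_prev[1])
        total += dx * dx + dy * dy
    return total / n

# A perfect prediction contributes zero loss:
print(offset_loss([(2.0, 0.0)], [(0.0, 0.0)], [(2.0, 0.0)]))  # 0.0
```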
A previous frame image of the light detector, a first center position of the light detector in the previous frame image and a current frame image are input into a video frame parameter model to obtain a second center position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; if the absolute value of the calculated offset is greater than zero, the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image is determined according to the second center position and the first center position; and if the actual offset is greater than or equal to the distance threshold, the light detector in the current frame image has moved relative to the light detector in the previous frame image. This solves the problems that tracking is easily lost with the KCF algorithm and that, once tracking is lost, the tracked object cannot continue to be tracked; tracking across adjacent frames can still be performed when the tracked object is occluded, while system resource consumption is reduced and CPU requirements are lowered.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a tracking device of a light detector according to a third embodiment of the present invention. The device includes: a parameter determination module 310, an offset determination module 320, and a state determination module 330.
The parameter determination module is used for inputting a previous frame image of the light detector, a first center position of the light detector in the previous frame image and a current frame image into the video frame parameter model to obtain a second center position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image; the video frame parameter model is obtained by training a deep learning network on the center positions of the light detector in historical frame images and the offsets of the light detector between adjacent frames;
the offset determining module is used for determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position if the absolute value of the calculated offset is greater than zero;
and the state determination module is used for determining that the light detector in the current frame image has moved relative to the light detector in the previous frame image if the actual offset is greater than or equal to the distance threshold.
Further, the parameter determining module 310 is further specifically configured to:
extracting the matrix of the penultimate layer of the video frame parameter model as the current feature of the light detector in the current frame image;
and storing the current characteristic, the second central position and the actual offset into a light detector database.
Further, the parameter determining module is specifically further configured to:
if the light detector in the first central position is shielded and the light detector in the second central position is not shielded, matching the current features with features in the light detector database;
and if the current features are matched with the features in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector.
Further, the parameter determining module 310 is further specifically configured to:
if the current feature is matched with the feature in the light detector database, calculating the similarity between the current feature and the matched feature in the light detector database;
and if the similarity is higher than a preset similarity threshold, the object corresponding to the second center position in the current image frame is a light detector.
Further, the parameter determination module 310 is further configured to:
if the similarity is lower than the preset similarity threshold, there is no tracking target in the current image frame, and the second center position is set to null.
Further, the offset determining module 320 is specifically configured to:
determining a predicted third center position in the previous frame image according to the calculated offset and the second center position;
and determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the predicted third central position, the set threshold and the first central position.
Further, the offset determining module 320 is specifically configured to:
acquiring a historical video frame sequence of a light detector;
marking a target detection frame in each frame of image in the historical video frame sequence, a second center position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image, that is, calculating the offset of the same tracking target in two adjacent frames of images;
and training a deep learning network according to the marked two adjacent frames of images in the historical video frame sequence, the target detection frame in each frame of image, the second central position of the light detector in the current frame of image and the calculated offset of the light detector in the current frame of image relative to the light detector in the previous frame of image until the deep learning network is converged to obtain the video parameter model.
The tracking device of the lamplight detector provided by the embodiment of the invention can execute the tracking method of the lamplight detector provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to embodiment 4 of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 4 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 4, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the tracking method of the light detector provided by the embodiment of the present invention, the method includes:
inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image;
if the absolute value of the calculated offset is larger than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position;
and if the actual offset is larger than or equal to the distance threshold, the light detector in the current frame image moves relative to the light detector in the previous frame image.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements any one of the tracking methods for a light detector, and the method includes:
inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image;
if the absolute value of the calculated offset is larger than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position;
and if the actual offset is larger than or equal to the distance threshold, the light detector in the current frame image moves relative to the light detector in the previous frame image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions without departing from the scope of the invention. Therefore, although the present invention has been described in more detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A tracking method of a light detector is characterized by comprising the following steps:
inputting a previous frame image of the light detector, a first central position of the light detector in the previous frame image and a current frame image into a video frame parameter model to obtain a second central position of the light detector in the current frame image and a calculated offset of the light detector in the current frame image relative to the light detector in the previous frame image;
if the absolute value of the calculated offset is larger than zero, determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second center position and the first center position;
and if the actual offset is larger than or equal to the distance threshold, the light detector in the current frame image moves relative to the light detector in the previous frame image.
2. The method of claim 1, wherein after inputting the previous frame image of the light detector, the first center position of the light detector in the previous frame image, and the current frame image into the video frame parameter model, further comprising:
extracting the matrix of the penultimate layer of the video frame parameter model as the current feature of the light detector in the current frame image;
and storing the current characteristic, the second central position and the actual offset into a light detector database.
3. The method of claim 2, wherein prior to storing the current signature, the second center position, and the actual offset in a light detector database, further comprising:
if the light detector in the first central position is shielded and the light detector in the second central position is not shielded, matching the current features with the features in the light detector database;
and if the current features are matched with the features in the light detector database, determining that the object corresponding to the second center position in the current image frame is the light detector.
4. The method of claim 3, wherein determining that the object corresponding to the second center position in the current image frame is a light detector if the current feature matches a feature in the light detector database comprises:
if the current feature is matched with the feature in the light detector database, calculating the similarity between the current feature and the matched feature in the light detector database;
and if the similarity is higher than a preset similarity threshold, determining that the object corresponding to the second center position is the light detector.
5. The method of claim 4, wherein after calculating the similarity of the current signature to the matched signature in the light detector database if the current signature matches the signature in the light detector database, further comprising:
if the similarity is lower than the preset similarity threshold, determining that there is no tracking target in the current image frame, and setting the second center position to null.
6. The method of claim 1, wherein determining an actual offset of the light detector in the current frame image relative to the light detector in the previous frame image if the absolute value of the calculated offset is greater than zero based on the second center position and the first center position comprises:
determining a predicted third central position in the previous frame of image according to the calculated offset and the second central position;
and determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the predicted third center position, the set threshold and the first center position.
7. The method of claim 1, wherein the video frame parameter model comprises:
acquiring a historical video frame sequence of a light detector;
marking a target detection frame in each frame of image in the historical video frame sequence, a second center position of the light detector in the current frame image, and a calculated offset of the second center position of the light detector in the current frame image relative to the first center position of the light detector in the previous frame image;
and training a deep learning network according to the marked two adjacent frames of images in the historical video frame sequence, the target detection frame in each frame of image, the second central position of the light detector in the current frame of image and the calculated offset of the second central position of the light detector in the current frame of image relative to the first central position of the light detector in the previous frame of image until the deep learning network is converged to obtain the video parameter model.
8. A tracking device of a light detector is characterized by comprising:
the parameter determining module is used for inputting a last frame image of the lamplight detector, a first central position of the lamplight detector in the last frame image and a current frame image into the video frame parameter model to obtain a second central position of the lamplight detector in the current frame image and a calculated offset of the lamplight detector in the current frame image relative to the lamplight detector in the last frame image; the video frame parameter model is obtained by training the central position of the lamplight detector and the offset of the lamplight detector of the adjacent frame in the historical frame image of the lamplight detector by a deep learning network;
the offset determining module is used for determining the actual offset of the light detector in the current frame image relative to the light detector in the previous frame image according to the second central position and the first central position if the absolute value of the calculated offset is greater than zero;
and the state determination module is used for determining that the light detector in the current frame image has moved relative to the light detector in the previous frame image if the actual offset is greater than or equal to the distance threshold.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of tracking a light detector according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a tracking method of a light detector according to any one of claims 1 to 7.
CN202011599041.9A 2020-12-29 2020-12-29 Tracking method and device of lamplight detector, electronic equipment and storage medium Pending CN112857746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011599041.9A CN112857746A (en) 2020-12-29 2020-12-29 Tracking method and device of lamplight detector, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112857746A true CN112857746A (en) 2021-05-28

Family

ID=75998309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011599041.9A Pending CN112857746A (en) 2020-12-29 2020-12-29 Tracking method and device of lamplight detector, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112857746A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738668A (en) * 2019-09-29 2020-01-31 南京佑驾科技有限公司 method and system for intelligently controlling high beam and vehicle
CN114002706A (en) * 2021-10-29 2022-02-01 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Measuring method and device of photoelectric sight-stabilizing measuring system and computer equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080007620A1 (en) * 2006-07-06 2008-01-10 Nokia Corporation Method, Device, Mobile Terminal and Computer Program Product for a Camera Motion Detection Based Scheme for Improving Camera Input User Interface Functionalities
US20080075337A1 (en) * 2006-09-22 2008-03-27 Fujifilm Corporation Face image detecting apparatus and method of controlling same
CN107808392A (en) * 2017-10-31 2018-03-16 中科信达(福建)科技发展有限公司 The automatic method for tracking and positioning of safety check vehicle and system of open scene
CN108875480A (en) * 2017-08-15 2018-11-23 北京旷视科技有限公司 A kind of method for tracing of face characteristic information, apparatus and system
CN109635657A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Method for tracking target, device, equipment and storage medium
CN110738687A (en) * 2019-10-18 2020-01-31 上海眼控科技股份有限公司 Object tracking method, device, equipment and storage medium
CN110796093A (en) * 2019-10-30 2020-02-14 上海眼控科技股份有限公司 Target tracking method and device, computer equipment and storage medium
CN111238829A (en) * 2020-02-12 2020-06-05 上海眼控科技股份有限公司 Method and device for determining moving state, computer equipment and storage medium
CN111624634A (en) * 2020-05-11 2020-09-04 中国科学院深圳先进技术研究院 Satellite positioning error evaluation method and system based on deep convolutional neural network
CN111666922A (en) * 2020-07-02 2020-09-15 上海眼控科技股份有限公司 Video matching method and device, computer equipment and storage medium
CN112085767A (en) * 2020-08-28 2020-12-15 安徽清新互联信息科技有限公司 Passenger flow statistical method and system based on deep optical flow tracking

Similar Documents

Publication Publication Date Title
CN109214238B (en) Multi-target tracking method, device, equipment and storage medium
CN107832662B (en) Method and system for acquiring image annotation data
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
WO2023010758A1 (en) Action detection method and apparatus, and terminal device and storage medium
CN110263713B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN109918513B (en) Image processing method, device, server and storage medium
CN111680678B (en) Target area identification method, device, equipment and readable storage medium
CN111027605A (en) Fine-grained image recognition method and device based on deep learning
US11915500B2 (en) Neural network based scene text recognition
CN111444807B (en) Target detection method, device, electronic equipment and computer readable medium
CN110188766B (en) Image main target detection method and device based on convolutional neural network
CN112857746A (en) Tracking method and device of lamplight detector, electronic equipment and storage medium
CN114359383A (en) Image positioning method, device, equipment and storage medium
CN110688873A (en) Multi-target tracking method and face recognition method
CN114627339B (en) Intelligent recognition tracking method and storage medium for cross border personnel in dense jungle area
CN114565780A (en) Target identification method and device, electronic equipment and storage medium
CN110555352A (en) interest point identification method, device, server and storage medium
CN112215271A (en) Anti-occlusion target detection method and device based on multi-head attention mechanism
CN111401229A (en) Visual small target automatic labeling method and device and electronic equipment
CN111368915A (en) Drawing verification method, device, equipment and storage medium
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
WO2023273334A1 (en) Behavior recognition method and apparatus, and electronic device, computer-readable storage medium, computer program and computer program product
CN111819567A (en) Method and apparatus for matching images using semantic features
US11481881B2 (en) Adaptive video subsampling for energy efficient object detection
CN110059180B (en) Article author identity recognition and evaluation model training method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240301