CN117553756B - Off-target amount calculating method, device, equipment and storage medium based on target tracking - Google Patents


Info

Publication number
CN117553756B
CN117553756B (application CN202410037752.9A)
Authority
CN
China
Prior art keywords
target
frame image
key frame
points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410037752.9A
Other languages
Chinese (zh)
Other versions
CN117553756A (en)
Inventor
刘敏
常庆禹
管乃洋
凡遵林
王之元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese People's Liberation Army 32806 Unit
Original Assignee
Chinese People's Liberation Army 32806 Unit
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese People's Liberation Army 32806 Unit filed Critical Chinese People's Liberation Army 32806 Unit
Priority to CN202410037752.9A
Publication of CN117553756A
Application granted
Publication of CN117553756B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F42 AMMUNITION; BLASTING
    • F42B EXPLOSIVE CHARGES, e.g. FOR BLASTING, FIREWORKS, AMMUNITION
    • F42B 35/00 Testing or checking of ammunition
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a target-tracking-based off-target amount calculation method, apparatus, device, and storage medium. The method comprises: identifying a key frame image in an image sequence; determining the target landing point based on the target position in the key frame image; performing target marker-point identification around the landing point in the key frame image to obtain marker points; and computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the landing point. By detecting and tracking the moving target, the scheme detects and identifies the key frame image automatically and likewise recognizes the target marker points in the key frame image automatically; the off-target amount is then computed from the pixel coordinates of the landing point, the pixel coordinates of the marker points, and the calibration information. Because no manual intervention is needed at any stage, the computation time of the off-target amount is shortened and target-reporting efficiency is improved.

Description

Off-target amount calculating method, device, equipment and storage medium based on target tracking
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a target tracking-based off-target calculation method, apparatus, device, and storage medium.
Background
In a missile flight test, the off-target amount is the minimum relative distance between the missile and the target; it indicates with what error the missile hits the target, i.e. the missile hit accuracy, and is therefore one of the key parameters for evaluating missile performance. The off-target amount frequently needs to be computed in range projects: the target landing point is measured first, and the off-target amount is then computed from the landing point. Measuring the landing point is therefore the key stage, and the most important task in that measurement is to accurately find the key frame image at the moment the target is hit.
The traditional method for calculating the miss distance generally proceeds as follows: first, the target explosion is detected; then, among the images in the sequence preceding the explosion, the key frame image at the moment of target impact is determined by manual selection; finally, the marker points are selected manually and the off-target amount is computed.
Because the key frame image and the marker points in the images are selected manually, i.e. the key frame is searched for and the landing point is clicked by hand, the traditional method requires continuous human monitoring, consumes labor, computes the off-target amount inefficiently, and cannot support rapid target reporting.
These drawbacks remain to be overcome by those skilled in the art.
Disclosure of Invention
(I) Technical problem to be solved
In order to solve the problems in the prior art, the present disclosure provides a target-tracking-based off-target amount calculation method, apparatus, device, and storage medium, addressing the low efficiency of manual off-target calculation in the prior art.
(II) Technical scheme
In order to achieve the above purpose, the main technical scheme adopted in the present disclosure includes:
In a first aspect, the present disclosure provides a target-tracking-based off-target amount calculation method, including:
identifying a key frame image in an image sequence;
determining a target landing point based on the target position in the key frame image;
performing target marker-point identification around the target landing point in the key frame image to obtain marker points;
and computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point.
In an exemplary embodiment of the present disclosure, detecting and identifying the key frame image includes:
detecting a moving target in the image sequence;
tracking the moving target;
and, when the tracking result indicates that the target has disappeared, determining the frame immediately preceding the disappearance in the image sequence as the key frame image.
In an exemplary embodiment of the present disclosure, when performing moving-target detection in the image sequence, detecting the moving target in the first frame image of the image sequence using a Gaussian mixture model includes:
performing background training on the image sequence and modeling the complete background image to obtain a background model;
determining the foreground from the first frame image and the background model by the Gaussian-mixture method;
and identifying the foreground as the moving target.
In an exemplary embodiment of the present disclosure, a SiamRPN network is used to track the moving target in real time, where the SiamRPN network comprises a Siamese network for extracting target feature information and an RPN network for classification and regression.
In an exemplary embodiment of the present disclosure, tracking the moving target in real time with the SiamRPN network includes:
calibrating the position of the moving target in the first frame image of the image sequence;
extracting the target feature information φ(z) from the first frame image with the Siamese network;
training a regression model for the RPN network on a number of samples around the moving target;
cropping the high-probability region of the target in the current frame image to obtain a cropped image, the high-probability region being the area surrounding the target position in the previous frame image;
extracting the region feature information φ(x) from the cropped image with the Siamese network;
performing classification and regression on the target feature information φ(z) and the region feature information φ(x) with the RPN network to obtain the box position and score of every anchor;
selecting the anchor with the highest score as the predicted target position and outputting it as the network prediction;
judging the confidence of the current frame's network prediction and, if it exceeds a preset threshold, updating the template feature information φ(z);
and repeating the above steps until the image sequence has been read completely.
In an exemplary embodiment of the present disclosure, performing target marker-point identification around the target landing point in the key frame image to obtain the marker points includes:
finding the target region in the key frame image using color and cross-shape information;
searching for salient points in the target region, clustering them, and determining the four endpoints of the target;
and judging whether the four endpoints can be fitted to a parallelogram; if so, the target is a crosshair target, and the four endpoints are identified as its four vertices.
In an exemplary embodiment of the present disclosure, the method further comprises:
calibrating the world coordinates of the four vertices of the crosshair target in advance;
obtaining the pixel coordinates of the four vertices of the crosshair target in the key frame image;
computing a homography transformation matrix from the world coordinates and pixel coordinates of the four vertices;
determining the world coordinates of the target landing point from its pixel coordinates and the homography transformation matrix;
and determining the off-target amount from the world coordinates of the target landing point.
In a second aspect, the present disclosure further provides a target-tracking-based off-target amount calculation apparatus, including:
a key frame identification module for identifying the key frame image in the image sequence;
a landing point determination module for determining the target landing point based on the target position in the key frame image;
a marker point determination module for performing target marker-point identification around the target landing point in the key frame image to obtain the marker points;
and an off-target amount determination module for computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point.
In a third aspect, the present disclosure further provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods described above.
In a fourth aspect, the present disclosure also provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
(III) Beneficial effects
The beneficial effects of the embodiments of the present disclosure are that a target-tracking-based off-target amount calculation method, apparatus, device, and storage medium are provided: by detecting and tracking the moving target, the key frame image is detected and identified automatically without manual intervention; the target marker points in the key frame image are likewise recognized automatically; and the off-target amount is then obtained from the pixel coordinates of the marker points and the world coordinates of the landing point. Because the key frame image is found automatically and the off-target amount is computed automatically from the landing-point pixel coordinates, the marker-point pixel coordinates, and the calibration information, no manual intervention is needed at any stage, the off-target computation time is shortened, and target-reporting efficiency is improved.
Drawings
FIG. 1 is a flow chart of steps of a target tracking-based off-target calculation method provided in an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating the step S110 in FIG. 1 according to the present embodiment;
FIG. 3 is a flowchart illustrating the step S112 in FIG. 2 according to the present embodiment;
FIG. 4 is a flowchart illustrating the step S130 in FIG. 1 according to the present embodiment;
FIG. 5 is a flowchart illustrating the step S140 of FIG. 1 according to the present embodiment;
FIG. 6 is a flow chart of a target tracking-based off-target calculation method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of SiamRPN real-time tracking provided in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of cross hair vertex identification for object detection in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the composition of a target tracking-based off-target amount calculation device provided in another embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the internal structure of a computer system of an electronic device according to yet another embodiment of the present disclosure.
Detailed Description
For ease of understanding, the present disclosure is described in detail below through specific embodiments in conjunction with the accompanying drawings.
All technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
During a missile test, as the missile approaches the target it deviates from the intended target because of guidance-system errors, missile inertia, target maneuvering, external disturbances, and other factors, producing a certain off-target amount. For a missile or another moving object, the off-target amount mainly refers to the vertical and lateral deviation between the missile and the target as the missile passes the target.
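Expressed as a simple formula (the symbols below are chosen here for illustration and are not taken from the disclosure): if the target plane is spanned by a lateral axis y and a height axis z, the aim point is (y0, z0), and the missile crosses the plane at (ym, zm), then the lateral deviation, the height deviation, and the scalar off-target amount are

    Δy = ym - y0,  Δz = zm - z0,  ρ = sqrt(Δy² + Δz²),

where ρ equals the minimum relative distance when the plane-crossing point is the point of closest approach.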
In related approaches, off-target calculation requires manual intervention: the key frame is searched for by hand within a sequence of images and cannot be found automatically. Likewise, the marker points used to compute the off-target amount are selected manually and cannot be recognized automatically. The whole process is inefficient and time-consuming, and sometimes cannot meet a project's real-time requirements.
In view of the above drawbacks, the present disclosure provides a target tracking-based off-target calculation method, apparatus, device, and storage medium.
Fig. 1 is a flowchart of the steps of the target-tracking-based off-target amount calculation method provided in an embodiment of the present disclosure. As shown in fig. 1, the method includes the following steps:
step S110, detecting and identifying the key frame image.
Step S120, determining the target landing point based on the target position in the key frame image.
Step S130, performing target marker-point identification around the target landing point in the key frame image to obtain the marker points.
Step S140, computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point.
Through steps S110 to S140, the moving target is detected and tracked so that the key frame image is detected and identified automatically, without manual intervention; the target marker points in the key frame image are recognized automatically; and the off-target amount is then obtained from the pixel coordinates of the marker points and the world coordinates of the landing point. Because the whole chain, from key frame detection to the off-target computation based on the landing-point pixel coordinates, the marker-point pixel coordinates, and the calibration information, runs automatically, the off-target computation time is shortened and target-reporting efficiency is improved.
The method mainly solves two problems: on the one hand, the key frame image is detected and identified automatically through moving-target detection and target tracking, yielding the pixel coordinates of the target landing point; on the other hand, the off-target amount is computed automatically from the landing-point pixel coordinates, the automatically recognized marker-point pixel coordinates, and the calibration information. The specific steps of the target-tracking-based off-target amount calculation method are described below with reference to the embodiments.
As shown in fig. 1, in step S110, the key frame image is detected and identified.
In an exemplary embodiment of the present disclosure, fig. 2 is a flowchart of the sub-steps of step S110 in fig. 1. As shown in fig. 2, detecting and identifying the key frame image specifically includes:
Step S111, detecting a moving target in the image sequence;
Step S112, tracking the moving target;
Step S113, when the tracking result indicates that the target has disappeared, determining the frame immediately preceding the disappearance in the image sequence as the key frame image.
In an exemplary embodiment of the present disclosure, performing moving-target detection in the image sequence in step S111, using a Gaussian mixture model on the first frame image, includes:
first, performing background training on the image sequence and modeling the complete background image to obtain a background model, where the image sequence is either native or extracted from a video;
second, determining the foreground from the first frame image and the background model by the Gaussian-mixture method;
and finally, identifying the foreground as the moving target, completing moving-target detection.
In this step, the foreground in motion is detected mainly on the basis of the first frame image; the foreground is taken as the moving target and separated from the relatively static background image, thereby achieving detection and identification of the moving target. The image sequence undergoes long-duration background-adaptation training through the Gaussian mixture model: a complete background image is extracted and then modeled into the background model, so that when a target moves in front of this background it is recognized as foreground.
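As an illustration, the following minimal sketch implements this background-subtraction step, with OpenCV's MOG2 Gaussian-mixture estimator standing in for the model described here; the history length, variance threshold, and minimum blob area are illustrative assumptions, not values from this disclosure.

    import cv2

    def detect_moving_target(frames, min_area=200):
        # Background training: MOG2 adapts a per-pixel Gaussian mixture.
        subtractor = cv2.createBackgroundSubtractorMOG2(
            history=500, varThreshold=16, detectShadows=False)
        for idx, frame in enumerate(frames):
            fg_mask = subtractor.apply(frame)  # foreground = pixels off the background model
            fg_mask = cv2.morphologyEx(
                fg_mask, cv2.MORPH_OPEN,
                cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
            contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
            if blobs:
                # The largest foreground blob is taken as the moving target.
                target = max(blobs, key=cv2.contourArea)
                return idx, cv2.boundingRect(target)  # (frame index, (x, y, w, h))
        return None

In practice the early part of the sequence serves only to train the background model, and a detection would be accepted once the foreground blob is stable over a few consecutive frames.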
In an exemplary embodiment of the present disclosure, in step S112 a SiamRPN network is used to track the moving target in real time. The SiamRPN network comprises a Siamese (twin) network and an RPN network: the Siamese network extracts the image features of the template frame and of the detection frame to obtain feature maps, and the RPN network performs classification and regression on those feature maps, where classification distinguishes the target from the background in the key frame image and regression yields a more accurate target size and position.
In an exemplary embodiment of the present disclosure, fig. 3 is a flowchart of the sub-steps of step S112 in fig. 2. As shown in fig. 3, tracking the moving target in real time with the SiamRPN network in step S112 includes:
Step S1121, calibrating the position of the moving target in the first frame image of the image sequence;
Step S1122, extracting the target feature information φ(z) from the first frame image with the Siamese network;
Step S1123, training a regression model for the RPN network on a number of samples around the moving target;
Step S1124, cropping the high-probability region of the target in the current frame image to obtain a cropped image, the high-probability region being the area surrounding the target position in the previous frame image;
Step S1125, extracting the region feature information φ(x) from the cropped image with the Siamese network;
Step S1126, performing classification and regression on the target feature information φ(z) and the region feature information φ(x) with the RPN network to obtain the box position and score of every anchor;
Step S1127, selecting the anchor with the highest score as the predicted target position and outputting it as the network prediction;
Step S1128, judging the confidence of the current frame's network prediction and, if it exceeds a preset threshold, updating the template feature information φ(z);
Step S1129, repeating the above steps S1121 to S1128 until the image sequence has been read completely.
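For concreteness, a structural sketch of this loop follows. The callables siamese_features and rpn_classify_regress are hypothetical stand-ins for the Siamese branches and the RPN head of a trained SiamRPN model (the regression-model training of step S1123 is assumed done beforehand); the confidence threshold and search-region scale are assumed parameters.

    import numpy as np

    def scale_box(box, s):
        # Enlarge (x, y, w, h) by a factor s about its center.
        x, y, w, h = box
        cx, cy = x + w / 2.0, y + h / 2.0
        return (cx - w * s / 2.0, cy - h * s / 2.0, w * s, h * s)

    def crop(frame, box):
        # Clip the box to the frame and return the sub-image.
        x, y, w, h = (int(round(v)) for v in box)
        x, y = max(x, 0), max(y, 0)
        return frame[y:y + h, x:x + w]

    def track_sequence(frames, init_box, siamese_features, rpn_classify_regress,
                       conf_threshold=0.9, search_scale=3.0):
        # Steps S1121-S1122: calibrated first-frame box -> template feature phi(z).
        template = siamese_features(crop(frames[0], init_box))
        box, boxes = init_box, [init_box]
        for frame in frames[1:]:
            # Step S1124: crop the high-probability region around the last box.
            search = crop(frame, scale_box(box, search_scale))
            region = siamese_features(search)              # step S1125: phi(x)
            # Step S1126: box position and score for every anchor.
            anchors, scores = rpn_classify_regress(template, region)
            best = int(np.argmax(scores))                  # step S1127
            box = anchors[best]
            boxes.append(box)
            if scores[best] > conf_threshold:              # step S1128
                template = siamese_features(crop(frame, box))  # refresh phi(z)
        return boxes

A drop of the best score below the key-frame threshold (0.7 in the embodiment of fig. 7 described later) would be the point at which the previous frame is declared the key frame image.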
As shown in fig. 1, in step S120, a target landing point is determined based on a target position in the key frame image.
In an exemplary embodiment of the present disclosure, in step S120, the frame immediately preceding the target's disappearance having been obtained as the key frame image, the pixel coordinates of the target in that frame are taken as the target landing point; that is, the pixel position of the target in the key frame image is the landing point. Taking the target's pixel coordinates in the key frame image as the landing point is necessary for the subsequent off-target computation.
As shown in fig. 1, in step S130, target marker-point identification is performed around the target landing point in the key frame image to obtain the marker points.
In an exemplary embodiment of the present disclosure, fig. 4 is a flowchart of the sub-steps of step S130 in fig. 1. As shown in fig. 4, performing target marker-point identification around the target landing point in the key frame image in step S130 includes the following steps:
Step S131, finding the target region in the key frame image using color and cross-shape information;
Step S132, searching for salient points in the target region, clustering them, and determining the four endpoints of the target;
Step S133, judging whether the four endpoints can be fitted to a parallelogram; if so, the target is a crosshair target, and the four endpoints are identified as its four vertices.
In this embodiment, the crosshair target serves as the marker, so the four crosshair vertices can be identified directly when determining the marker points, which is simple and easy to implement.
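A rough sketch of this search follows, assuming a red crosshair viewed by a color camera; the HSV bounds, the corner-detector settings, and the parallelogram tolerance are illustrative assumptions.

    import cv2
    import numpy as np

    def find_cross_vertices(key_frame_bgr, tol=0.15):
        # Color information: mask out the (assumed red) target region.
        hsv = cv2.cvtColor(key_frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
        # Salient points inside the target region.
        pts = cv2.goodFeaturesToTrack(mask, maxCorners=40,
                                      qualityLevel=0.05, minDistance=10)
        if pts is None or len(pts) < 4:
            return None
        # Cluster the salient points into 4 groups; each center is one endpoint.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1)
        _, _, centers = cv2.kmeans(np.float32(pts.reshape(-1, 2)), 4, None,
                                   criteria, 10, cv2.KMEANS_PP_CENTERS)
        # Order the endpoints by angle about their mean.
        d = centers - centers.mean(axis=0)
        order = centers[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]
        # Parallelogram test: the two diagonals must share a midpoint.
        m1 = (order[0] + order[2]) / 2.0
        m2 = (order[1] + order[3]) / 2.0
        if np.linalg.norm(m1 - m2) > tol * np.linalg.norm(order[0] - order[2]):
            return None     # the endpoints do not fit a parallelogram
        return order        # the 4 crosshair vertices, in angular order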
As shown in fig. 1, in step S140, the off-target amount is computed from the pixel coordinates of the marker points and the world coordinates of the target landing point.
In an exemplary embodiment of the present disclosure, the off-target amount refers to the deviation, in the target plane, of the moving object's (e.g. the missile's) actual trajectory from the theoretical trajectory. With the crosshair target serving as the marker, fig. 5 is a flowchart of the sub-steps of step S140 in fig. 1. As shown in fig. 5, step S140 includes the following steps:
Step S141, calibrating the world coordinates of the four vertices of the crosshair target in advance;
Step S142, obtaining the pixel coordinates of the four vertices of the crosshair target in the key frame image;
Step S143, computing a homography transformation matrix from the world coordinates and pixel coordinates of the four vertices;
Step S144, determining the world coordinates of the target landing point from its pixel coordinates and the homography transformation matrix;
Step S145, determining the off-target amount from the world coordinates of the target landing point.
Fig. 6 is a flowchart of the target-tracking-based off-target amount calculation method provided in an embodiment of the present disclosure. The main idea is: first perform moving-target detection and, once the moving target is detected, track it. When the moving target hits the target plane, its appearance necessarily changes and tracking fails; the frame immediately before that frame is then the key frame image, and the target position identified in that previous frame is the target landing point. Target marker points are then identified in the key frame image, and finally the off-target amount is computed automatically from the recognized marker-point pixel coordinates and the pre-calibrated marker-point world coordinates.
As shown in fig. 6, in step S601, moving-target detection is performed. A Gaussian mixture model is adopted: the background is modeled so that the foreground separates from it, and the foreground is the moving object, achieving the purpose of detecting the moving object.
As shown in fig. 6, in step S602, it is determined whether or not a moving object is detected, and if no moving object is detected, the process returns to step S601 to continue the detection; if a moving object is detected, step S603 is continued.
As shown in fig. 6, in step S603, moving-target tracking is performed. The task of moving-target tracking is to find the position of the object of interest in every frame of the sequence. Its function is similar to target detection, but tracking is more reliable and faster. Moving-target tracking is the core of the method: whether the key frame image can be detected accurately depends mainly on whether the tracking result is accurate. The tracking method adopted in this embodiment is SiamRPN tracking; the SiamRPN network consists of a Siamese network, used to extract target feature information, and an RPN (Region Proposal Network), used for classification and regression.
Fig. 7 is a schematic flowchart of SiamRPN real-time tracking provided in an embodiment of the present disclosure. As shown in fig. 7, the tracking flow consists of the following parts:
First, the position of the tracked target in the first frame of the sequence is given initially; the template branch of the Siamese network then extracts the target feature information φ(z) from that frame, and N sample images around the target position are extracted to train the regression model in the RPN network. The initial position may be assigned manually.
SiamRPN tracking outputs target-matching scores, which are ranked. When the match score falls below 0.7, this embodiment considers that the target has hit; the frame preceding that frame (the current frame) is the key frame image and is passed as the input of the next stage.
Second, the image region where the target may appear in the current frame is cropped (usually the area around the target position in the previous frame), and the Siamese network extracts the feature information φ(x) from the cropped image. The surrounding area may be set to 3 times the target size, and the cropping range can be adjusted as needed, e.g. from 2.5 to 4 times the target size.
Third, the RPN network performs classification and regression on φ(z) and φ(x) to obtain the box position and score of every anchor; the anchor with the highest score is selected as the predicted target position, i.e. the network output.
Then, the confidence of the current frame's network prediction is judged; when it is above a set threshold T, the template information φ(z) is updated.
As shown in fig. 6, in step S604, it is determined whether the moving target was tracked successfully; if not, the flow proceeds to step S605; if so, the above steps are repeated until the sequence has been read completely and target tracking is finished.
As shown in fig. 6, in step S605, the frame preceding the current frame is identified as the key frame image.
As shown in fig. 6, in step S606, the pixel coordinates of the target in that previous frame image are acquired as the target landing point.
As shown in fig. 6, in step S607, target marker-point identification is performed in the key frame image. The target in this embodiment is a crosshair target, so for convenience no additional marker points need to be placed: the 4 crosshair vertices are identified directly.
Fig. 8 is a schematic diagram of crosshair vertex identification for target detection in an embodiment of the present disclosure; the four endpoints labeled there are the crosshair vertices. The crosshair detection principle is: find the approximate region of the crosshair target through color and cross-shape information, find the salient points of that region, cluster them, and thereby locate the four endpoints of the crosshair precisely. Finally, whether the four endpoints can be fitted to a parallelogram is judged to confirm that the found target is a crosshair target. Fig. 8 shows the target-detection and crosshair-vertex results for the test data.
As shown in fig. 6, in step S608, the off-target amount is computed from the landing-point pixel coordinates and the calibrated world coordinates.
In this embodiment, the off-target calculation is essentially a homography transformation. In homogeneous coordinates, a pixel point (x, y, 1) maps to a world point (x', y', 1) up to a scale factor s:

      [x']   [h11 h12 h13] [x]
    s [y'] = [h21 h22 h23] [y]
      [1 ]   [h31 h32 h33] [1]

In a more compact block representation:

    H = [ A   t ]
        [ vT  v ]

where A is a 2 x 2 non-singular matrix (its determinant is not equal to zero), t is a 2 x 1 translation vector, vT is a 1 x 2 row vector, and v is a scalar. The homography matrix is a homogeneous matrix: although it has 9 elements, only their ratios are significant, so the matrix is determined by 8 parameters.
In fig. 8, the vertex pixel coordinates of the target crosshair have already been obtained automatically by image processing, and the world coordinates of the 4 vertices can be calibrated in advance, so the corresponding homography transformation matrix can be computed from the pixel and world coordinates. Meanwhile, the landing-point pixel coordinates have been obtained by the target tracking of fig. 7; the landing point's world coordinates, i.e. the off-target amount, then follow from the landing-point pixel coordinates and the homography transformation matrix.
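The following minimal sketch puts this step together: the homography is estimated from the 4 calibrated vertex correspondences and the landing point is mapped into world coordinates. All numeric values are illustrative assumptions; the aim point is taken as the world origin, so the mapped landing point is directly the off-target amount.

    import cv2
    import numpy as np

    def off_target_amount(vertex_pixels, vertex_world, landing_pixel):
        # vertex_pixels / vertex_world: four (x, y) pairs; landing_pixel: (u, v).
        H, _ = cv2.findHomography(np.float32(vertex_pixels),
                                  np.float32(vertex_world))
        p = np.array([landing_pixel[0], landing_pixel[1], 1.0])
        w = H @ p
        w /= w[2]            # homogeneous normalization
        return w[:2]         # (lateral, height) deviation in world units

    # Assumed example: a crosshair with 1 m arms centered on the aim point.
    pixels = [(512, 90), (940, 512), (512, 930), (88, 512)]   # top/right/bottom/left
    world = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0), (-1.0, 0.0)]
    print(off_target_amount(pixels, world, landing_pixel=(540, 500)))

With exactly 4 correspondences findHomography returns the exact 8-parameter solution discussed above; with more calibrated points a robust method (e.g. RANSAC) could be used instead.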
In summary, the target-tracking-based off-target amount calculation method provided by the present disclosure has the following effects: by detecting and tracking the moving target, the key frame image is detected and identified automatically without manual intervention; the target marker points in the key frame image are recognized automatically; and the off-target amount is then obtained from the pixel coordinates of the marker points and the world coordinates of the landing point. Because the key frame image is found automatically and the off-target amount is computed automatically from the landing-point pixel coordinates, the marker-point pixel coordinates, and the calibration information, no manual intervention is needed at any stage, the off-target computation time is shortened, and target-reporting efficiency is improved.
The present disclosure further provides a target-tracking-based off-target amount calculation apparatus. Fig. 9 is a schematic diagram of the composition of the off-target amount calculation apparatus provided in another embodiment of the present disclosure. As shown in fig. 9, the off-target amount calculation apparatus 900 includes: a key frame identification module 910, a landing point determination module 920, a marker point determination module 930, and an off-target amount determination module 940.
The key frame identification module 910 is configured to identify the key frame image in the image sequence; the landing point determination module 920 is configured to determine the target landing point based on the target position in the key frame image; the marker point determination module 930 is configured to identify the target marker points in the key frame image to obtain the marker points; and the off-target amount determination module 940 is configured to compute the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point.
Referring now to FIG. 10, a schematic diagram of a computer system 400 suitable for use in implementing the electronic device of an embodiment of the present application is shown. The electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 10, the computer system 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores the various programs and data required for the operation of the system 400. The CPU 401, ROM 402, and RAM 403 are connected to one another by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output portion 407 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a LAN card, a modem, or the like. The communication section 409 performs communication processing via a network such as the internet. The drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 409 and/or installed from the removable medium 411. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 401.
It should be noted that the storage medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, carrying computer readable program code. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, and the like, or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software, or may be implemented by hardware. The described units may also be provided in a processor, wherein the names of the units do not in some cases constitute a limitation of the unit itself.
In another aspect, the present disclosure also provides a storage medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into that electronic device. The storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the following method steps:
identifying a key frame image in an image sequence;
determining a target landing point based on the target position in the key frame image;
performing target marker-point identification around the target landing point in the key frame image to obtain marker points;
and computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point.
It should be understood that the above description of specific embodiments is intended only to illustrate the technical approach and features of the present disclosure and to enable those skilled in the art to understand and implement it; the present disclosure is not limited to the specific embodiments described. All changes or modifications that fall within the scope of the appended claims are intended to be embraced therein.

Claims (7)

1. A target tracking-based off-target amount calculating method, comprising:
identifying a key frame image in an image sequence;
determining a target landing point based on the target position in the key frame image;
performing target marker-point identification around the target landing point in the key frame image to obtain marker points;
computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point;
the method further comprising:
calibrating the world coordinates of the four vertices of a crosshair target in advance;
obtaining the pixel coordinates of the four vertices of the crosshair target in the key frame image;
computing a homography transformation matrix from the world coordinates and pixel coordinates of the four vertices;
determining the world coordinates of the target landing point from its pixel coordinates and the homography transformation matrix;
determining the off-target amount from the world coordinates of the target landing point;
wherein identifying the key frame image comprises:
detecting a moving target in the image sequence;
tracking the moving target;
when the tracking result indicates that the target has disappeared, determining the frame immediately preceding the disappearance in the image sequence as the key frame image;
and wherein performing target marker-point identification around the target landing point in the key frame image to obtain the marker points comprises:
finding the target region in the key frame image using color and cross-shape information;
searching for salient points in the target region, clustering them, and determining the four endpoints of the target;
judging whether the four endpoints can be fitted to a parallelogram; if so, the target is a crosshair target, and the four endpoints are identified as its four vertices.
2. The target tracking-based off-target amount calculating method according to claim 1, wherein, when moving-target detection is performed in the image sequence, detecting the moving target in the first frame image of the image sequence using a Gaussian mixture model comprises:
performing background training on the image sequence and modeling the complete background image to obtain a background model;
determining the foreground from the first frame image and the background model by the Gaussian-mixture method;
and identifying the foreground as the moving target.
3. The target tracking-based off-target amount calculating method according to claim 1, wherein the moving target is tracked in real time using a SiamRPN network, the SiamRPN network comprising a Siamese network for extracting target feature information and an RPN network for classification and regression.
4. The target tracking-based off-target amount calculating method according to claim 3, wherein tracking the moving target in real time using the SiamRPN network comprises:
calibrating the position of the moving target in the first frame image of the image sequence;
extracting the target feature information φ(z) from the first frame image with the Siamese network;
training a regression model for the RPN network on a number of samples around the moving target;
cropping the high-probability region of the target in the current frame image to obtain a cropped image, the high-probability region being the area surrounding the target position in the previous frame image;
extracting the region feature information φ(x) from the cropped image with the Siamese network;
performing classification and regression on the target feature information φ(z) and the region feature information φ(x) with the RPN network to obtain the box position and score of every anchor;
selecting the anchor with the highest score as the predicted target position and outputting it as the network prediction;
judging the confidence of the current frame's network prediction and, if it exceeds a preset threshold, updating the template feature information φ(z);
and repeating the above steps until the image sequence has been read completely.
5. A target tracking-based off-target amount calculation apparatus, comprising:
a key frame identification module for identifying the key frame image in the image sequence, the module being further configured to detect a moving target in the image sequence, to track the moving target, and, when the tracking result indicates that the target has disappeared, to determine the frame immediately preceding the disappearance in the image sequence as the key frame image;
a landing point determination module for determining the target landing point based on the target position in the key frame image;
a marker point determination module for performing target marker-point identification around the target landing point in the key frame image to obtain the marker points, the module being configured to find the target region in the key frame image using color and cross-shape information, to search for salient points in the target region, cluster them, and determine the four endpoints of the target, and to judge whether the four endpoints can be fitted to a parallelogram, in which case the target is a crosshair target and the four endpoints are identified as its four vertices;
and an off-target amount determination module for computing the off-target amount from the pixel coordinates of the marker points and the world coordinates of the target landing point, the module being further configured to:
calibrate the world coordinates of the four vertices of the crosshair target in advance;
obtain the pixel coordinates of the four vertices of the crosshair target in the key frame image;
compute a homography transformation matrix from the world coordinates and pixel coordinates of the four vertices;
determine the world coordinates of the target landing point from its pixel coordinates and the homography transformation matrix;
and determine the off-target amount from the world coordinates of the target landing point.
6. An electronic device, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
7. A computer readable medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any of claims 1-4.
CN202410037752.9A 2024-01-10 2024-01-10 Off-target amount calculating method, device, equipment and storage medium based on target tracking Active CN117553756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410037752.9A CN117553756B (en) 2024-01-10 2024-01-10 Off-target amount calculating method, device, equipment and storage medium based on target tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410037752.9A CN117553756B (en) 2024-01-10 2024-01-10 Off-target amount calculating method, device, equipment and storage medium based on target tracking

Publications (2)

Publication Number Publication Date
CN117553756A 2024-02-13
CN117553756B 2024-03-22

Family

ID=89811409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410037752.9A Active CN117553756B (en) 2024-01-10 2024-01-10 Off-target amount calculating method, device, equipment and storage medium based on target tracking

Country Status (1)

Country Link
CN (1) CN117553756B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108731587A (en) * 2017-04-14 2018-11-02 中交遥感载荷(北京)科技有限公司 A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
CN109212545A (en) * 2018-09-19 2019-01-15 长沙超创电子科技有限公司 Multiple source target following measuring system and tracking based on active vision
CN109903305A (en) * 2019-01-24 2019-06-18 天津国为信息技术有限公司 Line style target impact point positioning method based on aerial three-dimensional localization
CN110378264A (en) * 2019-07-08 2019-10-25 Oppo广东移动通信有限公司 Method for tracking target and device
CN113819890A (en) * 2021-06-04 2021-12-21 腾讯科技(深圳)有限公司 Distance measuring method, distance measuring device, electronic equipment and storage medium
WO2022036478A1 (en) * 2020-08-17 2022-02-24 江苏瑞科科技有限公司 Machine vision-based augmented reality blind area assembly guidance method
CN115761693A (en) * 2022-11-01 2023-03-07 广汽乘用车有限公司 Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image
CN116977902A (en) * 2023-08-14 2023-10-31 长春工业大学 Target tracking method and system for on-board photoelectric stabilized platform of coastal defense


Also Published As

Publication number Publication date
CN117553756A (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US11255973B2 (en) Method and apparatus for extracting lane line and computer readable storage medium
US8249320B2 (en) Method, apparatus, and program for measuring sizes of tumor regions
CN112785625B (en) Target tracking method, device, electronic equipment and storage medium
EP2874097A2 (en) Automatic scene parsing
CN109425348B (en) Method and device for simultaneously positioning and establishing image
US20190206135A1 (en) Information processing device, information processing system, and non-transitory computer-readable storage medium for storing program
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
US8571303B2 (en) Stereo matching processing system, stereo matching processing method and recording medium
CN110647886A (en) Interest point marking method and device, computer equipment and storage medium
CN110956100A (en) High-precision map generation method and device, electronic equipment and storage medium
TW201531868A (en) Multiview pruning of feature database for object recognition system
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
CN110245566B (en) Infrared target remote tracking method based on background features
CN111784737A (en) Automatic target tracking method and system based on unmanned aerial vehicle platform
CN114279433A (en) Map data automatic production method, related device and computer program product
CN103455815A (en) Self-adaptive license plate character segmentation method in complex scene
CN113091757A (en) Map generation method and device
CN111563550A (en) Sperm morphology detection method and device based on image technology
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
US9286543B2 (en) Characteristic point coordination system, characteristic point coordination method, and recording medium
CN112990101B (en) Facial organ positioning method based on machine vision and related equipment
CN117553756B (en) Off-target amount calculating method, device, equipment and storage medium based on target tracking
CN113837044A (en) Organ positioning method based on ambient brightness and related equipment
CN116182831A (en) Vehicle positioning method, device, equipment, medium and vehicle
CN110910379B (en) Incomplete detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant