CN117011296A - Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod - Google Patents

Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod

Info

Publication number
CN117011296A
CN117011296A
Authority
CN
China
Prior art keywords
image
target
frame
video
center point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311278682.8A
Other languages
Chinese (zh)
Inventor
孙胜春
鞠汉青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Tongshi Optoelectronic Technology Co ltd
Original Assignee
Changchun Tongshi Optoelectronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Tongshi Optoelectronic Technology Co ltd filed Critical Changchun Tongshi Optoelectronic Technology Co ltd
Priority to CN202311278682.8A priority Critical patent/CN117011296A/en
Publication of CN117011296A publication Critical patent/CN117011296A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/194 Segmentation; edge detection involving foreground-background segmentation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/10016 Video; image sequence
    • G06T2207/20132 Image cropping
    • G06T2207/20216 Image averaging

Abstract

A method, equipment and storage medium for rapid detection of the tracking precision of a photoelectric pod belong to the technical field of image processing and solve the problems of the existing detection methods: low speed, low precision, low efficiency and strong dependence on subjective human judgment. The key points of the invention are as follows: read a video file and preprocess it to obtain a preprocessed video file; cut a segment of effective video out of the preprocessed video file and obtain each frame image of the effective video; acquire the target center-point coordinates of each frame image with a fast target center-finding algorithm based on image grayscale; obtain pixel offsets from the target center-point coordinates of each frame image; obtain a tracking error index from the pixel offsets; and measure the tracking precision according to the tracking error index. The method is suitable for scenarios requiring rapid detection of the tracking precision of a photoelectric pod.

Description

Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to the technical field of photoelectric reconnaissance and surveillance.
Background
With the development of photoelectric information technology and unmanned systems, the photoelectric pod has become an important airborne hardware system. Equipped with detection devices such as a visible-light camera, an infrared thermal imager and a laser rangefinder, the platform can acquire stable image information day and night and provide functions such as target ranging, target positioning and automatic tracking.
To date, the photoelectric pod has become a key component of photoelectric reconnaissance and the core equipment of unmanned reconnaissance systems; its applications reach into public security, firefighting, border inspection, forest fire prevention, emergency rescue, agriculture and other fields. With the deepening application of new-generation information technology, photoelectric pods continue to develop toward intelligence, miniaturization and integration.
As photoelectric pods become more intelligent, the requirements on their target-tracking function keep rising. Tracking precision is an important index of how well a photoelectric pod tracks a target. Human inspection can only give a subjective judgment of "tracked well" or "tracked poorly" and cannot evaluate the tracking result numerically. Existing methods based on image row or column summation often produce false detections when locating the target center-point coordinates; the located center point deviates from the center of the target crosshair, so the computed result is inaccurate.
Disclosure of Invention
The invention aims to solve the problems of the existing photoelectric pod tracking-precision detection methods: low speed, low precision, low efficiency and strong dependence on subjective human judgment.
A method for rapid detection of the tracking precision of a photoelectric pod comprises the following steps:
reading a video file and preprocessing it to obtain a preprocessed video file;
cutting a segment of effective video out of the preprocessed video file and obtaining each frame image of the effective video;
acquiring the target center-point coordinates of each frame image with a fast target center-finding algorithm based on image grayscale;
obtaining pixel offsets from the target center-point coordinates of each frame image;
obtaining a tracking error index from the pixel offsets;
and measuring the tracking precision according to the tracking error index.
Further, the preprocessing comprises: converting the video file into a single-channel grayscale video.
Further, the effective video is not less than 300 frames.
Further, the fast target center-finding algorithm based on image grayscale comprises:
cropping the video to remove irrelevant background and retain the target area;
binarizing the image;
and locating the center-point coordinates.
Further, when the video is cropped to remove irrelevant background, the width and height of the cropped image are 0.8m to 1.0m (80% to 100% of m).
Further, the pixel area covered by the target crosshair in the target area is m×m.
Further, the pixel offsets include: the pixel offset between the target center-point coordinates of each frame image and those of the first frame image, and the target center-point pixel offset of the whole effective video.
The invention also provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and when the processor runs the computer program stored in the memory, the processor performs the above method for rapid detection of the tracking precision of a photoelectric pod.
The invention also provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed, performs the above method for rapid detection of the tracking precision of a photoelectric pod.
A photoelectric pod tracking-precision rapid detection system comprises:
a preprocessing module, configured to read a video file and preprocess it to obtain a preprocessed video file;
an interception module, configured to cut a segment of effective video out of the preprocessed video file and obtain each frame image of the effective video;
a center-finding module, configured to obtain the target center-point coordinates of each frame image with the fast target center-finding algorithm based on image grayscale;
a calculation module, configured to obtain pixel offsets from the target center-point coordinates of each frame image and to obtain a tracking error index from the pixel offsets;
and a measurement module, configured to measure the tracking precision according to the tracking error index.
Compared with the prior art, the invention has the following technical effects:
the invention provides a target quick heart-finding algorithm, which can quickly and accurately locate the center point coordinates of a target cross wire, thereby effectively solving the problem of low calculation tracking precision caused by inaccurate target center point location in the prior art, and the measurement precision of the invention is as high as 89.7%.
The invention provides a tracking error index, calculates and measures the tracking precision of the photoelectric pod, and comprehensively considers the influence of the field angle, the pixel size of the detector and the target pixel offset on the tracking precision.
The invention can numerically measure the tracking precision of the photoelectric pod, and avoid that human eye detection can only give out fuzzy evaluation of tracking quality.
The method is suitable for rapidly detecting the tracking precision scene of the photoelectric pod.
Drawings
FIG. 1 is a flowchart of a method for quickly detecting tracking accuracy of an optoelectronic pod according to an embodiment;
FIG. 2 is a pre-processed image according to the fourth embodiment;
fig. 3 is a pre-processed image and calculated target center point as described in embodiment four.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. All other embodiments obtained by those skilled in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
Embodiment one:
the present embodiment will be described with reference to fig. 1.
The method for rapid detection of the tracking precision of a photoelectric pod in this embodiment comprises the following steps:
reading a video file and preprocessing it to obtain a preprocessed video file;
cutting a segment of effective video out of the preprocessed video file and obtaining each frame image of the effective video;
acquiring the target center-point coordinates of each frame image with a fast target center-finding algorithm based on image grayscale;
obtaining pixel offsets from the target center-point coordinates of each frame image;
obtaining a tracking error index from the pixel offsets;
and measuring the tracking precision according to the tracking error index.
Specifically:
the video file format is MP4 or AVI format.
The pretreatment comprises the following steps: and converting the video file into a single-channel gray scale video.
The number of frames of the truncated effective video is 300 frames or not less than 300 frames.
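As a concrete illustration of the preprocessing step, a single RGB frame can be reduced to one gray channel with the standard ITU-R BT.601 luminance weights (the same weights OpenCV's cv2.cvtColor with COLOR_BGR2GRAY uses). The nested-list frame layout and the helper name are only for this sketch; a real implementation would operate on decoded video arrays.

```python
def to_gray(frame):
    """Reduce one RGB frame (rows of (r, g, b) tuples) to a single gray
    channel using the ITU-R BT.601 luminance weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]

# A 2x2 frame: white, black, pure red, pure green.
frame = [[(255, 255, 255), (0, 0, 0)],
         [(255, 0, 0), (0, 255, 0)]]
gray = to_gray(frame)  # -> [[255, 0], [76, 150]]
```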
Embodiment two:
the present embodiment is a further example of the target fast-centering algorithm based on image gray scale described in the method for fast detecting tracking accuracy of an optoelectronic pod according to the first embodiment.
The target rapid searching algorithm based on the image gray scale in the embodiment comprises the following steps:
cutting the video to remove irrelevant background and reserving a target area;
binarizing the image;
and positioning the coordinates of the central point.
Wherein: the video is cut to remove irrelevant background, the target area is reserved, the calculated amount can be reduced, and the calculation speed is improved;
the cutting video is used for removing irrelevant background, the pixel area covered by the target in the image is m multiplied by m, the width and the height of the cut image are 80% -100% -m, wherein m is the side length of the square pixel area covered by the target, namely the number of covered pixels in the width or height direction.
Image binarization can whiten target cross filaments and darken other areas, and the step can be manually thresholded.
Positioning the center point coordinates is completed by searching the left half high point and the right half high point of the image row sum, the image column sum.
Embodiment III:
the present embodiment further illustrates a pixel offset obtained according to the coordinates of the target center point of each frame of image in the method for quickly detecting tracking accuracy of an optoelectronic pod according to the first embodiment.
The pixel offset in this embodiment includes a pixel offset between the target center point coordinate of each frame of image and the target center point coordinate of the first frame of image and a target center point pixel offset of the whole effective video.
Specifically:
the method for calculating the pixel offset in the horizontal or vertical direction is the average value of the absolute values of the coordinate differences of the central points between the image frames in the direction;
the pixel offset of the whole effective video is calculated by the arithmetic square root of the sum of the squares of the offsets in the horizontal and vertical directions.
Embodiment four:
the present embodiment will be described with reference to fig. 2 and 3.
This embodiment is a further example of the method for rapid detection of the tracking precision of a photoelectric pod described in embodiments one to three.
The method in this embodiment comprises the following specific steps:
Step 1: read the video file and preprocess it:
read the original video data from the video file, then preprocess the original video by converting it into a single-channel grayscale video; the preprocessed image is shown in fig. 2.
Step 2: intercept the effective video:
the video as read generally contains frames recorded before tracking started or after tracking stopped. Such frames are useless data and must be removed before tracking-precision detection; a segment of stable, continuously tracked effective video is cut out of the preprocessed video, generally not less than 300 frames (10 s at 30 fps).
Step 3: calculate the center-point coordinates of the target crosshair in the image:
to find the center-point coordinates of the target crosshair in the tracking video quickly and accurately, a fast target center-finding algorithm based on image grayscale is proposed. The algorithm borrows the idea of peak finding in spectral curves to locate the horizontal and vertical coordinates of the crosshair center quickly. The algorithm flow is as follows:
3.1: clipping the image removes the background, leaving only the surrounding areas of the target:
the image is subjected to region clipping, irrelevant backgrounds are removed, surrounding regions of targets are reserved, influence of the irrelevant backgrounds on detection results is reduced, calculated amount is reduced, and algorithm efficiency is improved;
for an original image of one frame, the resolution is w×h, wherein w is the pixel column number, h is the pixel row number, and the pixel area covered by the target cross silk is m×m, so that the width and the height of the cut image are generally 80% -100% ×m;
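The cropping rule can be sketched as follows. The choice of crop center (cx, cy), e.g. the image center where the tracking crosshair is overlaid, and the helper name are assumptions of this sketch, not fixed by the patent:

```python
def crop_around(img, cx, cy, m, scale=0.9):
    """Cut a window of side scale*m (scale in 0.8-1.0, i.e. 80%-100% of m)
    centred on (cx, cy) out of a grayscale image stored as nested lists.
    Also returns the upper-left corner (x0, y0), which step 4 needs in
    order to map a centre found inside the crop back to full-image
    coordinates."""
    half = int(round(scale * m / 2))
    y0, y1 = max(0, cy - half), min(len(img), cy + half)
    x0, x1 = max(0, cx - half), min(len(img[0]), cx + half)
    return [row[x0:x1] for row in img[y0:y1]], (x0, y0)

img = [[10 * r + c for c in range(10)] for r in range(10)]   # 10x10 test image
sub, corner = crop_around(img, cx=5, cy=5, m=4, scale=1.0)   # 4x4 crop, corner (3, 3)
```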
3.2: binarizing the image to whiten target cross filaments and darken other areas:
because the infrared image is divided into a black heat mode and a white heat mode, two corresponding image binarization modes are also provided, namely, THRESH_BINARY and THRESH_BINARY_INV.
The THRESH_BINARY operation can be expressed as

dst(x, y) = 255 if src(x, y) > thresh, else 0 (1)

where src(x, y) is the gray value of pixel (x, y) in the original image, dst(x, y) is the gray value of pixel (x, y) after binarization, and thresh is the chosen binarization threshold.
That is, if the gray value of an original pixel is greater than thresh, the binarized gray value is set to 255; otherwise it is set to 0.
The THRESH_BINARY_INV operation can be expressed as

dst(x, y) = 0 if src(x, y) > thresh, else 255 (2)

where the symbols are as in formula (1). If the gray value of an original pixel is greater than thresh, the binarized gray value is set to 0; otherwise it is set to 255.
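Both binarization modes of formulas (1) and (2) fit in a few lines; the behaviour mirrors OpenCV's cv2.threshold with the THRESH_BINARY and THRESH_BINARY_INV flags, though the nested-list layout and function name here are only for this sketch:

```python
def binarize(img, thresh, invert=False):
    """Apply formula (1) (invert=False, THRESH_BINARY, white-hot targets)
    or formula (2) (invert=True, THRESH_BINARY_INV, black-hot targets)
    to a grayscale image stored as nested lists."""
    hi, lo = (0, 255) if invert else (255, 0)
    return [[hi if px > thresh else lo for px in row] for row in img]

img = [[10, 200], [128, 129]]
white_hot = binarize(img, 128)               # -> [[0, 255], [0, 255]]
black_hot = binarize(img, 128, invert=True)  # -> [[255, 0], [255, 0]]
```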
3.3: calculating the ordinate of the center point of the target:
calculating the line sum of the binary image, and drawing the line number-line sum curve of the binary image, wherein the line number-line sum curve is a square wave curve similar to a convex shape;
calculating the maximum value MaxRowSum of the line sum, traversing the line sum according to the line number, and respectively searching the points of the line sum closest to 50% of the MaxRowSum at two sides of the highest point of the curve to serve as the left half highest point #,/>) And right half high point (/ -)>,/>) Wherein->Line number for left half high point, +.>Line number for right half high point, +.>Is the row sum of the left half high points, +.>Is the row sum of the right half high points; the row and relationship of the left half high point to its left and right two points can be expressed as:
(3)
wherein,and->Respectively the row sum of the left half high point and the right half high point.
The row and relationship of the right half-height point to its left and right two points can be expressed as:
(4)
wherein,and->Respectively the row sum of the left half high point and the right half high point.
(3) Calculating the ordinate of the center point of the target, the ordinate of the center point may be expressed as
(5)
(4) compute the column sums of the binary image and plot its column-number versus column-sum curve, which is likewise a square-wave-like, convex curve;
(5) compute the maximum column sum MaxColSum, traverse the column sums by column number, and on each side of the highest point of the curve find the point whose column sum is closest to 50% of MaxColSum. These are the left half-height point (x_l, C_l) and the right half-height point (x_r, C_r), where x_l and x_r are the column numbers of the left and right half-height points and C_l and C_r are their column sums. Writing C(x) for the column sum of column x, the left half-height point and its two neighbouring points satisfy

C(x_l - 1) < 0.5·MaxColSum ≤ C(x_l + 1) (6)

and the right half-height point and its two neighbouring points satisfy

C(x_r - 1) ≥ 0.5·MaxColSum > C(x_r + 1) (7)

(6) the abscissa of the target center point is then

x_c = (x_l + x_r) / 2 (8)
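The half-height-point search of formulas (3)-(8) can be sketched on a one-dimensional profile; applying it to the row sums gives the ordinate and to the column sums the abscissa. The function name and the tie-breaking behaviour of min/max are choices of this sketch:

```python
def half_max_center(sums):
    """Locate the target centre along one axis from the binary image's
    row-sum or column-sum profile: find the points closest to 50% of the
    profile maximum on each side of the peak (the left and right
    half-height points) and return the midpoint of their indices."""
    peak = max(range(len(sums)), key=lambda i: sums[i])
    half = 0.5 * sums[peak]
    left = min(range(peak + 1), key=lambda i: abs(sums[i] - half))
    right = min(range(peak, len(sums)), key=lambda i: abs(sums[i] - half))
    return (left + right) / 2

# A convex, square-wave-like profile whose plateau spans indices 3-6:
profile = [0, 0, 5, 10, 10, 10, 10, 5, 0, 0]
center = half_max_center(profile)  # half-height points at 2 and 7 -> 4.5
```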
the preprocessed image and the calculated center point of the target are shown in fig. 3.
Step 4: calculate the coordinates of the target crosshair center point in each image frame with the fast target center-finding algorithm of step 3, and calculate the pixel offset from the coordinates of the first frame:
4.1: traverse each frame image of the effective video intercepted in step 2, perform the operations of step 3 on each frame, and calculate the coordinates of the target crosshair center point in each frame image;
wherein the coordinates of the target crosshair center point in the i-th frame image can be expressed as

x_i = x_i^0 + x_i^c (9)
y_i = y_i^0 + y_i^c (10)

where x_i and y_i are the abscissa and ordinate of the target center point in the i-th frame image; (x_i^0, y_i^0) are the coordinates, before cropping, of the upper-left corner of the effective area cropped in step 3; and (x_i^c, y_i^c) are the coordinates of the crosshair center point computed in step 3 from the cropped image of the i-th frame.
4.2: calculating the pixel offset between the coordinates of the cross-hair central point of each frame of target and the coordinates of the central point of the first frame of target, wherein the pixel offset of the central point of the jth frame of target can be expressed as
(11)
(12)
Wherein, the method comprises the following steps of,/>)、(/>,/>) The actual coordinates of the center point of the targets of the first and j frames,pixel offset for the j-th frame target center point abscissa, +.>Pixel offset as the ordinate of the center point of the jth frame target;
4.3: calculating the target center point pixel offset of the whole effective video: the target center point pixel offset for the entire active video segment can be expressed as
(13)
(14)
(15)
Wherein,、/>pixel offset of the horizontal and vertical coordinates of the center point of the target cross wire; n is the number of frames of the intercepted effective video; />The pixel offset of the cross silk center point of the whole effective video target is calculated; k represents a kth frame image; />The center point abscissa of the k-th frame image target; />The abscissa of the center point of the 1 st frame image target; />The ordinate of the center point of the target image of the kth frame; />The ordinate of the center point of the target is the 1 st frame image.
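Formulas (11)-(15) amount to a few lines of arithmetic over the per-frame center coordinates. Dividing the sums by the full frame count N follows the "mean over image frames" definition of embodiment three; whether the first frame (whose offset is zero) is included in the average is not fully legible from the published text, so this is an assumption of the sketch:

```python
import math

def video_pixel_offset(centers):
    """Given the target-centre coordinates (x, y) of each frame of the
    effective video, return (dx, dy, p): the mean absolute offsets from
    the first frame along each axis (formulas (13)-(14)) and their
    root-sum-square, the whole-video pixel offset P (formula (15))."""
    x1, y1 = centers[0]
    n = len(centers)
    dx = sum(abs(x - x1) for x, _ in centers) / n
    dy = sum(abs(y - y1) for _, y in centers) / n
    return dx, dy, math.hypot(dx, dy)

dx, dy, p = video_pixel_offset([(100, 100), (103, 100), (97, 106)])
# dx = (0 + 3 + 3) / 3 = 2.0, dy = (0 + 0 + 6) / 3 = 2.0, p = sqrt(8)
```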
Step 5: convert the pixel offset into a tracking error. To measure the quality of the tracking precision, the concept of a tracking error index is proposed; it is calculated as

E = μ · Angle · S · P (16)

where E is the tracking error index; Angle is the current field of view of the effective video; S is the detector pixel size; P is the pixel offset of the whole effective video from formula (15); and μ is a constant conversion coefficient.
The larger the pixel offset of the tracking video, the field of view, or the detector pixel size, the larger the tracking error index; the smaller the final tracking error index, the higher the tracking precision.
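The tracking error index can be sketched numerically as below. The simple product form follows the stated monotonic dependence on pixel offset, field of view and detector pixel size, scaled by a constant conversion coefficient; where the published rendering of formula (16) is unclear, the grouping of factors and the name mu are assumptions of this sketch:

```python
def tracking_error_index(p, angle, s, mu=1.0):
    """Tracking error index in the spirit of formula (16): monotonically
    increasing in the whole-video pixel offset p, the current field of
    view angle, and the detector pixel size s; mu is a constant
    conversion coefficient.  The product form is an assumed sketch."""
    return mu * angle * s * p

e = tracking_error_index(p=2.0, angle=10.0, s=0.5)  # -> 10.0
```

A smaller index then corresponds to higher tracking precision, as the description states.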
The three sets of test results for the experiment of this embodiment are shown in the following table:
table 1 experimental test results

Claims (10)

1. A method for rapid detection of the tracking precision of a photoelectric pod, characterized by comprising the following steps:
reading a video file and preprocessing it to obtain a preprocessed video file;
cutting a segment of effective video out of the preprocessed video file and obtaining each frame image of the effective video;
acquiring the target center-point coordinates of each frame image with a fast target center-finding algorithm based on image grayscale;
obtaining pixel offsets from the target center-point coordinates of each frame image;
obtaining a tracking error index from the pixel offsets;
and measuring the tracking precision according to the tracking error index.
2. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 1, wherein the preprocessing comprises: converting the video file into a single-channel grayscale video.
3. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 1, wherein the effective video is not less than 300 frames.
4. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 1, wherein the fast target center-finding algorithm based on image grayscale comprises:
cropping the video to remove irrelevant background and retain the target area;
binarizing the image;
and locating the center-point coordinates.
5. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 4, wherein, when the video is cropped to remove irrelevant background, the width and height of the cropped image are 0.8m to 1.0m.
6. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 4, wherein the pixel area covered by the target crosshair in the target area is m×m.
7. The method for rapid detection of the tracking precision of a photoelectric pod according to claim 1, wherein the pixel offsets comprise: the pixel offset between the target center-point coordinates of each frame image and those of the first frame image, and the target center-point pixel offset of the whole effective video.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory stores a computer program, and when the processor runs the computer program stored in the memory, the processor performs the method for rapid detection of the tracking precision of a photoelectric pod according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed, performs the method for rapid detection of the tracking precision of a photoelectric pod according to any one of claims 1 to 7.
10. A photoelectric pod tracking-precision rapid detection system, the system comprising:
a preprocessing module, configured to read a video file and preprocess it to obtain a preprocessed video file;
an interception module, configured to cut a segment of effective video out of the preprocessed video file and obtain each frame image of the effective video;
a center-finding module, configured to obtain the target center-point coordinates of each frame image with the fast target center-finding algorithm based on image grayscale;
a calculation module, configured to obtain pixel offsets from the target center-point coordinates of each frame image and to obtain a tracking error index from the pixel offsets;
and a measurement module, configured to measure the tracking precision according to the tracking error index.
CN202311278682.8A 2023-10-07 2023-10-07 Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod Pending CN117011296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311278682.8A CN117011296A (en) 2023-10-07 2023-10-07 Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod


Publications (1)

Publication Number Publication Date
CN117011296A true CN117011296A (en) 2023-11-07

Family

ID=88576582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311278682.8A Pending CN117011296A (en) 2023-10-07 2023-10-07 Method, equipment and storage medium for quickly detecting tracking precision of photoelectric pod

Country Status (1)

Country Link
CN (1) CN117011296A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150103887A (en) * 2014-03-04 2015-09-14 국방과학연구소 Target position estimation equipment using imaging sensors for analyzing an accuracy of target tracking method
CN107478450A (en) * 2016-06-07 2017-12-15 长春理工大学 A kind of tracking accuracy detecting system with dynamic simulation target simulation function
CN112381190A (en) * 2020-11-03 2021-02-19 中交第二航务工程局有限公司 Cable force testing method based on mobile phone image recognition
CN112465871A (en) * 2020-12-07 2021-03-09 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Method and system for evaluating accuracy of visual tracking algorithm
CN112764012A (en) * 2020-12-23 2021-05-07 武汉高德红外股份有限公司 Photoelectric pod tracking simulation test and system
CN113108811A (en) * 2021-04-08 2021-07-13 西安应用光学研究所 Photoelectric turret tracking precision automatic analysis and calculation device
CN114002706A (en) * 2021-10-29 2022-02-01 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Measuring method and device of photoelectric sight-stabilizing measuring system and computer equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination