CN117522926B - Infrared light spot target identification and tracking method based on FPGA hardware platform - Google Patents


Info

Publication number
CN117522926B
Authority
CN
China
Prior art keywords
current frame
target
light spot
tracking
spots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410022616.2A
Other languages
Chinese (zh)
Other versions
CN117522926A (en)
Inventor
褚俊波
李和伦
李东晨
李非桃
高升久
冉欢欢
陈益
陈春
王丹
李毅捷
董平凯
陈未东
杨伟
夏添
罗瀚森
肖枭
何建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Original Assignee
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Desheng Xinda Brain Intelligence Technology Co ltd filed Critical Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority to CN202410022616.2A
Publication of CN117522926A
Application granted
Publication of CN117522926B

Classifications

    • G06T 7/246 (Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments)
    • G06T 5/20 (Image enhancement or restoration by the use of local operators)
    • G06T 7/136 (Segmentation; edge detection involving thresholding)
    • G06T 7/187 (Segmentation; edge detection involving region growing, region merging or connected component labelling)
    • G06T 7/254 (Analysis of motion involving subtraction of images)
    • G06V 10/147 (Image acquisition; details of sensors, e.g. sensor lenses)
    • G06V 10/25 (Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI])
    • G06V 10/955 (Hardware or software architectures specially adapted for image or video understanding, using specific electronic processors)
    • G06T 2207/10016 (Image acquisition modality: video; image sequence)
    • G06T 2207/10048 (Image acquisition modality: infrared image)

Abstract

The invention discloses an infrared light spot target identification and tracking method based on an FPGA hardware platform, belonging to the field of target identification. The method comprises the following steps: acquiring a video stream, wherein the video stream is captured by a camera having a lens and a CMOS sensor, with an optical filter mounted directly on the camera; detecting whether the current frame contains all target light spots; when the current frame contains all target light spots, and the previous frame of the current frame also contains all target light spots, generating a search window for each target light spot in the current frame; searching the region of the current frame corresponding to each search window, tracking being successful if all target light spots are found; and outputting the position coordinates of all target light spots in the current frame when tracking succeeds. The invention implements the digital image processing algorithm using a hardware description language.

Description

Infrared light spot target identification and tracking method based on FPGA hardware platform
Technical Field
The invention belongs to the field of target identification, and particularly relates to an infrared light spot target identification and tracking method based on an FPGA hardware platform.
Background
Target recognition and tracking is widely applied and plays an important role in industrial manufacturing and other scenarios. A vision system is equivalent to the eyes of an intelligent robot, giving the system bionic capabilities such as autonomous positioning, environment recognition, obstacle detection and target tracking. However, conventional recognition and tracking systems (using a computer or an embedded platform as the processor) suffer from poor real-time performance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an infrared light spot target identification and tracking method based on an FPGA hardware platform.
The aim of the invention is achieved by the following technical scheme: the infrared light spot target identification and tracking method based on the FPGA hardware platform is applied to indoor target tracking equipment and comprises the following steps:
acquiring a video stream, wherein the video stream is captured by a camera having a lens and a CMOS sensor, with an optical filter mounted directly on the camera;
detecting whether the current frame contains all target light spots;
when the current frame contains all target light spots, if the previous frame of the current frame also contains all target light spots, generating a search window for each target light spot in the current frame;
searching the region of the current frame corresponding to each search window; tracking is successful if all target light spots are found;
and outputting the position coordinates of all target light spots in the current frame when tracking succeeds.
Further, detecting whether the current frame contains all target light spots includes:
performing light spot enhancement processing on the current frame;
detecting all light spots in the current frame;
detecting a first parameter of each light spot in the current frame;
judging whether each light spot is valid according to its first parameter;
and determining whether the valid light spots include all target light spots.
Further, performing light spot enhancement processing on the current frame includes:
adjusting the brightness value of each first pixel in the current frame to a first preset value;
adjusting the brightness value of each second pixel in the current frame to a second preset value, the second preset value being larger than the first preset value;
wherein a first pixel is a pixel in the current frame whose brightness value is smaller than or equal to a background noise threshold, and a second pixel is a pixel whose brightness is larger than the background noise threshold.
Further, the first parameter includes centroid coordinates of the light spot, maximum and minimum position values of the light spot in the X-axis direction, and maximum and minimum position values of the light spot in the Y-axis direction.
Further, judging whether a light spot is valid according to the first parameter includes:
if centroid coordinates are detected for the light spot, the light spot is valid; otherwise, it is invalid.
Further, detecting the first parameter of each light spot in the current frame includes:
performing a global scan of the current frame with a connected domain analysis engine to obtain the first parameter of the light spot.
Further, generating a search window for each target light spot in the current frame includes:
when the current frame contains all target light spots, if the previous frame of the current frame also contains all target light spots, calculating the physical distance between the target light spots and the CMOS sensor;
calculating a first distance, wherein the first distance is the pixel displacement produced when the same target light spot, at that physical distance, moves at a preset speed between two adjacent frames of images;
determining the size of each search window in the current frame according to the first distance;
if the previous frame of the current frame does not contain all target light spots, setting the center position of each search window to the position of the corresponding target light spot;
if the previous frame of the current frame contains all target light spots, determining the position of each search window in the current frame according to the window's position in the previous frame and the first distance;
and generating the search window of each target light spot in the current frame according to the size and position of the search window.
Further, if the current frame contains all target light spots, the method is in a tracking state; otherwise, it is in a failure state.
Further, when in the failure state, the method continues to detect whether the current frame contains all target light spots.
The beneficial effects of the invention are as follows:
(1) Exploiting the advantages of the FPGA (small size, low power consumption, high speed, flexible configuration and convenient porting), the invention provides a multi-light-spot target recognition and tracking method based on the FPGA. The vision system hardware is designed around the FPGA and a CMOS sensor as the core platform, and the digital image processing algorithm is implemented in a hardware description language so as to accelerate the algorithm, thereby completing the application and development of a light spot target recognition and tracking system;
(2) The method achieves high real-time performance. Taking four target light spots as an example: a CMOS sensor generates images at 3840x2160 resolution and 200 frames per second. After an image is received, valid light spot signals are extracted and enhanced according to a set background noise threshold; 3x3 median filtering is applied to the enhanced spots to remove isolated spot pixels and fill in missing pixels within a spot; the image is then binarized to form a binary image of the spots, and morphological filtering shapes the spot edges; four connected domain search engines search for the spots, and spot validity is judged from each spot's boundary and size; when all four spots are valid, the spots are located and the A, B, C, D spot calibration is computed from the spot center values; after calibration, a search window is calculated from the center distance of spots A and D, forming a search window for each of the A, B, C, D spots; in the next image frame, it is checked whether the four spots A, B, C, D lie within their search windows: if so, the method is in a valid tracking state, otherwise in a tracking failure state. All of the above computation completes within 0.5 ms, and the result is output for use by a host computer.
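The preprocessing chain described in (2), threshold enhancement, 3x3 median filtering and binarization, can be modelled in software. The sketch below is a behavioural NumPy model, not the hardware implementation; the toy frame, the noise threshold of 50 and the function name are illustrative:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter: removes isolated bright pixels (it also rounds
    spot corners); edge pixels are left unchanged for simplicity."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

# Toy 8x8 frame: one 3x3 spot plus one isolated noise pixel.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 2:5] = 200
frame[6, 6] = 180

# Enhancement: background (<= noise threshold) to 0, spot pixels to 245.
enhanced = np.where(frame > 50, 245, 0).astype(np.uint8)
# Median filtering, then binarization for the connected-domain stage.
filtered = median3x3(enhanced)
binary = (filtered > 128).astype(np.uint8)
print(int(binary.sum()))   # the isolated noise pixel is gone, the spot core remains
```

In the hardware pipeline each of these stages is a streaming operator over the pixel clock; the software model only reproduces the per-frame result.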
Drawings
FIG. 1 is a flow chart of a method for identifying and tracking an infrared light spot target in the present invention;
FIG. 2 is a schematic diagram of an object of an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the embodiments. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort, based on the embodiments of the present invention, fall within the scope of the present invention.
Referring to fig. 1 to 2, the invention provides an infrared light spot target identification and tracking method based on an FPGA hardware platform, which comprises the following steps:
the infrared light spot target identification and tracking method based on the FPGA hardware platform is applied to indoor target tracking equipment.
As shown in fig. 1, the method for identifying and tracking the infrared light spot target based on the FPGA hardware platform includes S100 to S500.
S100, acquiring a video stream, wherein the video stream is captured by a camera having a lens and a CMOS sensor, with an optical filter mounted directly on the camera.
The specification of the optical filter can be selected and determined according to actual requirements.
S200, detecting whether all target light spots are contained in the current frame.
In some embodiments, detecting whether the current frame contains all target light spots includes: performing light spot enhancement processing on the current frame; detecting all light spots in the current frame; detecting a first parameter of each light spot in the current frame, the first parameter comprising the centroid coordinates of the light spot, its maximum and minimum position values in the X-axis direction, and its maximum and minimum position values in the Y-axis direction; judging whether each light spot is valid according to its first parameter; and determining whether the valid light spots include all target light spots.
When judging whether a light spot is valid, the following approach can be adopted: if centroid coordinates are detected for the light spot, the light spot is valid; otherwise, it is invalid.
When the valid light spots in the current frame include all target light spots, the current frame is considered to contain all target light spots; otherwise it is not. If the current frame does not contain all target light spots, part or all of the target light spots have moved out of the CMOS sensor's field of view, and spot tracking is in a failure state; only when the current frame contains all target light spots does the method enter the tracking state and proceed to the subsequent tracking steps.
In some embodiments, a connected domain analysis engine performs a global scan of the current frame to obtain the first parameter of a light spot. When there are multiple light spots, multiple connected domain analysis engines scan the image globally, one engine per spot, to obtain the first parameter of each; for example, with three light spots, three connected domain analysis engines perform the global scan and yield the first parameters of the three spots.
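What each connected domain analysis engine reports per spot (centroid plus X/Y minimum and maximum) can be modelled in software with a flood-fill labeller. The real engine is a streaming single-pass FPGA design, so the Python version below is only a behavioural reference; the data layout and names are illustrative:

```python
from collections import deque

def connected_spots(binary):
    """Label 4-connected components of a binary image (list of rows of 0/1)
    and return, per spot, its 'first parameter': centroid and X/Y min/max."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] and not seen[y0][x0]:
                pixels = []
                queue = deque([(y0, x0)])
                seen[y0][x0] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                xs = [x for _, x in pixels]
                ys = [y for y, _ in pixels]
                spots.append({
                    "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                    "x_min": min(xs), "x_max": max(xs),
                    "y_min": min(ys), "y_max": max(ys),
                })
    return spots

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 0, 0, 0, 1]]
spots = connected_spots(img)
print(len(spots))              # 2 spots found
print(spots[0]["centroid"])    # (1.5, 1.5) for the 2x2 spot
```

In hardware, running one engine per expected spot lets all first parameters be produced within a single frame scan, which is what makes the 0.5 ms budget feasible.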
In some embodiments, the spot enhancement process includes: adjusting the brightness value of each first pixel in the current frame to a first preset value; adjusting the brightness value of each second pixel in the current frame to a second preset value, the second preset value being larger than the first preset value; wherein a first pixel is a pixel whose brightness value is smaller than or equal to a background noise threshold, and a second pixel is a pixel whose brightness is larger than the background noise threshold.
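A minimal sketch of this two-level adjustment in pure Python. The values 0 and 245 match the concrete embodiment given later in the description; the noise threshold of 50 and the function name are illustrative assumptions:

```python
def enhance_spots(frame, noise_threshold, low=0, high=245):
    """Two-level spot enhancement: pixels at or below the background-noise
    threshold are set to `low`, brighter pixels to `high` (high > low)."""
    return [[low if px <= noise_threshold else high for px in row]
            for row in frame]

frame = [[10, 60, 200],
         [45, 50, 180],
         [ 0, 51, 255]]
print(enhance_spots(frame, noise_threshold=50))
```

Because every pixel is mapped independently, this step maps directly onto a per-pixel comparator in the FPGA data path.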
S300, when all target light spots are contained in the current frame, if the previous frame of the current frame contains all target light spots, a search window of each target light spot in the current frame is generated.
In this embodiment, a corresponding search window is generated for each target spot.
In some embodiments, generating a search window for each target light spot in the current frame includes: when the current frame contains all target light spots, if the previous frame of the current frame also contains all target light spots, calculating the physical distance between the target light spots and the CMOS sensor; calculating a first distance, i.e., the pixel displacement produced when the same target light spot, at that physical distance, moves at a preset speed between two adjacent frames; determining the size of each search window in the current frame from the first distance; if the previous frame of the current frame does not contain all target light spots, setting the center of each search window to the position of the corresponding target light spot; if the previous frame of the current frame contains all target light spots, determining the position of each search window in the current frame from the window's position in the previous frame and the first distance; and generating each target light spot's search window in the current frame from the window's size and position. That is, in this embodiment, no search window is generated for the first frame that contains all target light spots; search windows are generated starting from the second such frame.
When calculating the physical distance between the target light spots and the CMOS sensor: if there is a single target light spot, the physical distance is calculated from the size of the target light spot in the current frame; if there are multiple target light spots (two or more), the physical distance is calculated from the distance between two of them (in this embodiment, when there are multiple target light spots, their relative positions remain unchanged).
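The window-generation step can be sketched as follows. Here the inter-spot distance stands in for the physical-distance and preset-speed calculation, on the assumption that the per-frame pixel displacement scales with it; `alpha` and all other names are illustrative, not from the patent:

```python
import math

def search_windows(prev_positions, spot_distance, alpha):
    """Sketch of search-window generation: window half-extent is
    proportional to the distance between two reference spots."""
    size = alpha * spot_distance
    windows = {}
    for name, (x, y) in prev_positions.items():
        # Window centred on the spot's position in the previous frame,
        # extended by `size` up, down, left and right.
        windows[name] = (x - size, y - size, x + size, y + size)
    return windows

prev = {"A": (100.0, 100.0), "B": (140.0, 100.0),
        "C": (120.0, 135.0), "D": (120.0, 112.0)}
ad = math.dist(prev["A"], prev["D"])   # distance between spots A and D
wins = search_windows(prev, ad, alpha=0.5)
print(wins["A"])
```

On the first frame that contains all spots there is no previous position, so (as the paragraph above states) the window is simply centred on the spot's detected position.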
S400, searching in the area corresponding to the search window in the current frame, and if all target light spots are searched, successfully tracking.
S500, outputting the position coordinates of all target light spots in the current frame when tracking is successful.
The method is described below with reference to a specific example. In this embodiment, the target tracking device (such as a helmet with target recognition and tracking functions) is located in a cockpit, and the light source target consists of four 900 nm infrared LED lamps arranged in an equilateral triangle, one at each of the three vertices and one at the center, as shown in fig. 2. A high frame rate, high resolution visible-light CMOS sensor is used, and a 900 nm filter placed between the lens and the CMOS sensor allows only 900 nm infrared light to enter the system.
Because the photosensitive efficiency of the CMOS sensor at 900 nm is only 9% of that in its normal band, the imaged infrared light spots weaken rapidly and become uneven as the distance between the infrared LED lamp and the CMOS sensor increases. The target distance is judged from the intensity of the infrared light spots, and when the distance increases, the infrared light spots are processed by an enhancement algorithm so as to meet the spot brightness required by subsequent processing. Specifically, the brightness value of each first pixel in an image frame is set to 0 and the brightness value of each second pixel to 245, where first pixels are pixels whose brightness value is smaller than or equal to the background noise threshold and second pixels are pixels whose brightness is larger than the background noise threshold.
Four connected domain analysis engines perform a global scan of the image; when the scan completes, the centroid coordinates of the four infrared spots and each spot's maximum and minimum position values in the X and Y directions are located and output. If an infrared spot produces valid center data, it is considered valid; otherwise it is considered invalid, and invalid infrared spots are not tracked subsequently.
The physical distance from the infrared spots to the CMOS sensor is judged from the distance between infrared spot A and infrared spot D, and a search window of corresponding size is formed from the pixel displacement produced between two frames when the target moves at its preset speed at that distance. The search proceeds within the search window: if the infrared spot is found, tracking is judged successful; if not, tracking is judged failed.
SearchWindowSize = α × AD_distance
wherein x_A is the minimum position value of infrared spot A in the X-axis direction, x_D the minimum position value of infrared spot D in the X-axis direction, y_A the maximum position value of infrared spot A in the Y-axis direction, y_D the maximum position value of infrared spot D in the Y-axis direction, AD_distance the distance between infrared spot A and infrared spot D in the current frame (computed from these boundary values), α a scale factor, and SearchWindowSize the size of the search window.
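The size formula can be exercised numerically. In the sketch below the A-D distance is taken between the boundary values the connected domain engines report (an assumption; any consistent pair of boundary values would serve), and the scale factor α = 0.5 is purely illustrative:

```python
import math

def ad_distance(spot_a, spot_d):
    """Distance between two spots, taken here between their
    (x_min, y_max) boundary points."""
    return math.hypot(spot_a["x_min"] - spot_d["x_min"],
                      spot_a["y_max"] - spot_d["y_max"])

ALPHA = 0.5   # illustrative scale factor, not specified by the patent

spot_A = {"x_min": 100, "y_max": 220}
spot_D = {"x_min": 130, "y_max": 180}
size = ALPHA * ad_distance(spot_A, spot_D)
print(size)   # 0.5 * sqrt(30^2 + 40^2) = 25.0
```

Scaling the window to the A-D distance makes the search region shrink as the target moves away, matching the smaller per-frame pixel displacement of a distant target.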
Taking the positions of the four spots A, B, C and D in the previous frame as starting points, SearchWindowSize is added upward, downward, leftward and rightward to form each search window. If the positions of the four points A, B, C and D in the current frame lie within their search windows, tracking succeeds; otherwise tracking fails.
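The per-frame tracking decision then reduces to a containment test for every spot; a minimal sketch (all names illustrative):

```python
def in_window(pos, window):
    """True if a spot's current-frame position lies inside its search window."""
    x, y = pos
    x0, y0, x1, y1 = window
    return x0 <= x <= x1 and y0 <= y <= y1

def track(current_positions, windows):
    """Tracking succeeds only if every spot is found inside its own
    search window; otherwise the frame is a tracking failure."""
    return all(name in current_positions
               and in_window(current_positions[name], windows[name])
               for name in windows)

windows = {"A": (90, 90, 110, 110), "B": (130, 90, 150, 110)}
current = {"A": (101, 99), "B": (148, 95)}
print(track(current, windows))   # True: both spots inside their windows
```

A single spot leaving its window is enough to declare failure, after which the method falls back to full-frame detection until all spots are reacquired.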
When the infrared spots partially or fully move out of the CMOS sensor's field of view, infrared spot tracking is in a failure state. After the four infrared spots re-enter the sensor's field of view, they are automatically relocated and tracked, and the tracking state is entered. After entering the tracking state, the infrared spot positions are calibrated and the real-time position coordinates of the four spots are output at 200 frames per second for use by the host computer system.
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications and environments, and of alteration within the scope of the inventive concept described herein, in light of the above teachings or the skill or knowledge of the relevant art. All modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (8)

1. An infrared light spot target identification and tracking method based on an FPGA hardware platform, applied to indoor target tracking equipment, characterized by comprising the following steps:
acquiring a video stream, wherein the video stream is captured by a camera having a lens and a CMOS sensor, with an optical filter mounted directly on the camera;
detecting whether the current frame contains all target light spots;
when the current frame contains all target light spots, if the previous frame of the current frame also contains all target light spots, generating a search window for each target light spot in the current frame;
searching the region of the current frame corresponding to each search window; tracking is successful if all target light spots are found;
outputting the position coordinates of all target light spots in the current frame when tracking succeeds;
wherein generating a search window for each target light spot in the current frame comprises:
when the current frame contains all target light spots, if the previous frame of the current frame also contains all target light spots, calculating the physical distance between the target light spots and the CMOS sensor;
calculating a first distance, wherein the first distance is the pixel displacement produced when the same target light spot, at that physical distance, moves at a preset speed between two adjacent frames of images;
determining the size of each search window in the current frame according to the first distance;
if the previous frame of the current frame does not contain all target light spots, setting the center position of each search window to the position of the corresponding target light spot;
if the previous frame of the current frame contains all target light spots, determining the position of each search window in the current frame according to the window's position in the previous frame and the first distance;
and generating the search window of each target light spot in the current frame according to the size and position of the search window.
2. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 1, wherein detecting whether the current frame contains all target light spots comprises:
performing light spot enhancement processing on the current frame;
detecting all light spots in the current frame;
detecting a first parameter of each light spot in the current frame;
judging whether each light spot is valid according to its first parameter;
and determining whether the valid light spots include all target light spots.
3. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 2, wherein performing light spot enhancement processing on the current frame comprises:
adjusting the brightness value of each first pixel in the current frame to a first preset value;
adjusting the brightness value of each second pixel in the current frame to a second preset value, the second preset value being larger than the first preset value;
wherein a first pixel is a pixel in the current frame whose brightness value is smaller than or equal to a background noise threshold, and a second pixel is a pixel whose brightness is larger than the background noise threshold.
4. The method for identifying and tracking an infrared light spot target based on the FPGA hardware platform according to claim 2, wherein the first parameter includes centroid coordinates of the light spot, maximum and minimum position values of the light spot in an X-axis direction, and maximum and minimum position values of the light spot in a Y-axis direction.
5. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 2, wherein judging whether a light spot is valid according to the first parameter comprises:
if centroid coordinates are detected for the light spot, the light spot is valid; otherwise, it is invalid.
6. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 2, wherein detecting the first parameter of each light spot in the current frame comprises:
performing a global scan of the current frame with a connected domain analysis engine to obtain the first parameter of the light spot.
7. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 1, wherein the method is in a tracking state if the current frame contains all target light spots, and in a failure state otherwise.
8. The infrared light spot target identification and tracking method based on the FPGA hardware platform according to claim 7, wherein, when in the failure state, detection of whether the current frame contains all target light spots continues.
CN202410022616.2A 2024-01-08 2024-01-08 Infrared light spot target identification and tracking method based on FPGA hardware platform Active CN117522926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410022616.2A CN117522926B (en) 2024-01-08 2024-01-08 Infrared light spot target identification and tracking method based on FPGA hardware platform


Publications (2)

Publication Number Publication Date
CN117522926A 2024-02-06
CN117522926B 2024-04-02

Family

ID=89748100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410022616.2A Active CN117522926B (en) 2024-01-08 2024-01-08 Infrared light spot target identification and tracking method based on FPGA hardware platform

Country Status (1)

Country Link
CN (1) CN117522926B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010237169A (en) * 2009-03-31 2010-10-21 Topcon Corp Automatic tracking method and surveying device
CN103795467A (en) * 2013-11-05 2014-05-14 深圳光启创新技术有限公司 Method and apparatus for identifying visible light communication signal received by camera
CN106197403A (en) * 2016-08-31 2016-12-07 中国科学院长春光学精密机械与物理研究所 HTEM system gondola multiple spot attitude hot spot imaging measurement method and device
CN106648147A (en) * 2016-12-16 2017-05-10 深圳市虚拟现实技术有限公司 Space positioning method and system for virtual reality characteristic points
CN107506023A (en) * 2017-07-20 2017-12-22 武汉秀宝软件有限公司 A kind of method for tracing and system of metope image infrared ray hot spot
CN115439771A (en) * 2022-07-22 2022-12-06 太原理工大学 Improved DSST infrared laser spot tracking method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110049305B (en) * 2017-12-18 2021-02-26 西安交通大学 Self-correcting method and device for structured light depth camera of smart phone


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Ci et al., "Research on tracking algorithms for small light-spot targets in space laser communication systems" (空间激光通信系统光斑小目标跟踪算法研究), Telemetry and Remote Control (《遥测遥控》), Aug. 31, 2022, full text. *
Hardik, FNU, "Object Tracking Implementation on FPGA Platform using CMOS Camera and Servo Motors", CSU master's thesis, http://hdl.handle.net/10211.3/224578, 2023, full text. *

Also Published As

Publication number Publication date
CN117522926A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
JP4464686B2 (en) Real-time eye detection and tracking under various light conditions
CN107369159B (en) Threshold segmentation method based on multi-factor two-dimensional gray level histogram
JP3816887B2 (en) Apparatus and method for measuring length of vehicle queue
CN110543867A (en) crowd density estimation system and method under condition of multiple cameras
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN110189375B (en) Image target identification method based on monocular vision measurement
CN108288289B (en) LED visual detection method and system for visible light positioning
CN111144207A (en) Human body detection and tracking method based on multi-mode information perception
Fang et al. Laser stripe image denoising using convolutional autoencoder
CN108710879B (en) Pedestrian candidate region generation method based on grid clustering algorithm
CN112883986A (en) Static infrared target lamp identification method under complex background
CN112683228A (en) Monocular camera ranging method and device
CN117522926B (en) Infrared light spot target identification and tracking method based on FPGA hardware platform
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
JP5132509B2 (en) Moving object tracking device
CN113592947B (en) Method for realizing visual odometer by semi-direct method
CN115327572A (en) Method for detecting obstacle in front of vehicle
CN114511879A (en) Multisource fusion human body target detection method based on VIS-IR image
CN112417948B (en) Method for accurately guiding lead-in ring of underwater vehicle based on monocular vision
JP7426987B2 (en) Photography system and image processing device
Said et al. Real-time detection and classification of traffic light signals
CN113436252A (en) Pose identification method based on monocular vision
Liu et al. Lane line detection based on OpenCV
JP5047115B2 (en) Moving object tracking device
CN117075730B (en) 3D virtual exhibition hall control system based on image recognition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant