CN112396633B - Target tracking and track three-dimensional reproduction method and device based on single camera - Google Patents



Publication number
CN112396633B
CN112396633B (application CN202011136575.8A)
Authority
CN
China
Prior art keywords
camera
target
coordinate system
dimensional
image
Prior art date
Legal status
Active
Application number
CN202011136575.8A
Other languages
Chinese (zh)
Other versions
CN112396633A (en)
Inventor
杨健
宋红
李敏
王钤
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202011136575.8A priority Critical patent/CN112396633B/en
Publication of CN112396633A publication Critical patent/CN112396633A/en
Application granted granted Critical
Publication of CN112396633B publication Critical patent/CN112396633B/en
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

A target tracking and track three-dimensional reproduction method and a device based on a single camera are disclosed. The method comprises the following steps: (1) Calibrating a camera, and establishing a homogeneous transformation matrix of a camera coordinate system, an imaging plane coordinate system and a three-dimensional space coordinate system; (2) Collecting a video image of a region to be monitored by an image collecting card and a camera; (3) Extracting characteristic information of the target in the wave gate; (4) Performing a correlation filtering operation using the extracted features and corresponding features in the target wave gate in the search frame; (5) Solving the response values of the target at different positions in the search area; (6) Determining the size and the position of a target frame, and determining the coordinates of the target centroid in the image; (7) Transforming the image centroid coordinates of step (6) according to the homogeneous transformation obtained in step (1); (8) Calculating, by the method of step (7), the coordinates of the target in the three-dimensional space; and (9) Outputting and displaying the calculation result of step (8).

Description

Target tracking and track three-dimensional reproduction method and device based on single camera
Technical Field
The invention relates to the technical field of target tracking and coordinate conversion, in particular to a target tracking and track three-dimensional reproducing method based on a single camera and a target tracking and track three-dimensional reproducing device based on the single camera.
Background
Vision-based target tracking has already been applied successfully in many fields, such as video surveillance and visual navigation. Connecting the centroids of a target across image frames yields the target's motion track in the image, and analysing this track makes it possible to identify the target's behaviour or intention. The topic has therefore attracted wide attention from scholars at home and abroad and from related research institutions.
However, because of the perspective effect in image formation, the motion track of a target in real three-dimensional space can differ greatly from the track observed in the image. This makes intuitive judgement difficult and greatly complicates the identification of target behaviour or intention from the image track alone.
Disclosure of Invention
In order to overcome these defects of the prior art, the invention provides a target tracking and track three-dimensional reproduction method based on a single camera. Using only a single camera, it achieves rapid tracking of a target and converts the target's coordinates in the image into three-dimensional space coordinates, which makes further analysis of the target's motion simple and can increase the accuracy of target behaviour recognition.
The technical scheme of the invention is as follows: the target tracking and track three-dimensional reproduction method based on the single camera comprises the following steps:
(1) Calibrating a camera, and establishing a homogeneous transformation matrix of a camera coordinate system, an imaging plane coordinate system and a three-dimensional space coordinate system;
(2) Collecting a video image of a region to be monitored by an image collecting card and a camera;
(3) Extracting characteristic information of a target in a wave gate;
(4) Performing relevant filtering operation by using the extracted features and corresponding features in the target wave gate in the search frame;
(5) Calculating response values of the target at different positions in the search area according to the calculation result of the step (4);
(6) Determining the size and the position of a target frame according to the calculation result of the step (5), and determining the coordinates of the target centroid in the image;
(7) Performing coordinate transformation on the centroid coordinate of the target in the image in the step (6) according to the homogeneous transformation obtained in the step (1);
(8) Calculating according to the method provided in the step (7) to obtain the coordinates of the target in the three-dimensional space;
(9) Outputting and displaying the calculation result of step (8).
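The correlation-filtering loop of steps (3)–(6) can be sketched in a few lines of numpy, under the assumption of a MOSSE-style filter; the function names, patch sizes, and random data below are illustrative, not the patent's specific variant:

```python
import numpy as np

def correlation_response(filter_h, feature_patch):
    """Correlate a learned filter with a feature patch in the Fourier
    domain (steps 4-5): the response peaks at the most likely target shift."""
    F = np.fft.fft2(feature_patch)
    H = np.fft.fft2(filter_h)
    # Multiplying by the conjugate filter in the frequency domain is
    # equivalent to circular correlation in the spatial domain.
    return np.real(np.fft.ifft2(np.conj(H) * F))

def locate_peak(response):
    """Step 6: the peak of the response map gives the new target position."""
    return np.unravel_index(np.argmax(response), response.shape)

# Toy check: correlating a patch with itself (auto-correlation) must peak
# at zero shift, i.e. at index (0, 0) of the circular response map.
rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
resp = correlation_response(patch, patch)
dy, dx = locate_peak(resp)
```

In a full tracker the filter would be learned from the previous frame's gate and updated online; only the response-and-peak step is shown here.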
By calibrating the camera, the invention establishes the homogeneous transformation relation among the camera coordinate system, the imaging plane coordinate system and the three-dimensional space coordinate system. The video acquisition card encodes and decodes the video collected by the camera and transmits it to the computer through the specified interface. The computer tracks the target with a correlation-filtering tracking algorithm and converts the target centroid from image coordinates to three-dimensional space coordinates for display. The target is thus tracked rapidly with only a single camera while its image coordinates are converted into three-dimensional space coordinates, which makes further analysis of the target's motion simple and can increase the accuracy of target behaviour recognition.
There is also provided a single-camera based target tracking and trajectory three-dimensional reconstruction apparatus, comprising: the system comprises a camera (11), a video acquisition device (12) and a computer (13);
the camera is a photoelectric sensing device: it maps the optical signal of the area to be monitored onto an electronic imaging device, where the optical signal is converted into a corresponding electrical signal, quantized, and encoded;
the video acquisition card is used for acquiring and compressing a video coding signal output by the camera and transmitting a target video to the computer for processing according to a specified communication protocol and a transmission interface between the video acquisition card and the computer;
the computer processes the video information obtained by the video acquisition card to obtain the position coordinates of the target in each frame of the video, converts the coordinates of the target in the image into a three-dimensional space according to the calibration result, and displays and outputs the three-dimensional space.
Drawings
FIG. 1 is a connection diagram of hardware devices required for target tracking and three-dimensional trajectory reconstruction in the present invention.
Fig. 2 is a flow chart of the single-camera-based target tracking and trajectory three-dimensional reconstruction method of the present invention.
Fig. 3 is a schematic diagram of the relationship between the coordinate systems used for camera calibration in the present invention.
Fig. 4 is a schematic diagram of the relationship between the imaging plane coordinate system and the pixel coordinate system established by the camera calibration in the present invention.
Detailed Description
As shown in fig. 1, the method for tracking a target and three-dimensionally reproducing a track based on a single camera includes the following steps:
(1) Calibrating a camera, and establishing a homogeneous transformation matrix of a camera coordinate system, an imaging plane coordinate system and a three-dimensional space coordinate system;
(2) Collecting a video image of a region to be monitored by an image collecting card and a camera;
(3) Extracting characteristic information of a target in a wave gate;
(4) Performing relevant filtering operation by using the extracted features and corresponding features in the target wave gate in the search frame;
(5) Calculating response values of the target at different positions in the search area according to the calculation result of the step (4);
(6) Determining the size and the position of the target frame according to the calculation result of the step (5), and determining the coordinates of the target centroid in the image;
(7) Performing coordinate transformation on the centroid coordinate of the target in the image in the step (6) according to the homogeneous transformation obtained in the step (1);
(8) Calculating according to the method provided in the step (7) to obtain the coordinates of the target in the three-dimensional space;
(9) Outputting and displaying the calculation result of step (8).
By calibrating the camera, the invention establishes the homogeneous transformation relation among the camera coordinate system, the imaging plane coordinate system and the three-dimensional space coordinate system. The video acquisition card encodes and decodes the video collected by the camera and transmits it to the computer through the specified interface. The computer tracks the target with a correlation-filtering tracking algorithm and converts the target centroid from image coordinates to three-dimensional space coordinates for display. The target is thus tracked rapidly with only a single camera while its image coordinates are converted into three-dimensional space coordinates, which makes further analysis of the target's motion simple and can increase the accuracy of target behaviour recognition.
Preferably, the calibration of the camera in step (1) accurately determines the internal and external parameters of the camera model; finding fast and effective camera calibration methods remains an important issue in computer vision applications. The invention adopts a linear camera imaging model based on Tsai's two-step method (see, e.g., the patent "An improved pose estimation method based on the Tsai algorithm", CN 201610188763.2). A chessboard plane is used to manually measure the position coordinates of the target in three-dimensional space and the corresponding pixel coordinates in the image coordinate system, a correspondence is established, and the coefficients of the coordinate transformation model are identified with the Levenberg-Marquardt method.
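Tsai-style calibration first computes a linear estimate of the projection and then refines it with Levenberg-Marquardt. A minimal numpy sketch of just the linear (DLT) estimation step on synthetic correspondences might look like this; the ground-truth matrix and points are invented for illustration, not calibration data from the patent:

```python
import numpy as np

def dlt_projection_matrix(world_pts, pixel_pts):
    """Linear estimate of the 3x4 projection matrix from 3D-2D point
    correspondences: each pair contributes two rows of the homogeneous
    system A m = 0, and the null-space vector is M up to scale."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        P = [X, Y, Z, 1.0]
        rows.append(P + [0.0] * 4 + [-u * p for p in P])
        rows.append([0.0] * 4 + P + [-v * p for p in P])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)          # smallest-singular-value vector

def project(M, pt):
    """Homogeneous projection followed by division by the depth term."""
    uvw = M @ np.append(pt, 1.0)
    return uvw[:2] / uvw[2]

# Synthetic ground truth: an illustrative intrinsic/extrinsic combination.
M_true = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 5.0]])
world = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 1), (1, 2, 3), (2, 1, 0.5)]
pixels = [project(M_true, np.array(p, dtype=float)) for p in world]
M_est = dlt_projection_matrix(world, pixels)
```

With noiseless, non-coplanar points the linear estimate already reprojects exactly; in practice the Levenberg-Marquardt step would refine it against noisy chessboard measurements.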
The imaging model of the camera in the ideal case is considered to be a linear model, as shown in fig. 3. The image point $P_u$ is the intersection of the line connecting the object point $P$ with the optical centre $O$ of the camera and the imaging plane. The relationship between the object point $P$ and the image point $P_u$ can be expressed by the following coordinate transformations:
preferably, the relationship between the camera coordinate system and the three-dimensional space coordinate system in step (1) is represented by a rotation matrix R and a translation vector t, and the transformation is formula (1)
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \tag{1}$$
Its homogeneous coordinate is expressed as formula (2)
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{2}$$
Wherein t is a three-dimensional translation vector, $t = [t_x \; t_y \; t_z]^T$, $0 = [0 \; 0 \; 0]^T$; the rotation matrix R is a 3 × 3 orthonormal matrix.
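Equations (1)–(2) can be exercised with a few lines of numpy; the rotation and translation below are illustrative values, not calibration results from the patent:

```python
import numpy as np

def homogeneous(R, t):
    """Pack rotation R (3x3) and translation t (3,) into the 4x4
    homogeneous transformation matrix of eq. (2)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative pose: 90-degree rotation about the Z axis plus a translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

Pw = np.array([1.0, 0.0, 0.0, 1.0])   # world point in homogeneous form
Pc = homogeneous(R, t) @ Pw           # the same point in camera coordinates
```

Here the world point (1, 0, 0) rotates onto the Y axis and is then shifted by t, giving camera coordinates (1, 3, 3).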
Preferably, in the imaging model of the camera, the imaging plane coordinate system is a two-dimensional rectangular coordinate system whose origin $O_I$ is the intersection of the $Z_c$ axis of the camera coordinate system with the imaging plane; its X axis is parallel to $X_c$, its Y axis is parallel to $Y_c$, and $|OO_I|$ equals the focal length $f$ of the camera. The relation between the imaging plane coordinate system and the camera coordinate system in step (1) is shown as formula (3)
$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c} \tag{3}$$
Expressing formula (3) as formula (4) in homogeneous coordinate and matrix form
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{4}$$
An image physical coordinate system $(O_1\text{-}xy)$ measured in physical units (millimetres) is established, as shown in fig. 4. The image physical coordinate system is also a planar two-dimensional rectangular coordinate system, and a point $(x, y)$ represents image coordinates in millimetres.
Let the principal point $O_1$ have coordinates $(u_0, v_0)$ in the $(u, v)$ pixel coordinate system, and let the physical dimensions of each pixel along the x-axis and y-axis directions of the image physical coordinate system be $k$ and $l$ (in millimetres), respectively.
Preferably, the relation between the pixel coordinates of any point in the image of step (1) and its coordinates in the imaging plane (image physical) coordinate system is formula (5)
$$u = \frac{x}{k} + u_0, \qquad v = \frac{y}{l} + v_0 \tag{5}$$
Expressing equation (5) as equation (6) in homogeneous coordinate and matrix form
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/k & 0 & u_0 \\ 0 & 1/l & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{6}$$
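Equations (5)–(6) amount to a single matrix multiply; the pixel pitch and principal point below are assumed example values, not calibration results:

```python
import numpy as np

k, l = 0.005, 0.005          # physical pixel size along x and y, mm (assumed)
u0, v0 = 320.0, 240.0        # principal point in pixels (assumed)

# The pixel-mapping matrix of eq. (6).
K = np.array([[1.0 / k, 0.0, u0],
              [0.0, 1.0 / l, v0],
              [0.0, 0.0, 1.0]])

xy1 = np.array([0.1, -0.05, 1.0])   # a point on the imaging plane, mm
u, v, _ = K @ xy1                    # its pixel coordinates per eq. (6)
```

A displacement of 0.1 mm at a 0.005 mm pixel pitch is 20 pixels, so the point lands at (340, 230) relative to the assumed principal point.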
Preferably, in the step (1), the three-dimensional coordinates of the three-dimensional space point and the pixel coordinates of the three-dimensional space point in the image are measured, the formulas (1) to (6) are combined, the mathematical relationship of the formula (7) is established, the internal and external parameters of the camera are obtained, and the calibration of the camera is completed:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/k & 0 & u_0 & 0 \\ 0 & f/l & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{7}$$
wherein $M$ is the $3 \times 4$ projection matrix, the product of the camera intrinsic parameter matrix $M_1$ and the extrinsic parameter matrix $M_2$.
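Equation (7) composes the intrinsic and extrinsic matrices into one 3×4 projection. A numpy sketch, with illustrative parameter values rather than calibration results from the patent:

```python
import numpy as np

# Intrinsic matrix M1 of eq. (7); f, k, l, u0, v0 are assumed values.
f = 4.0                      # focal length, mm
k = l = 0.005                # pixel size, mm
u0, v0 = 320.0, 240.0        # principal point, pixels
M1 = np.array([[f / k, 0.0, u0, 0.0],
               [0.0, f / l, v0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])

# Extrinsic matrix M2: camera axes aligned with the world axes,
# world origin 5 units in front of the camera.
M2 = np.eye(4)
M2[:3, 3] = [0.0, 0.0, 5.0]

M = M1 @ M2                  # the 3x4 projection matrix of eq. (7)

Pw = np.array([1.0, 1.0, 0.0, 1.0])   # a world point, homogeneous
uvw = M @ Pw
u, v = uvw[:2] / uvw[2]               # divide by Z_c to obtain pixels
```

Dividing by the third component performs the perspective division by $Z_c$; for this pose the point projects to (480, 400).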
Preferably, the features of step (3) are: depth features of the image, conventional features, or a combination of the two — for example depth features extracted by a VGG-Net network, conventional features such as the Histogram of Oriented Gradients (HOG), or any combination thereof. When several features are combined, their resolutions generally differ, so an interpolation operation is needed to convert features of different resolutions to the same resolution before the combination of the different features can be completed.
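As a sketch of this resolution-matching step — the function and array sizes are illustrative assumptions, not the patent's specific interpolation — a nearest-neighbour upsample suffices to bring a coarse feature map onto a common grid:

```python
import numpy as np

def upsample_nearest(feat, out_hw):
    """Nearest-neighbour interpolation: map each output cell back to the
    nearest source cell so a low-resolution feature map matches a finer one."""
    H, W = feat.shape[:2]
    oh, ow = out_hw
    rows = np.arange(oh) * H // oh
    cols = np.arange(ow) * W // ow
    return feat[np.ix_(rows, cols)]

hog = np.ones((16, 16))      # e.g. one HOG channel at cell resolution
deep = np.zeros((64, 64))    # e.g. one CNN feature channel at finer resolution

# After interpolation both maps share a grid and can be stacked channel-wise.
combined = np.stack([upsample_nearest(hog, (64, 64)), deep], axis=-1)
```

Bilinear interpolation would give smoother upsampled maps; nearest-neighbour is shown only because it keeps the sketch dependency-free.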
Preferably, in the coordinate transformation of step (7), when computing the homogeneous transformation of the target centroid, a typical target height of 880 millimetres is assumed and substituted into the homogeneous transformation to obtain the position coordinates of the target in three-dimensional space.
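Recovering a 3D position from a single pixel is only possible because the target's height above the ground is assumed known (the 880 mm typical value); inverting eq. (7) under that constraint reduces to a 3×3 linear solve. The projection matrix and the point below are illustrative:

```python
import numpy as np

def backproject_known_height(M, u, v, Z0):
    """Invert Z_c * [u, v, 1]^T = M [X, Y, Z0, 1]^T when the world height
    Z0 is known: the unknowns (X, Y, Z_c) satisfy a 3x3 linear system."""
    A = np.array([[M[0, 0], M[0, 1], -u],
                  [M[1, 0], M[1, 1], -v],
                  [M[2, 0], M[2, 1], -1.0]])
    b = -(M[:, 2] * Z0 + M[:, 3])
    X, Y, Zc = np.linalg.solve(A, b)
    return np.array([X, Y, Z0]), Zc

# Illustrative projection matrix (world origin 5 m in front of the camera).
M = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 5.0]])

Z0 = 0.88                               # assumed target height, metres
Pw = np.array([1.2, -0.4, Z0])          # ground-truth world point
uvw = M @ np.append(Pw, 1.0)
u, v = uvw[:2] / uvw[2]                 # its pixel coordinates
P_rec, Zc = backproject_known_height(M, u, v, Z0)
```

The round trip project-then-backproject recovers the original world point exactly, since the height constraint removes the single-camera depth ambiguity.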
It will be understood by those skilled in the art that all or part of the steps of the above method may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, an optical disk, or a memory card, and when executed it performs the steps of the above method. Accordingly, the invention also includes a single-camera based target tracking and trajectory three-dimensional reproduction apparatus, generally represented as functional blocks corresponding to the steps of the method. The apparatus comprises:
the system comprises a camera (11), a video acquisition device (12) and a computer (13);
the camera is a photoelectric sensing device: it maps the optical signal of the area to be monitored onto an electronic imaging device, where the optical signal is converted into a corresponding electrical signal, quantized, and encoded;
the video acquisition card is used for acquiring and compressing a video coding signal output by the camera and transmitting a target video to the computer for processing according to a specified communication protocol and a transmission interface between the video acquisition card and the computer;
the computer processes the video information obtained by the video acquisition card to obtain the position coordinates of the target in each frame of the video, converts the coordinates of the target in the image into a three-dimensional space according to the calibration result, and displays and outputs the three-dimensional space.
Preferably, the computer (13) is, including but not limited to, an X86-architecture computer or an Arm processing platform.
Compared with the prior art, the invention has a remarkable effect: the single-camera target tracking and three-dimensional track reproduction method reproduces the track of a tracked target in three dimensions using only a single camera, which facilitates deeper subsequent analysis of the target. It is particularly suitable for complex scenes such as rooms, storehouses and workshops.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (10)

1. A target tracking and track three-dimensional reproduction method based on a single camera, characterized by comprising the following steps:
(1) Calibrating a camera, and establishing a homogeneous transformation matrix of a camera coordinate system, an imaging plane coordinate system and a three-dimensional space coordinate system;
(2) Collecting a video image of a region to be monitored by an image collecting card and a camera;
(3) Extracting characteristic information of a target in a wave gate;
(4) Performing related filtering operation by using the extracted features and corresponding features in the target wave gate in the search frame;
(5) Calculating response values of the target at different positions in the search area according to the calculation result of the step (4);
(6) Determining the size and the position of the target frame according to the calculation result of the step (5), and determining the coordinates of the target centroid in the image;
(7) Performing coordinate transformation on the centroid coordinates of the target in the step (6) in the image according to the homogeneous transformation obtained in the step (1);
(8) Calculating according to the method provided in the step (7) to obtain the coordinates of the target in the three-dimensional space;
(9) Outputting and displaying the calculation result of step (8).
2. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 1, wherein: and (2) calibrating the camera in the step (1), adopting a camera linear imaging model, manually measuring the position coordinates of the target in a three-dimensional space and the corresponding pixel coordinates in an image coordinate system by adopting a chessboard plane on the basis of a two-step method of Tsai, establishing a corresponding relation, and identifying the coordinate transformation model coefficient by utilizing a Levenberg-Marquardt method.
3. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 2, wherein: the relation between the camera coordinate system and the three-dimensional space coordinate system in the step (1) is expressed by a rotation matrix R and a translation matrix t, and the transformation matrix is a formula (1)
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \tag{1}$$
Its homogeneous coordinate is expressed as formula (2)
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{2}$$
Wherein t is a three-dimensional translation vector, $t = [t_x \; t_y \; t_z]^T$, $0 = [0 \; 0 \; 0]^T$; the rotation matrix R is a 3 × 3 orthonormal matrix.
4. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 3, wherein: in the imaging model of the camera, the imaging plane coordinate system is a two-dimensional rectangular coordinate system whose origin $O_I$ is the intersection of the $Z_c$ axis of the camera coordinate system with the imaging plane; its X axis is parallel to $X_c$, its Y axis is parallel to $Y_c$, and $|OO_I|$ equals the focal length $f$ of the camera; the relation between the imaging plane coordinate system of step (1) and the camera coordinate system is expressed as formula (3)
$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c} \tag{3}$$
Expressing formula (3) as formula (4) in homogeneous coordinate and matrix form
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{4}$$
5. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 4, wherein: the relation between the pixel coordinates of any point in the image of step (1) and its coordinates in the imaging plane coordinate system is formula (5)
$$u = \frac{x}{k} + u_0, \qquad v = \frac{y}{l} + v_0 \tag{5}$$
Expressing equation (5) as equation (6) in homogeneous coordinate and matrix form
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/k & 0 & u_0 \\ 0 & 1/l & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{6}$$
6. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 5, wherein: the step (1) establishes the mathematical relationship of the formula (7) by measuring the three-dimensional coordinates of the three-dimensional space points and the pixel coordinates thereof in the image, obtains the internal and external parameters of the camera, and finishes the calibration of the camera:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/k & 0 & u_0 & 0 \\ 0 & f/l & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{7}$$
wherein $M$ is the $3 \times 4$ projection matrix, the product of the camera intrinsic parameter matrix $M_1$ and the extrinsic parameter matrix $M_2$.
7. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 6, wherein: the features of step (3) are: depth features of the image, conventional features, or a combination of depth and conventional features; when a plurality of features are combined, an interpolation operation converts features of different resolutions to the same resolution to complete the combination of the different features.
8. The single-camera based target tracking and trajectory three-dimensional reconstruction method according to claim 7, wherein: and (4) performing coordinate transformation in the step (7), setting the height typical value to be 880 millimeters when calculating the homogeneous transformation of the centroid of the target, and performing calculation by introducing the homogeneous transformation to obtain the position coordinates of the target in the three-dimensional space.
9. The single-camera-based target tracking and trajectory three-dimensional reproduction apparatus, which is implemented according to the single-camera-based target tracking and trajectory three-dimensional reproduction method of claim 1, characterized in that: it includes: the system comprises a camera (11), a video acquisition device (12) and a computer (13);
the camera is a sensing device of photoelectric signals, and maps the optical signals of the area to be monitored onto the electronic imaging device, so that the optical signals are converted into corresponding electric signals, quantized and encoded;
the video acquisition card is used for acquiring and compressing a video coding signal output by the camera and transmitting a target video to the computer for processing according to a specified communication protocol and a transmission interface between the video acquisition card and the computer;
the computer processes the video information obtained by the video acquisition card to obtain the position coordinates of the target in each frame of the video, converts the coordinates of the target in the image into a three-dimensional space according to the calibration result, and displays and outputs the three-dimensional space.
10. The apparatus of claim 9, wherein: the computer (13) is an X86 architecture computer or Arm processing platform.
CN202011136575.8A 2020-10-19 2020-10-19 Target tracking and track three-dimensional reproduction method and device based on single camera Active CN112396633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011136575.8A CN112396633B (en) 2020-10-19 2020-10-19 Target tracking and track three-dimensional reproduction method and device based on single camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011136575.8A CN112396633B (en) 2020-10-19 2020-10-19 Target tracking and track three-dimensional reproduction method and device based on single camera

Publications (2)

Publication Number Publication Date
CN112396633A (en) 2021-02-23
CN112396633B (en) 2023-02-28

Family

ID=74597127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011136575.8A Active CN112396633B (en) 2020-10-19 2020-10-19 Target tracking and track three-dimensional reproduction method and device based on single camera

Country Status (1)

Country Link
CN (1) CN112396633B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187916B (en) * 2022-09-13 2023-02-17 太极计算机股份有限公司 Method, device, equipment and medium for preventing and controlling epidemic situation in building based on space-time correlation

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103268480A (en) * 2013-05-30 2013-08-28 重庆大学 System and method for visual tracking
CN106204656A (en) * 2016-07-21 2016-12-07 中国科学院遥感与数字地球研究所 Target based on video and three-dimensional spatial information location and tracking system and method
CN106485735A (en) * 2015-09-01 2017-03-08 南京理工大学 Human body target recognition and tracking method based on stereovision technique
CN111339831A (en) * 2020-01-23 2020-06-26 深圳市大拿科技有限公司 Lighting lamp control method and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107481284A (en) * 2017-08-25 2017-12-15 京东方科技集团股份有限公司 Method, apparatus, terminal and the system of target tracking path accuracy measurement


Non-Patent Citations (2)

Title
A dynamic fusion target recognition and tracking algorithm; Zhang Qi et al.; Science Technology and Engineering; 2006-07-30 (No. 19); full text *
Imaging and tracking algorithm for moving human targets in through-wall radar based on improved Camshift; Li Songlin et al.; Journal of Computer Applications; 2018-02-10 (No. 02); pp. 528-533 *

Also Published As

Publication number Publication date
CN112396633A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN108839016B (en) Robot inspection method, storage medium, computer equipment and inspection robot
CN101383899A (en) Video image stabilizing method for space based platform hovering
CN110991360B (en) Robot inspection point position intelligent configuration method based on visual algorithm
CN109934873B (en) Method, device and equipment for acquiring marked image
CN112132908A (en) Camera external parameter calibration method and device based on intelligent detection technology
CN112396633B (en) Target tracking and track three-dimensional reproduction method and device based on single camera
CN111598172B (en) Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion
CN114187330A (en) Structural micro-amplitude vibration working mode analysis method based on optical flow method
CN112033408A (en) Paper-pasted object space positioning system and positioning method
CN113221805B (en) Method and device for acquiring image position of power equipment
CN110827355B (en) Moving target rapid positioning method and system based on video image coordinates
CN113763466A (en) Loop detection method and device, electronic equipment and storage medium
CN109410272B (en) Transformer nut recognition and positioning device and method
CN114872055B (en) SCARA robot assembly control method and system
CN116310263A (en) Pointer type aviation horizon instrument indication automatic reading implementation method
CN110807416A (en) Digital instrument intelligent recognition device and method suitable for mobile detection device
CN114494427A (en) Method, system and terminal for detecting illegal behavior of person standing under suspension arm
CN114184127A (en) Single-camera target-free building global displacement monitoring method
CN111399634B (en) Method and device for recognizing gesture-guided object
Li et al. The application of image based visual servo control system for smart guard
CN109855534B (en) Method, system, medium and equipment for judging position of chassis handcart of switch cabinet
CN111462171A (en) Mark point detection tracking method
Yao et al. Identity and body temperature detection system based on image registration
CN106780312B (en) Image space and geographic scene automatic mapping method based on SIFT matching

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant