CN111273701A - Visual control system and control method for holder - Google Patents

Visual control system and control method for holder Download PDF

Info

Publication number
CN111273701A
CN111273701A CN202010128998.9A CN202010128998A CN111273701A CN 111273701 A CN111273701 A CN 111273701A CN 202010128998 A CN202010128998 A CN 202010128998A CN 111273701 A CN111273701 A CN 111273701A
Authority
CN
China
Prior art keywords
target
coordinate system
angle
measured
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010128998.9A
Other languages
Chinese (zh)
Other versions
CN111273701B (en
Inventor
张云志
梁尧森
李嘉滔
吴泳庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN202010128998.9A priority Critical patent/CN111273701B/en
Publication of CN111273701A publication Critical patent/CN111273701A/en
Application granted granted Critical
Publication of CN111273701B publication Critical patent/CN111273701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00Control of position or direction
    • G05D3/12Control of position or direction using feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual control system and a control method of a holder, wherein the control method comprises the steps of obtaining environmental information to obtain a visual image; identifying a target to be detected from the visual image; calculating the three-dimensional coordinates of the target to be measured in a camera coordinate system; calculating a visual control angle according to the three-dimensional coordinate of the target to be measured in the camera coordinate system; calculating the three-dimensional coordinates of the target to be measured in a ground coordinate system; calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system; and controlling a holder motor by using a fuzzy control algorithm according to the visual control angle and the prediction compensation angle. According to the technical scheme, when the pan-tilt motor is controlled, the yaw angle prediction compensation value and the pitch angle prediction compensation value of the target to be detected, which are obtained by combining the three-dimensional coordinates of the target to be detected in the ground coordinate system and the gyroscope detection data, are referred, so that the response speed of the operation system and the control accuracy are improved.

Description

Visual control system and control method for holder
Technical Field
The invention relates to the technical field of machine vision, in particular to a holder vision control system and a control method.
Background
With the rapid development of target detection technology, particularly convolutional deep neural network, the visual system is more intelligent and robust, the task that the computer vision can solve is more complex and various, and the application field is wider. The visual system with the tracking function can be applied to the pan-tilt control of the mobile robot and is used for detecting a dynamic target. The vision tracking technology is mature step by step, gradually becomes the core technology of modern robots, and is applied to the fields of robot competition, picking systems, handheld cloud deck systems, unmanned aerial vehicles and the like.
In a traditional holder visual system, a visual tracking method only uses two-dimensional target detection, and takes the deviation between a target and a holder mechanism as a control numerical value or uses absolute coordinates as a control numerical value, and the two methods cannot have the characteristics of quick response and stability and cannot be well applied to variable-view-field random dynamic target tracking.
Disclosure of Invention
The invention aims to provide a visual control system and a control method of a holder, which are used for solving one or more technical problems in the prior art and at least providing a beneficial selection or creation condition.
The technical scheme adopted for solving the technical problems is as follows:
a control method of a holder vision system comprises the following steps:
step 100, acquiring environmental information to obtain a visual image, and preprocessing the visual image;
step 200, identifying a target to be detected from the visual image;
step 300, establishing a camera coordinate system, and calculating the three-dimensional coordinate of the target to be measured in the camera coordinate system;
step 400, calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
500, establishing a ground coordinate system, and calculating a three-dimensional coordinate of the target to be measured in the ground coordinate system;
step 600, calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and 700, controlling a holder motor by utilizing a fuzzy control algorithm according to the vision control angle and the prediction compensation angle.
As a further improvement of the above technical solution, in step 100, the preprocessing the visual image includes sequentially performing binarization processing, noise filtering processing, and opening operation processing on the visual image.
As a further improvement of the above technical solution, the step 200 includes performing an edge detection operation and a feature screening operation on the visual image to obtain a target to be detected, and outputting a projection coordinate of the target to be detected in the visual image.
As a further improvement of the above technical solution, the step 300 includes:
step 310, according to the projection coordinate of the target to be measured in the visual image, passing through a formula
Figure BDA0002395286020000021
Calculating a rotation matrix R3×3And an offset vector t3×1Where z represents the zoom size,
Figure BDA0002395286020000022
representing the projected coordinates of the object to be measured in the visual image,
Figure BDA0002395286020000023
representing an internal reference matrix belonging to the fixed parameters of the image acquisition device,
Figure BDA0002395286020000031
representing homogeneous coordinates in an object coordinate system;
step 320, according to the obtained rotation matrix R3×3And an offset vector t3×1By the formula
Figure BDA0002395286020000032
Calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system
Figure BDA0002395286020000033
As a further improvement of the above technical solution, the step 400 includes passing through a formula
Figure BDA0002395286020000034
Calculating the pitch angle pitch of the target to be measured relative to the camera coordinate system through a formula
Figure BDA0002395286020000035
And calculating the yaw angle yaw of the target to be measured relative to the camera coordinate system.
As a further improvement of the above technical solution, the step 500 includes:
step 510, establishing a ground coordinate system, and reading a yaw angle numerical value and a pitch angle numerical value of the gyroscope;
step 520, according to the formula
Figure BDA0002395286020000036
Calculating the three-dimensional coordinates of the target to be measured in a ground coordinate system, wherein α represents the yaw angle value of the gyroscope, β represents the pitch angle value of the gyroscope,
Figure BDA0002395286020000037
and representing the three-dimensional coordinates of the target to be measured in the camera coordinate system.
As a further improvement of the above technical solution, step 600 includes:
610, obtaining a predicted coordinate of the target to be measured in a ground coordinate system by using least square method quadratic fitting;
step 620, calculating a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured, wherein the pitch angle prediction compensation value is
Figure BDA0002395286020000038
The predicted compensation value of the yaw angle is
Figure BDA0002395286020000041
Wherein SpRepresenting the pitch angle proportionality coefficient, SyRepresenting the yaw rate, PxAnd PyAnd representing the predicted coordinates of the target to be measured in the ground coordinate system in the step 610.
The invention also discloses a holder vision system, which comprises:
the image acquisition module is used for acquiring environmental information to obtain a visual image;
the preprocessing module is used for preprocessing the visual image;
the identification module is used for identifying a target to be detected from the visual image;
the first coordinate calculation module is used for establishing a camera coordinate system and calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system;
the first angle calculation module is used for calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
the second coordinate calculation module is used for establishing a ground coordinate system and calculating the three-dimensional coordinates of the target to be measured in the ground coordinate system;
the second angle calculation module is used for calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and the control module is used for controlling the holder motor by utilizing a fuzzy control algorithm according to the visual control angle and the prediction compensation angle.
The invention has the beneficial effects that: according to the technical scheme, when the pan-tilt motor is controlled, the yaw angle prediction compensation value and the pitch angle prediction compensation value of the target to be detected, which are obtained by combining the three-dimensional coordinates of the target to be detected in the ground coordinate system and gyroscope detection data, are referred, so that the response speed of an operation system and the control accuracy are improved.
Drawings
The invention is further described with reference to the accompanying drawings and examples;
FIG. 1 is a flow chart of a control method of the present invention.
Detailed Description
Reference will now be made in detail to the present preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, if words such as "a plurality" are described, the meaning is one or more, the meaning of a plurality is two or more, more than, less than, more than, etc. are understood as excluding the present number, and more than, less than, etc. are understood as including the present number.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
Referring to fig. 1, the present application discloses a control method of a pan-tilt-zoom vision system, a first embodiment of which comprises the following steps:
step 100, acquiring environmental information to obtain a visual image, and preprocessing the visual image;
step 200, identifying a target to be detected from the visual image;
step 300, establishing a camera coordinate system, and calculating the three-dimensional coordinate of the target to be measured in the camera coordinate system;
step 400, calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
500, establishing a ground coordinate system, and calculating a three-dimensional coordinate of the target to be measured in the ground coordinate system;
step 600, calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and 700, adding the vision control angle and the prediction compensation angle according to the vision control angle and the prediction compensation angle, and controlling a holder motor by using a fuzzy control algorithm according to the obtained data.
Specifically, in the embodiment, when the pan/tilt/zoom motor is controlled, a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured, which are obtained by combining a three-dimensional coordinate of the target to be measured in the ground coordinate system and gyroscope detection data, are referred to, so that the response speed of the operation system and the control accuracy are improved; in addition, the embodiment adopts the fuzzy control algorithm to carry out closed-loop control on the holder motor, so that the accuracy of controlling the holder motor can be further improved.
Further as a preferred implementation manner, in step 100 of this embodiment, the preprocessing the visual image includes sequentially performing binarization processing, noise filtering processing, and opening operation processing on the visual image.
Further as a preferred implementation manner, in this embodiment, the step 200 includes performing an edge detection operation and a feature screening operation on the visual image to obtain a target to be detected, and outputting a projection coordinate of the target to be detected in the visual image.
Further as a preferred implementation manner, in this embodiment, the step 300 includes:
step 310, according to the projection coordinate of the target to be measured in the visual image, passing through a formula
Figure BDA0002395286020000061
Calculating a rotation matrix R3×3And an offset vector t3×1Where z represents the zoom size,
Figure BDA0002395286020000062
representing the projected coordinates of the object to be measured in the visual image,
Figure BDA0002395286020000071
representing an internal reference matrix belonging to the fixed parameters of the image acquisition device,
Figure BDA0002395286020000072
representing homogeneous coordinates in an object coordinate system;
step 320, according to the obtained rotation matrix R3×3And an offset vector t3×1By the formula
Figure BDA0002395286020000073
Calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system
Figure BDA0002395286020000074
Further as a preferred implementation, in this embodiment, the step 400 includes passing through a formula
Figure BDA0002395286020000075
Calculating the pitch angle pitch of the target to be measured relative to the camera coordinate system through a formula
Figure BDA0002395286020000076
And calculating the yaw angle yaw of the target to be measured relative to the camera coordinate system.
Further, in a preferred embodiment, in this embodiment, the step 500 includes:
step 510, establishing a ground coordinate system, and reading a yaw angle numerical value and a pitch angle numerical value of the gyroscope;
step 520, according to the formula
Figure BDA0002395286020000077
Calculating the three-dimensional coordinates of the target to be measured in a ground coordinate system, wherein α represents the yaw angle value of the gyroscope, β represents the pitch angle value of the gyroscope,
Figure BDA0002395286020000078
and representing the three-dimensional coordinates of the target to be measured in the camera coordinate system.
Further as a preferred implementation manner, in this embodiment, the step 600 includes:
610, obtaining a predicted coordinate of the target to be measured in a ground coordinate system by using least square method quadratic fitting;
step 620, calculating a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured, wherein the pitch angle prediction compensation value is
Figure BDA0002395286020000081
The predicted compensation value of the yaw angle is
Figure BDA0002395286020000082
Wherein SpRepresenting the pitch angle proportionality coefficient, SyRepresenting the yaw rate, PxAnd PyAnd representing the predicted coordinates of the target to be measured in the ground coordinate system in the step 610.
The application also discloses cloud platform visual system simultaneously, its first embodiment includes:
the image acquisition module is used for acquiring environmental information to obtain a visual image;
the preprocessing module is used for preprocessing the visual image;
the identification module is used for identifying a target to be detected from the visual image;
the first coordinate calculation module is used for establishing a camera coordinate system and calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system;
the first angle calculation module is used for calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
the second coordinate calculation module is used for establishing a ground coordinate system and calculating the three-dimensional coordinates of the target to be measured in the ground coordinate system;
the second angle calculation module is used for calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and the control module is used for controlling the holder motor by utilizing a fuzzy control algorithm according to the visual control angle and the prediction compensation angle.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that the present invention is not limited to the details of the embodiments shown and described, but is capable of numerous equivalents and substitutions without departing from the spirit of the invention as set forth in the claims appended hereto.

Claims (8)

1. A control method of a holder vision system is characterized in that: the method comprises the following steps:
step 100, acquiring environmental information to obtain a visual image, and preprocessing the visual image;
step 200, identifying a target to be detected from the visual image;
step 300, establishing a camera coordinate system, and calculating the three-dimensional coordinate of the target to be measured in the camera coordinate system;
step 400, calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
500, establishing a ground coordinate system, and calculating a three-dimensional coordinate of the target to be measured in the ground coordinate system;
step 600, calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and 700, controlling a holder motor by utilizing a fuzzy control algorithm according to the vision control angle and the prediction compensation angle.
2. The method of claim 1, wherein the method comprises: in the step 100, the preprocessing the visual image includes sequentially performing binarization processing, noise filtering processing, and opening operation processing on the visual image.
3. A method of controlling a pan-tilt-zoom-vision system according to claim 2, characterized in that: the step 200 includes performing edge detection operation and feature screening operation on the visual image to obtain a target to be detected, and outputting a projection coordinate of the target to be detected in the visual image.
4. The method of claim 1, wherein the method comprises: the step 300 includes:
step 310, according to the projection coordinate of the target to be measured in the visual image, passing through a public keyFormula (II)
Figure FDA0002395286010000021
Calculating a rotation matrix R3×3And an offset vector t3×1Where z represents the zoom size,
Figure FDA0002395286010000022
representing the projected coordinates of the object to be measured in the visual image,
Figure FDA0002395286010000023
the reference matrix is represented as a function of the reference matrix,
Figure FDA0002395286010000024
representing homogeneous coordinates in an object coordinate system;
step 320, according to the obtained rotation matrix R3×3And an offset vector t3×1By the formula
Figure FDA0002395286010000025
Calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system
Figure FDA0002395286010000026
5. The method of claim 4, wherein the method comprises: the step 400 includes passing through a formula
Figure FDA0002395286010000027
Calculating the pitch angle pitch of the target to be measured relative to the camera coordinate system through a formula
Figure FDA0002395286010000028
And calculating the yaw angle yaw of the target to be measured relative to the camera coordinate system.
6. The method of claim 5, wherein: the step 500 comprises:
step 510, establishing a ground coordinate system, and reading a yaw angle numerical value and a pitch angle numerical value of the gyroscope;
step 520, according to the formula
Figure FDA0002395286010000029
Calculating the three-dimensional coordinates of the target to be measured in a ground coordinate system, wherein α represents the yaw angle value of the gyroscope, β represents the pitch angle value of the gyroscope,
Figure FDA0002395286010000031
and representing the three-dimensional coordinates of the target to be measured in the camera coordinate system.
7. The method of claim 6, wherein: step 600 comprises:
610, obtaining a predicted coordinate of the target to be measured in a ground coordinate system by using least square method quadratic fitting;
step 620, calculating a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured, wherein the pitch angle prediction compensation value is
Figure FDA0002395286010000032
The predicted compensation value of the yaw angle is
Figure FDA0002395286010000033
Wherein SpRepresenting the pitch angle proportionality coefficient, SyRepresenting the yaw rate, PxAnd PyAnd representing the predicted coordinates of the target to be measured in the ground coordinate system in the step 610.
8. A cloud platform vision system which is characterized in that: the method comprises the following steps:
the image acquisition module is used for acquiring environmental information to obtain a visual image;
the preprocessing module is used for preprocessing the visual image;
the identification module is used for identifying a target to be detected from the visual image;
the first coordinate calculation module is used for establishing a camera coordinate system and calculating the three-dimensional coordinates of the target to be measured in the camera coordinate system;
the first angle calculation module is used for calculating a visual control angle according to the three-dimensional coordinate of the target to be detected in the camera coordinate system, wherein the visual control angle comprises a yaw angle and a pitch angle of the target to be detected relative to the camera coordinate system;
the second coordinate calculation module is used for establishing a ground coordinate system and calculating the three-dimensional coordinates of the target to be measured in the ground coordinate system;
the second angle calculation module is used for calculating a prediction compensation angle according to the three-dimensional coordinate of the target to be measured in the ground coordinate system, wherein the prediction compensation angle comprises a yaw angle prediction compensation value and a pitch angle prediction compensation value of the target to be measured;
and the control module is used for controlling the holder motor by utilizing a fuzzy control algorithm according to the visual control angle and the prediction compensation angle.
CN202010128998.9A 2020-02-28 2020-02-28 Cloud deck vision control system and control method Active CN111273701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010128998.9A CN111273701B (en) 2020-02-28 2020-02-28 Cloud deck vision control system and control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010128998.9A CN111273701B (en) 2020-02-28 2020-02-28 Cloud deck vision control system and control method

Publications (2)

Publication Number Publication Date
CN111273701A true CN111273701A (en) 2020-06-12
CN111273701B CN111273701B (en) 2023-10-31

Family

ID=71000426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010128998.9A Active CN111273701B (en) 2020-02-28 2020-02-28 Cloud deck vision control system and control method

Country Status (1)

Country Link
CN (1) CN111273701B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949478A (en) * 2021-03-01 2021-06-11 浙江国自机器人技术股份有限公司 Target detection method based on holder camera
CN113489970A (en) * 2021-06-30 2021-10-08 浙江大华技术股份有限公司 Method and device for correcting pan-tilt camera, storage medium and electronic device
CN115474895A (en) * 2022-09-29 2022-12-16 山东探微医疗技术有限公司 OCT (optical coherence tomography) fundus imaging device and method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080050042A1 (en) * 2006-05-31 2008-02-28 Zhang Guangjun Hardware-in-the-loop simulation system and method for computer vision
WO2018076977A1 (en) * 2016-10-27 2018-05-03 南京阿凡达机器人科技有限公司 Height measurement method based on monocular machine vision
CN109084724A (en) * 2018-07-06 2018-12-25 西安理工大学 A kind of deep learning barrier distance measuring method based on binocular vision
CN109389650A (en) * 2018-09-30 2019-02-26 京东方科技集团股份有限公司 A kind of scaling method of in-vehicle camera, device, vehicle and storage medium
CN109753076A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane vision tracing implementing method
CN110296691A (en) * 2019-06-28 2019-10-01 上海大学 Merge the binocular stereo vision measurement method and system of IMU calibration
US20190378294A1 (en) * 2017-02-23 2019-12-12 Hangzhou Hikvision Digital Technology Co., Ltd. Stereo camera and height acquisition method thereof and height acquisition system
CN110622091A (en) * 2018-03-28 2019-12-27 深圳市大疆创新科技有限公司 Cloud deck control method, device and system, computer storage medium and unmanned aerial vehicle
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080050042A1 (en) * 2006-05-31 2008-02-28 Zhang Guangjun Hardware-in-the-loop simulation system and method for computer vision
WO2018076977A1 (en) * 2016-10-27 2018-05-03 南京阿凡达机器人科技有限公司 Height measurement method based on monocular machine vision
US20190378294A1 (en) * 2017-02-23 2019-12-12 Hangzhou Hikvision Digital Technology Co., Ltd. Stereo camera and height acquisition method thereof and height acquisition system
CN109753076A (en) * 2017-11-03 2019-05-14 南京奇蛙智能科技有限公司 A kind of unmanned plane vision tracing implementing method
CN110622091A (en) * 2018-03-28 2019-12-27 深圳市大疆创新科技有限公司 Cloud deck control method, device and system, computer storage medium and unmanned aerial vehicle
CN109084724A (en) * 2018-07-06 2018-12-25 西安理工大学 A kind of deep learning barrier distance measuring method based on binocular vision
CN109389650A (en) * 2018-09-30 2019-02-26 京东方科技集团股份有限公司 A kind of scaling method of in-vehicle camera, device, vehicle and storage medium
CN110296691A (en) * 2019-06-28 2019-10-01 上海大学 Merge the binocular stereo vision measurement method and system of IMU calibration
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949478A (en) * 2021-03-01 2021-06-11 浙江国自机器人技术股份有限公司 Target detection method based on holder camera
CN113489970A (en) * 2021-06-30 2021-10-08 浙江大华技术股份有限公司 Method and device for correcting pan-tilt camera, storage medium and electronic device
CN115474895A (en) * 2022-09-29 2022-12-16 山东探微医疗技术有限公司 OCT (optical coherence tomography) fundus imaging device and method
CN115474895B (en) * 2022-09-29 2024-05-28 山东探微医疗技术有限公司 OCT fundus imaging device and method

Also Published As

Publication number Publication date
CN111273701B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111932588B (en) Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN108932736B (en) Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
CN112734852B (en) Robot mapping method and device and computing equipment
CN109345593B (en) Camera posture detection method and device
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN111273701A (en) Visual control system and control method for holder
CN111598952B (en) Multi-scale cooperative target design and online detection identification method and system
CN112132874A (en) Calibration-board-free different-source image registration method and device, electronic equipment and storage medium
CN114022560A (en) Calibration method and related device and equipment
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN111964680A (en) Real-time positioning method of inspection robot
CN110992424A (en) Positioning method and system based on binocular vision
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
CN114972421A (en) Workshop material identification tracking and positioning method and system
CN110070581A (en) Double vision open country localization method, apparatus and system
CN107767366B (en) A kind of transmission line of electricity approximating method and device
CN113378701A (en) Ground multi-AGV state monitoring method based on unmanned aerial vehicle
CN112396634A (en) Moving object detection method, moving object detection device, vehicle and storage medium
CN111402324B (en) Target measurement method, electronic equipment and computer storage medium
CN116486290A (en) Unmanned aerial vehicle monitoring and tracking method and device, electronic equipment and storage medium
Duan et al. Image digital zoom based single target apriltag recognition algorithm in large scale changes on the distance
CN115861352A (en) Monocular vision, IMU and laser radar data fusion and edge extraction method
CN113610001B (en) Indoor mobile terminal positioning method based on combination of depth camera and IMU
US12002371B2 (en) Neuromorphic cameras for aircraft

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant