CN111768449A - Object grabbing method combining binocular vision with deep learning - Google Patents

Object grabbing method combining binocular vision with deep learning

Info

Publication number
CN111768449A
CN111768449A (application CN201910254109.0A)
Authority
CN
China
Prior art keywords
target
information
matching
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910254109.0A
Other languages
Chinese (zh)
Inventor
曾洪庆
钱超超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Vizum Intelligent Technology Co ltd
Original Assignee
Beijing Vizum Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Vizum Intelligent Technology Co ltd filed Critical Beijing Vizum Intelligent Technology Co ltd
Priority to CN201910254109.0A priority Critical patent/CN111768449A/en
Publication of CN111768449A publication Critical patent/CN111768449A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Abstract

The invention discloses an object grabbing method combining binocular vision and deep learning, which comprises the following steps: acquiring binocular images; performing target identification on the left image and the right image respectively to obtain target region information; calculating a region feature value from each target's region information and matching the left and right targets; calculating the target pose from the target region information and the left-right matching relation; and grabbing with the mechanical actuator. The invention combines binocular vision with a self-adaptive deep learning algorithm model and uses that model for feature matching, obtaining more accurate matching features and matching relations; the binocular vision calculation thus becomes more accurate and stable, improving the efficiency and reliability with which the mechanical arm positions and grabs objects.

Description

Object grabbing method combining binocular vision with deep learning
Technical Field
The invention belongs to the technical field of mechanical arm positioning and grabbing application, and particularly relates to an object grabbing method combining binocular vision and deep learning.
Background
Object identification and positioning determine the efficiency and reliability with which a mechanical arm positions and grabs an object; based on the identification and positioning of objects by binocular stereo vision, the position information of an object can be obtained quickly, enabling the arm to position and grab it. Binocular stereo vision is an important branch of computer vision: two cameras photograph the same object from different positions to obtain two images, a matching algorithm finds the corresponding points in the two images, the disparity is calculated, and the distance information of the object in the real world is recovered from the triangulation principle. In actual use, every matching algorithm extracts imperfect matching features because of its own shortcomings, and objects with missing texture make feature extraction even harder, so the matching effect is imperfect.
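The disparity-to-distance relation mentioned above can be sketched in a few lines. This is a minimal illustration of the triangulation principle, not the patent's own formula, and it assumes a rectified camera pair with focal length f in pixels and baseline B in metres:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Recover depth Z from disparity d via Z = f * B / d (rectified stereo)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Hypothetical numbers: f = 800 px, baseline = 0.12 m, disparity = 8 px
z = depth_from_disparity(800.0, 0.12, 8.0)  # about 12 metres
```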
Deep learning can automatically learn useful features through supervised training, yielding more abstract, higher-level feature representations, and its distributed and parallel computing capability is its greatest advantage. Applying deep learning to the matching process of binocular vision makes up for the shortcomings of ordinary binocular vision and has high practical value.
Disclosure of Invention
To address these problems, the invention provides an object grabbing method combining binocular vision and deep learning. The technical scheme adopted by the invention is as follows:
an object grabbing method combining binocular vision and deep learning comprises the following steps: acquiring binocular images; respectively carrying out target identification on the left image and the right image to obtain target area information; calculating a region characteristic value according to the region information of each target, and matching left and right targets; calculating the target pose by using the target area information and the matching relation of the left image and the right image; and the mechanical actuator performs grabbing.
Further, the acquiring binocular images comprises: carrying out three-dimensional calibration on the binocular camera; respectively acquiring a left image and a right image of a target object through a left camera and a right camera of a binocular camera; and performing epipolar line correction on the left image and the right image to align the corrected left image and right image.
Further, the performing target identification on the left image and the right image respectively to obtain target region information includes: cropping the image to a specified size; inputting the cropped image into the self-adaptive deep learning algorithm for processing; and outputting the detection result as the basis for subsequent matching.
Further, the self-adaptive deep learning algorithm is based on the classic target detection algorithm SSD: at the CONV4_3 layer of the original algorithm, multi-level feature maps are up-sampled using the FPN idea to improve small-target detection accuracy.
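The FPN idea referred to here, up-sampling a deeper and coarser feature map and merging it into the higher-resolution CONV4_3-level map, can be sketched as follows. The channel counts and spatial sizes are illustrative assumptions (SSD300's CONV4_3 map is 38 × 38), and real SSD/FPN implementations use learned lateral convolutions rather than a plain addition of raw maps:

```python
import numpy as np

def upsample2x_nearest(fmap):
    """Nearest-neighbour 2x up-sampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(shallow, deep):
    """FPN-style merge: up-sample the deeper, coarser map and add it
    element-wise to the shallower, higher-resolution map."""
    return shallow + upsample2x_nearest(deep)

# Illustrative shapes only:
shallow = np.ones((256, 38, 38))    # CONV4_3-level resolution
deep = 2 * np.ones((256, 19, 19))   # next, coarser level
merged = fpn_merge(shallow, deep)   # shape (256, 38, 38)
```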
Further, the calculating a region feature value according to the information of each target region and performing matching of the left target and the right target includes: calculating a reference anchor point according to the regional information of the left image and the right image; calculating the characteristic information P of each block of regional information according to the anchor points; and matching the left target and the right target.
Further, the calculating the reference anchor point according to the region information of the left and right images includes: the calculation of the anchor point is completed by the size and the center point of each area, and the specific method is as follows:
Figure BDA0002013208160000021
where Qi is the target region size and Ki is the target region center.
Further, the calculating the feature information P of each block of region information according to the anchor point includes: from the anchor point information (x0, y0) and the region information (x, y, w, h, t), calculating the coordinate offset (x − x0, y − y0) and the region descriptor (w·h, t), which together form the feature information P = (x − x0, y − y0, w·h, t).
Further, the matching of the left and right targets comprises: treating each piece of feature information P as a four-dimensional vector, multiplying each component by its corresponding weight, taking the Euclidean distance between the two weighted vectors as their final degree of difference, and obtaining the matching combination from these degrees of difference using a WTA (Winner Take All) algorithm.
The invention has the beneficial effects that: by combining binocular vision with a self-adaptive deep learning algorithm model and using the model for feature matching, more accurate matching features and matching relations are obtained, the binocular vision calculation result becomes more accurate and stable, and the efficiency and reliability with which the mechanical arm positions and grabs objects are thereby improved.
Drawings
Fig. 1 is a schematic flow chart of an object grabbing method combining binocular vision and deep learning according to the invention.
Detailed Description
The following detailed description of the preferred embodiments of the invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and clearly defines the scope of protection of the invention.
Referring to fig. 1, the embodiment of the present invention specifically includes the following steps:
(1) and carrying out three-dimensional calibration on the binocular camera.
The method specifically comprises the following steps: calibrating the left camera and the right camera of the binocular camera separately to obtain the internal reference matrix A of the binocular camera, the rotation matrix R1 of the left camera, the rotation matrix R2 of the right camera, the translation vector T1 of the left camera, and the translation vector T2 of the right camera; then calculating the rotation matrix R and the translation vector T between the left camera and the right camera according to the following formula:
Figure BDA0002013208160000031
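The formula itself is reproduced above only as an image. The conventional relation between the per-camera extrinsics R1, T1, R2, T2 and the left-to-right transform, which a sketch might use in its place, is R = R2·R1ᵀ, T = T2 − R·T1:

```python
import numpy as np

def stereo_extrinsics(R1, T1, R2, T2):
    """Left-to-right transform from per-camera extrinsics, using the
    conventional relation R = R2 @ R1.T, T = T2 - R @ T1 (a sketch;
    the patent's own formula appears only as an image)."""
    R = R2 @ R1.T
    T = T2 - R @ T1
    return R, T

# Left camera at the origin, right camera shifted 0.1 m along x:
R1, T1 = np.eye(3), np.zeros(3)
R2, T2 = np.eye(3), np.array([0.1, 0.0, 0.0])
R, T = stereo_extrinsics(R1, T1, R2, T2)   # R = I, T = [0.1, 0, 0]
```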
(2) and respectively acquiring a left image and a right image of the target object through a left camera and a right camera of the binocular camera.
(3) And performing epipolar line correction on the left image and the right image to align the corrected left image and right image.
The method specifically comprises the following steps: decomposing the rotation matrix R into two rotation matrices r1 and r2, where r1 and r2 rotate the left camera and the right camera by half each, so that the optical axes of the two cameras become parallel;
aligning the left image and the right image is achieved by:
Figure BDA0002013208160000032
where Rrect is the rotation matrix that aligns the image rows:
Figure BDA0002013208160000041
The rotation matrix Rrect is constructed starting from the direction of the epipole e1, taking the principal point of the left image as origin and the direction of the translation vector from the left camera to the right camera as the principal direction:
Figure BDA0002013208160000042
e2 is orthogonal to e1 and is normalized to a unit vector:
Figure BDA0002013208160000043
where Tx is the component of the translation vector T along the horizontal direction in the plane of the binocular camera, and Ty is its component along the vertical direction in that plane;
e3 is orthogonal to both e1 and e2, completing a right-handed frame; it is calculated by the following formula:
e3 = e1 × e2
According to the physical meaning of the rotation matrix, we have:
Figure BDA0002013208160000044
where α denotes the rotation angle of the left and right cameras in their common plane, 0 ≤ α ≤ 180°; the left camera is rotated by α′ about the e3 direction, and the right camera is rotated by α″ about the e3 direction.
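The construction of Rrect described above can be sketched with NumPy. This follows the e1/e2/e3 recipe in the text, with e3 taken as e1 × e2 so that the rows form a right-handed orthonormal frame; it is a sketch, not the exact matrix shown only as an image:

```python
import numpy as np

def rectification_rotation(T):
    """Row-aligning rotation from the baseline: e1 along T, e2 orthogonal
    to e1 within the camera plane, e3 = e1 x e2 completing a right-handed
    orthonormal frame (a sketch of the construction described in the text)."""
    Tx, Ty = T[0], T[1]
    e1 = T / np.linalg.norm(T)
    e2 = np.array([-Ty, Tx, 0.0]) / np.hypot(Tx, Ty)
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])

R_rect = rectification_rotation(np.array([0.1, 0.0, 0.0]))
# The rows are orthonormal, so R_rect @ R_rect.T equals the identity.
```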
(4) And respectively carrying out target identification on the left image and the right image to obtain target area information.
The method specifically comprises the following steps: cropping the image to 300 × 300 pixels; inputting the cropped image into the self-adaptive deep learning algorithm for processing; and outputting the detection result as the basis for subsequent matching.
(5) And calculating the reference anchor point according to the information of each target area.
The calculation of the anchor point is completed by the size and the center point of each area, and the specific method is as follows:
Figure BDA0002013208160000045
where Qi is the target region size and Ki is the target region center.
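The anchor formula itself appears only as an image. One plausible, purely illustrative reading of "completed by the size and the center point of each area" is a size-weighted average of the region centers Ki with weights Qi; the function below is a hypothetical sketch, not the patent's actual formula:

```python
def reference_anchor(regions):
    """Hypothetical anchor: average of region centers Ki weighted by
    region sizes Qi (the patent's exact formula is shown only as an image)."""
    total = sum(q for q, _ in regions)
    x0 = sum(q * k[0] for q, k in regions) / total
    y0 = sum(q * k[1] for q, k in regions) / total
    return x0, y0

# Two detected regions, each given as (size Qi, center Ki):
anchor = reference_anchor([(100, (10.0, 20.0)), (300, (30.0, 40.0))])
# weighted centre -> (25.0, 35.0)
```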
(6) And calculating the characteristic information P of each block of region information according to the anchor point.
The method specifically comprises the following steps: from the anchor point information (x0, y0) and the region information (x, y, w, h, t), calculating the coordinate offset (x − x0, y − y0) and the region descriptor (w·h, t), which together form the feature information P = (x − x0, y − y0, w·h, t).
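Step (6) maps directly onto a small helper; this is a sketch of the stated construction of P:

```python
def feature_info(anchor, region):
    """Build P = (x - x0, y - y0, w*h, t) from the anchor (x0, y0)
    and a region (x, y, w, h, t)."""
    x0, y0 = anchor
    x, y, w, h, t = region
    return (x - x0, y - y0, w * h, t)

p = feature_info((25.0, 35.0), (30.0, 40.0, 12.0, 8.0, 1))
# offset (5, 5), area 96, type tag 1
```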
(7) And performing left-right matching according to the obtained characteristic information P.
The method specifically comprises the following steps: treating each piece of feature information P as a four-dimensional vector, multiplying each component by its corresponding weight, taking the Euclidean distance between the two weighted vectors as their final degree of difference, and obtaining the matching combination from these degrees of difference using a WTA (Winner Take All) algorithm.
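The weighted Euclidean comparison and winner-take-all selection of step (7) can be sketched as follows; the weights and feature values here are made-up illustrations, and real uses would also need to handle unmatched targets:

```python
import math

def weighted_distance(p, q, weights):
    """Weighted Euclidean distance between two feature vectors P."""
    return math.sqrt(sum((wi * (a - b)) ** 2
                         for wi, a, b in zip(weights, p, q)))

def wta_match(left_feats, right_feats, weights):
    """Winner-Take-All: match each left target to the right target
    with the smallest weighted distance."""
    matches = []
    for i, p in enumerate(left_feats):
        j = min(range(len(right_feats)),
                key=lambda j: weighted_distance(p, right_feats[j], weights))
        matches.append((i, j))
    return matches

left = [(5.0, 5.0, 96.0, 1.0), (-5.0, -5.0, 50.0, 2.0)]
right = [(-4.0, -5.0, 52.0, 2.0), (5.5, 5.0, 95.0, 1.0)]
matches = wta_match(left, right, weights=(1.0, 1.0, 0.1, 1.0))
# left target 0 pairs with right target 1, and 1 with 0
```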
(8) And calculating the three-dimensional coordinates of the characteristic points according to the binocular stereoscopic vision principle by using the obtained matching relationship. The method specifically comprises the following steps:
Let the left camera coordinate system O-xyz be located at the origin of the world coordinate system with no rotation; its image coordinate system is Ol-XlYl and its effective focal length is fl. The right camera coordinate system is Or-xryrzr, its image coordinate system is Or-XrYr, and its effective focal length is fr. From the projection model of the camera we then obtain the following relation:
Figure BDA0002013208160000051
because of the O-xyz coordinate system and Or-xryrzrThe positional relationship between the coordinate systems may be transformed by a spatial transformation matrix MLrExpressed as:
Figure BDA0002013208160000052
Similarly, for a spatial point in the O-xyz coordinate system, the correspondence between the image points of the two cameras can be expressed as:
Figure BDA0002013208160000061
the spatial point three-dimensional coordinates can then be expressed as:
Figure BDA0002013208160000062
therefore, the left and right computer internal parameters/focal length f can be obtained by the computer calibration technologyr,flAnd the image coordinates of the space points in the left camera and the right camera can reconstruct the three-dimensional space coordinates of the measured point.
(9) And the mechanical executing mechanism determines the position of the object according to the acquired three-dimensional coordinates and captures the object.

Claims (8)

1. An object grabbing method combining binocular vision and deep learning is characterized by comprising the following steps: acquiring binocular images; respectively carrying out target identification on the left image and the right image to obtain target area information; calculating a region characteristic value according to the region information of each target, and matching left and right targets; calculating the target pose by using the target area information and the matching relation of the left image and the right image; and the mechanical actuator performs grabbing.
2. The binocular vision combined deep learning object grabbing method according to claim 1, wherein the acquiring of binocular images comprises: carrying out three-dimensional calibration on the binocular camera; respectively acquiring a left image and a right image of a target object through a left camera and a right camera of a binocular camera; and performing epipolar line correction on the left image and the right image to align the corrected left image and right image.
3. The binocular vision combined with deep learning object capture method according to claim 1, wherein the performing of the target recognition on the left and right images respectively to obtain the target area information comprises: cutting the image size to a specified size; inputting the data into a self-adaptive deep learning algorithm for processing; and outputting the detection result as the basis of subsequent matching.
4. The object grabbing method combining binocular vision and deep learning according to claim 3, wherein the self-adaptive deep learning algorithm is based on the classic target detection algorithm SSD, and at the CONV4_3 layer of the original algorithm multi-level feature maps are up-sampled using the FPN idea to improve small-target detection accuracy.
5. The binocular vision combined with deep learning object capturing method according to claim 1, wherein the calculating of the region feature values according to the region information of each object and the matching of the left and right objects comprises: calculating a reference anchor point according to the regional information of the left image and the right image; calculating the characteristic information P of each block of regional information according to the anchor points; and matching the left target and the right target.
6. The method of claim 5, wherein the reference anchor point is calculated from the region information of the left and right images using the size and center point of each region, as follows:
Figure FDA0002013208150000011
where Qi is the target region size and Ki is the target region center.
7. The method of claim 5, wherein the computing of the feature information P of each block of region information according to the anchor point comprises: from the anchor point information (x0, y0) and the region information (x, y, w, h, t), calculating the coordinate offset (x − x0, y − y0) and the region descriptor (w·h, t), which together form the feature information P = (x − x0, y − y0, w·h, t).
8. The method of claim 5, wherein the matching of the left and right targets comprises: treating each piece of feature information P as a four-dimensional vector, multiplying each component by its corresponding weight, taking the Euclidean distance between the two weighted vectors as their final degree of difference, and obtaining the matching combination from these degrees of difference using a WTA (Winner Take All) algorithm.
CN201910254109.0A 2019-03-30 2019-03-30 Object grabbing method combining binocular vision with deep learning Pending CN111768449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910254109.0A CN111768449A (en) 2019-03-30 2019-03-30 Object grabbing method combining binocular vision with deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910254109.0A CN111768449A (en) 2019-03-30 2019-03-30 Object grabbing method combining binocular vision with deep learning

Publications (1)

Publication Number Publication Date
CN111768449A true CN111768449A (en) 2020-10-13

Family

ID=72718687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910254109.0A Pending CN111768449A (en) 2019-03-30 2019-03-30 Object grabbing method combining binocular vision with deep learning

Country Status (1)

Country Link
CN (1) CN111768449A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393524A (en) * 2021-06-18 2021-09-14 常州大学 Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113689422A (en) * 2021-09-08 2021-11-23 理光软件研究所(北京)有限公司 Image processing method and device and electronic equipment
CN113689326A (en) * 2021-08-06 2021-11-23 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN113808197A (en) * 2021-09-17 2021-12-17 山西大学 Automatic workpiece grabbing system and method based on machine learning
CN117409340A (en) * 2023-12-14 2024-01-16 上海海事大学 Unmanned aerial vehicle cluster multi-view fusion aerial photography port monitoring method, system and medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015042460A1 (en) * 2013-09-20 2015-03-26 Camplex, Inc. Surgical visualization systems and displays
CN107192331A (en) * 2017-06-20 2017-09-22 佛山市南海区广工大数控装备协同创新研究院 A kind of workpiece grabbing method based on binocular vision
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
CN108076338A (en) * 2016-11-14 2018-05-25 北京三星通信技术研究有限公司 Image vision processing method, device and equipment
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108229456A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Method for tracking target and device, electronic equipment, computer storage media
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN108647573A (en) * 2018-04-04 2018-10-12 杭州电子科技大学 A kind of military target recognition methods based on deep learning
CN108656107A (en) * 2018-04-04 2018-10-16 北京航空航天大学 A kind of mechanical arm grasping system and method based on image procossing
CN108876855A (en) * 2018-05-28 2018-11-23 哈尔滨工程大学 A kind of sea cucumber detection and binocular visual positioning method based on deep learning
CN109034018A (en) * 2018-07-12 2018-12-18 北京航空航天大学 A kind of low latitude small drone method for barrier perception based on binocular vision
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015042460A1 (en) * 2013-09-20 2015-03-26 Camplex, Inc. Surgical visualization systems and displays
CN108076338A (en) * 2016-11-14 2018-05-25 北京三星通信技术研究有限公司 Image vision processing method, device and equipment
CN107192331A (en) * 2017-06-20 2017-09-22 佛山市南海区广工大数控装备协同创新研究院 A kind of workpiece grabbing method based on binocular vision
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
CN108229456A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Method for tracking target and device, electronic equipment, computer storage media
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN108647573A (en) * 2018-04-04 2018-10-12 杭州电子科技大学 A kind of military target recognition methods based on deep learning
CN108656107A (en) * 2018-04-04 2018-10-16 北京航空航天大学 A kind of mechanical arm grasping system and method based on image procossing
CN108876855A (en) * 2018-05-28 2018-11-23 哈尔滨工程大学 A kind of sea cucumber detection and binocular visual positioning method based on deep learning
CN109034018A (en) * 2018-07-12 2018-12-18 北京航空航天大学 A kind of low latitude small drone method for barrier perception based on binocular vision
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
SHUXIN LI et al.: "Multiscale Rotated Bounding Box-Based Deep Learning Method for Detecting Ship Targets in Remote Sensing Images", Sensors, vol. 18, no. 08, 17 August 2018 (2018-08-17), pages 1-14 *
原彬理: "Image Matching and Positioning of Common Workpieces Based on Binocular Stereo Vision", China Masters' Theses Full-text Database, Information Science and Technology, no. 2019, 15 March 2019 (2019-03-15), pages 138-805 *
徐凯: "Research on Manipulator Positioning and Grabbing Technology Based on Binocular Vision", China Masters' Theses Full-text Database, Information Science and Technology, no. 2018, 15 June 2018 (2018-06-15), pages 138-1247 *
李传朋: "Research on Target Recognition and Grasp Positioning Based on Machine Vision and Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology, no. 2017, pages 138-356 *
马利 等: "Adaptive-Weight Stereo Matching Algorithm Suitable for Hardware Implementation", Journal of System Simulation, vol. 26, no. 09, pages 2079-2084 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393524A (en) * 2021-06-18 2021-09-14 常州大学 Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113393524B (en) * 2021-06-18 2023-09-26 常州大学 Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113689326A (en) * 2021-08-06 2021-11-23 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN113689326B (en) * 2021-08-06 2023-08-04 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN113689422A (en) * 2021-09-08 2021-11-23 理光软件研究所(北京)有限公司 Image processing method and device and electronic equipment
CN113808197A (en) * 2021-09-17 2021-12-17 山西大学 Automatic workpiece grabbing system and method based on machine learning
CN117409340A (en) * 2023-12-14 2024-01-16 上海海事大学 Unmanned aerial vehicle cluster multi-view fusion aerial photography port monitoring method, system and medium
CN117409340B (en) * 2023-12-14 2024-03-22 上海海事大学 Unmanned aerial vehicle cluster multi-view fusion aerial photography port monitoring method, system and medium

Similar Documents

Publication Publication Date Title
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
CN111768449A (en) Object grabbing method combining binocular vision with deep learning
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN104463108B (en) A kind of monocular real time target recognitio and pose measuring method
CN106570913B (en) monocular SLAM rapid initialization method based on characteristics
CN109767474B (en) Multi-view camera calibration method and device and storage medium
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN111897349B (en) Autonomous obstacle avoidance method for underwater robot based on binocular vision
CN107843251B (en) Pose estimation method of mobile robot
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN110555878B (en) Method and device for determining object space position form, storage medium and robot
TWI709062B (en) Virtuality reality overlapping method and system
CN112184811B (en) Monocular space structured light system structure calibration method and device
WO2020063058A1 (en) Calibration method for multi-degree-of-freedom movable vision system
CN112734863A (en) Crossed binocular camera calibration method based on automatic positioning
WO2022040983A1 (en) Real-time registration method based on projection marking of cad model and machine vision
CN110490943A (en) Quick method for precisely marking, system and the storage medium of 4D holography capture system
CN111213159A (en) Image processing method, device and system
CN110827321A (en) Multi-camera cooperative active target tracking method based on three-dimensional information
CN104346614A (en) Watermelon image processing and positioning method under real scene
CN113240749B (en) Remote binocular calibration and ranging method for recovery of unmanned aerial vehicle facing offshore ship platform
CN115345942A (en) Space calibration method and device, computer equipment and storage medium
Wang et al. Automatic measurement based on stereo vision system using a single PTZ camera
KR102107465B1 (en) System and method for generating epipolar images by using direction cosine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination