CN110889829B - Monocular distance measurement method based on fish eye lens - Google Patents

Monocular distance measurement method based on fish eye lens

Info

Publication number
CN110889829B
CN110889829B · Application CN201911090732.3A
Authority
CN
China
Prior art keywords
image
target object
contour
feature points
coordinate system
Prior art date
Legal status
Active
Application number
CN201911090732.3A
Other languages
Chinese (zh)
Other versions
CN110889829A (en)
Inventor
左伟
李晓丽
柯天成
宋奇奇
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201911090732.3A priority Critical patent/CN110889829B/en
Publication of CN110889829A publication Critical patent/CN110889829A/en
Application granted granted Critical
Publication of CN110889829B publication Critical patent/CN110889829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06T 5/80
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Abstract

The invention relates to a target-object ranging method based on a monocular fisheye camera, comprising the following steps: 1) take a picture with the fisheye lens to obtain a distorted image; 2) feed the distorted image into a trained neural network to obtain the bounding-box coordinates of the target object, and frame the target in the image according to these coordinates; 3) apply image processing to the framed region to obtain a contour map of the target object; 4) apply a feature-point detection algorithm to the contour map to obtain the target feature points; 5) obtain the corrected feature-point coordinates from the distorted-image correction formula; 6) using the known real-world distance between the object points corresponding to the image feature points, establish equations between the image and the world coordinate system through a coordinate-transformation formula, and solve them for the distance between the fisheye camera and the target object. Compared with traditional methods, the method offers low cost, a large recognition range, high detection speed and high accuracy.

Description

Monocular distance measurement method based on fish eye lens
Technical Field
The invention relates to a monocular distance measuring method based on a fisheye lens, and belongs to the field of optics and computer vision.
Background
In recent years visual sensors have attracted wide attention, and ranging methods based on them have become a research hotspot, because visual sensors collect a wide range of environmental information and are inexpensive and easy to use. According to the number of visual sensors used, visual ranging methods are mainly classified into monocular, binocular and multi-ocular ranging.
At present, acquiring high-precision depth information about a target generally requires a laser radar, but its high price keeps it mainly at the research-and-testing stage, still some distance from large-scale market application. With the rapid development of artificial intelligence in recent years, vision has gradually become a research focus, but several drawbacks have also been found: binocular ranging is limited by its baseline, which makes it hard to match the device size to the loading capacity of the traffic platform; RGB-D depth estimation has a short range, is difficult to apply in practice, is greatly affected by changes in the environment, and performs poorly outdoors; and ranging with an ordinary pinhole camera suffers from the camera's small field of view, which limits the visual information obtained and lowers recognition efficiency.
The fisheye camera is cheap and small, yet has an ultra-wide viewing angle that can reach 180 degrees or more: a single fisheye camera covers the shooting range of 2-3 ordinary cameras, and its pictures are rich in information, so it can effectively overcome the drawbacks of the above sensors. Using a monocular fisheye camera to obtain depth information has therefore become one of the research focuses in the field of computer vision.
Disclosure of Invention
The invention aims to provide a monocular distance measuring method based on a fisheye lens.
To solve the above technical problems, the invention provides a monocular ranging method based on a fisheye lens, comprising the following steps:
1) Take a picture with the fisheye lens to obtain a distorted image;
2) Feed the distorted image into a trained neural network to obtain the bounding-box coordinates of the target object, and frame the target in the image according to these coordinates;
3) Apply image processing to the framed image region to obtain a contour map of the target object;
4) Apply a feature-point detection algorithm to the contour map to obtain the target feature points;
5) Obtain the corrected feature-point coordinates from the distorted-image correction formula;
6) Using the known real-world distance between the object points corresponding to the image feature points, establish equations between the image and the world coordinate system through a coordinate-transformation formula, and solve them for the distance between the fisheye camera and the target object.
In the step 1), the angle of view of the fisheye lens is 180 degrees, and the center of the target object and the optical center of the fisheye lens are in the same horizontal plane.
In step 2), the algorithm applied to the distorted image is a deep-learning-based object detection algorithm comprising the following parts:
a) Extract high-level features of the distorted images in the training data set with the multichannel (depthwise separable) convolutions of MobileNetV2;
b) Fuse the features selected from different convolution layers with an FPN, and send the fused features into a classification sub-network and a localization sub-network to obtain the classification and localization errors;
c) Train the deep neural network with a loss function to obtain the trained, optimized model; the loss function has the form

L(x, θ) = (1/m) Σ_i [ α·1_i·L_reg(l_i) + β·L_cls(p_i) ]

where x is the input image, θ the model parameters, m the number of preset (anchor) boxes, α and β weights balancing the localization and classification losses, 1_i an indicator equal to 1 when preset box i is positive and 0 otherwise, l_i and p_i respectively the position offset and the label, L_reg the localization loss function and L_cls the classification loss function.
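As an illustration of the loss in part c), the following numpy sketch combines a localization and a classification term over m preset boxes. The smooth-L1 and cross-entropy choices, and all function and argument names, are assumptions rather than the patent's exact definitions:

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1 (Huber) loss, a common choice for the position loss L_reg."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x**2, ax - 0.5)

def detection_loss(loc_offsets, loc_targets, cls_probs, labels, positive,
                   alpha=1.0, beta=1.0):
    """Combined loss over m preset (anchor) boxes.

    loc_offsets, loc_targets: (m, 4) predicted and target box offsets l_i
    cls_probs: (m, C) predicted class probabilities p_i
    labels:    (m,) integer class labels
    positive:  (m,) 0/1 indicator -- 1 when the preset box is positive
    """
    m = len(labels)
    # Position loss only contributes for positive preset boxes.
    l_reg = (positive[:, None] * smooth_l1(loc_offsets - loc_targets)).sum()
    # Classification loss: cross-entropy against the labels.
    l_cls = -np.log(cls_probs[np.arange(m), labels] + 1e-12).sum()
    return (alpha * l_reg + beta * l_cls) / m
```

A perfect prediction drives both terms to zero; mis-localized positive boxes raise only the L_reg term.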
In step 3), contour recognition of the image comprises the following steps:
a) Read the image to be detected and convert it to grayscale, i.e. Gray = 0.299R + 0.587G + 0.114B;
b) Binarize the grayscale map with a set threshold;
c) Detect the contour of the grayscale map with the Canny operator, store the contour points in an array, and apply contour tracking to obtain a continuous contour point set P(i);
d) Draw the contours in black and white at the original image size using OpenCV.
In step 4), the contour-based feature-point detection solves for the feature points indirectly. First the target contour point set P(i) is decomposed onto the X and Y coordinate axes, giving two one-dimensional discrete curves x(i) and y(i); the curvature at each point of x(i) and y(i) is then computed by an interpolation method with interpolation step length L. Once the curvature is obtained, multi-scale analysis at small scales improves the detection precision and the algorithm's robustness to noise, and an adaptive threshold yields the feature points of the target object.
In step 5), correcting the distorted image requires the distortion-coefficient matrix K = [k_1, k_2, ..., k_5] obtained by Zhang Zhengyou's calibration method.
Zhang Zhengyou's method uses a printed checkerboard with known black-and-white spacing attached to a flat plate; 10-20 photographs of the checkerboard are taken, the feature points are detected in the pictures with Harris features, and the intrinsic parameters and distortion coefficients of the fisheye lens are finally computed by analytic-solution estimation.
In step 6), the conversion between the pixel coordinate system and the world coordinate system is

s·[u, v, 1]^T = [[f/dx, 0, c_x], [0, f/dy, c_y], [0, 0, 1]] · [R | t] · [X_w, Y_w, Z_w, 1]^T

where (u, v) are pixel coordinates, dx and dy the horizontal and vertical sizes of one image pixel, (c_x, c_y) the principal point, f the focal length of the fisheye lens, R the rotation matrix, t the translation vector, and (X_w, Y_w, Z_w) the coordinates in the world coordinate system.
Compared with the prior art, the invention has the following advantages:
1. Low cost: the whole ranging process can be completed with as little as one monocular camera and one embedded device;
2. Large recognition range: by adopting a wide-angle fisheye lens, the invention achieves a wider detection range and higher efficiency than traditional algorithms for the same number of lenses;
3. Fast detection and high accuracy: compared with traditional object detection algorithms, the lightweight deep neural network used here is faster and more accurate, and solving the target feature points with the indirect method recognizes them more quickly while preserving accuracy.
Drawings
FIG. 1 is a flow chart of a monocular ranging method based on a fisheye lens of the present invention;
FIG. 2 is a diagram of a target detection network according to the present invention;
FIG. 3 is a schematic diagram of an imaging of a fisheye lens of the invention;
fig. 4 is a schematic diagram of ranging for establishing mathematical equations between coordinate systems in the present invention.
Detailed Description
In order to more clearly illustrate the advantages of the present invention and the implementation of the embodiments, the present invention will be further described below with reference to specific examples and drawings. It should be understood that the following examples are not intended to limit the practice of the invention, but are merely illustrative of the invention.
The invention provides a target detection and ranging method based on a monocular fisheye lens, which collects images by taking the monocular fisheye lens as a detection sensor, wherein the specific implementation flow is shown in figure 1, and the method comprises the following steps:
taking a picture by using a fish-eye lens to obtain a distorted image:
the angle of view of the fish-eye lens is 180 degrees, and the center of the target object and the optical center of the fish-eye lens are in the same horizontal plane. Depending on the application, it is necessary to take photographs using fish-eye lenses at three locations:
1) 10-20 black-and-white chessboard pictures with the interval of 20mm are shot by using a fisheye lens so as to be calibrated by using a Zhang Zhengyou calibration method, and the distortion coefficient K= [ K ] of the fisheye lens is obtained 1 ,k 2 ,k 3 ,k 4 ,k 5 ]And internal parameters [ dx, dy, c x ,c y ];
2) With the optical centre of the fisheye lens and the centre of the target object at the same height, shoot about 1000 photos of different scenes from different angles and distances to train the image-detection deep network model. The structure of the deep network model is shown in Fig. 2:
The deep-convolution part adopts the multichannel (depthwise separable) convolution of MobileNetV2 to extract high-level features, i.e. feature maps, of the distorted images in the training data set. Compared with traditional convolution, it factorizes the convolution over spatial regions and channels, which reduces the parameter count and computes faster;
performing feature fusion on feature graphs screened by different convolution layers by using FPN, and sending the fused features into a classification sub-network and a positioning sub-network to obtain classification and positioning errors;
deep neural network learning training is performed by adopting a loss function, wherein the loss function is as follows:
wherein x is an input image, θ is a model parameter, m is a preset frame number, α and β are weights for balance positioning and classification loss,when the preset frame is positive, the preset frame value is 1, otherwise, the preset frame value is 0,l i And p i Respectively position offset and label; l (L) reg Representing a position loss function, L cls Representing a classification loss function.
3) A picture is directly shot on the target, and the method is used for measuring the distance of the target.
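The parameter savings of the depthwise separable convolutions mentioned above can be illustrated with a quick count (a sketch; the layer sizes are illustrative, not from the patent):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution layer."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel) followed
    by a 1 x 1 pointwise convolution -- the factorization MobileNetV2 uses."""
    return c_in * k * k + c_in * c_out

# e.g. a 32 -> 64 channel layer with 3x3 kernels:
# standard: 18432 parameters; separable: 2336 parameters (about 8x fewer)
```

This roughly k²-fold reduction is why the detector runs quickly enough for embedded ranging.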
Step two: send the distorted image into the trained neural network to obtain the bounding-box coordinates of the target object:
The box coordinates are expressed as (x, y, w, h), where (x, y) is the top-left vertex of the bounding box and (w, h) its width and height; with these the target object can be framed in the image.
Step three: perform image processing on the framed image area to obtain a contour map of the target object:
1) Read the image to be detected and convert it to grayscale, i.e. Gray = 0.299R + 0.587G + 0.114B;
2) Binarize the grayscale map with a set threshold;
3) Using the box coordinates from step two, set the pixel values outside the box to 0, then detect the contour on the grayscale map with the Canny operator, store the contour points in an array, and apply contour tracking to obtain a continuous contour point set P(i);
4) Draw the contours in black and white at the original image size using OpenCV.
Step four, using a feature point detection algorithm to the contour map to obtain target feature points:
the feature point detection method based on the outline mainly solves feature points through an indirect method. Firstly, decomposing a target object contour point set P (i) to X, Y coordinate axes to obtain two one-dimensional discrete curves X (i) and Y (i), and then solving the concave rate of each point of the curves X (i) and Y (i) by an interpolation method:
wherein L is the interpolation step length; after the concave rate is obtained, the detection precision and the robustness of the algorithm to noise are improved by utilizing multiple scales under the small scale, and meanwhile, the characteristic points of the target object can be obtained by utilizing the self-adaptive threshold value.
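Since the interpolation formula itself is not reproduced in the source text, the following is only a sketch of the idea: estimate curvature on the decomposed curves x(i), y(i) with a second difference of step L, and keep points above an adaptive threshold (the mean-plus-k·std threshold and step-L second difference are assumptions):

```python
import numpy as np

def contour_feature_points(points, L=3, k=2.0):
    """Decompose contour P(i) into x(i), y(i) and pick high-curvature points.

    points: (N, 2) array of contour coordinates.
    L: interpolation step; k: adaptive-threshold factor (both assumptions).
    """
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    # Second difference with step L approximates curvature on each axis;
    # the closed contour is indexed cyclically via np.roll.
    cx = np.roll(x, L) + np.roll(x, -L) - 2 * x
    cy = np.roll(y, L) + np.roll(y, -L) - 2 * y
    c = np.hypot(cx, cy)
    thresh = c.mean() + k * c.std()          # adaptive threshold
    return np.flatnonzero(c > thresh)
```

On a square contour, for example, the detected indices cluster at the four corners while the straight edges (zero second difference) are rejected.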
Fifthly, obtaining corrected feature point coordinates according to a distorted image correction formula:
the principle of fisheye lens imaging is shown in figure 3. In reality, a target P is imaged on a P 'point of an image plane through a fisheye lens optical center, the length of O' P 'is set as r', and the target P is combined with a distortion coefficient K and is formed by:
r′(θ)=k 1 θ+k 2 θ 3 +…+k 5 θ 9
the angle of incidence θ is available, so Op length r=ftan θ. The p ' point coordinates (x ', y ') are also known, so thatThe size, and then the normal point p coordinate under the pinhole model is obtained:
step six, establishing a mathematical equation between the image and a world coordinate system by using a coordinate conversion formula according to the known distance between the feature points of the object in reality corresponding to the feature points in the image, and solving to obtain the distance between the fisheye camera and the target object:
the conversion expression between the pixel coordinate system and the world coordinate system is known as:
assuming that the world coordinate system is located at the position shown in fig. 4, r=i, t= [0 d] T The above formula can be:
d can be obtained by integrating the coordinates of the known characteristic points and the length between the coordinates.
Selecting the midpoint of the two characteristic points as the mass center of the target, and then the distance from the mass center to the origin point in the world coordinate systemThe distance of the camera's optical centre to the centre of mass of the object +.>
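Under the simplification R = I used above, the depth and centroid distance can be sketched with similar triangles (a reading of the patent's setup; dx, dy defaults and the helper names are assumptions):

```python
import numpy as np

def target_distance(p1, p2, real_dist, f, dx=1.0, dy=1.0):
    """Depth d of the target from two corrected feature points.

    p1, p2: corrected pixel coordinates of the two feature points;
    real_dist: known physical distance between the corresponding object points;
    f: focal length; dx, dy: metric size of one pixel.
    """
    du = (p1[0] - p2[0]) * dx
    dv = (p1[1] - p2[1]) * dy
    pix_dist = np.hypot(du, dv)                   # separation on the image plane
    return f * real_dist / pix_dist               # similar triangles

def camera_to_centroid(p1, p2, d, f, c, dx=1.0, dy=1.0):
    """Distance from the optical centre to the target centroid (the midpoint
    of the two feature points), combining depth d with the lateral offset."""
    mu = 0.5 * (p1[0] + p2[0]) - c[0]             # midpoint offset from the
    mv = 0.5 * (p1[1] + p2[1]) - c[1]             # principal point (c_x, c_y)
    lateral = d * np.hypot(mu * dx, mv * dy) / f  # back-projected offset
    return np.hypot(d, lateral)
```

For a target centred on the optical axis the lateral term vanishes and the camera-to-centroid distance equals the depth d.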

Claims (7)

1. The monocular distance measurement method based on the fish eye lens is characterized by comprising the following steps of:
taking a picture by using a fisheye lens to obtain a distorted image;
step two, sending the distorted image into a trained neural network to obtain the bounding-box coordinates of the target object, and framing the target object in the image according to the obtained bounding-box coordinates;
in the second step, the algorithm used for the distorted image is a target detection algorithm based on deep learning, and the algorithm comprises the following parts:
extracting advanced features of distorted images in the training data set by adopting a multichannel convolution method in MobileNet V2;
performing feature fusion on features screened by different convolution layers by using FPN, and sending the fused features into a classification sub-network and a positioning sub-network to obtain classification and positioning errors;
deep neural network learning and training are carried out with a loss function to obtain a trained, optimized model, wherein the loss function has the form

L(x, θ) = (1/m) Σ_i [ α·1_i·L_reg(l_i) + β·L_cls(p_i) ]

wherein x is the input image, θ the model parameters, m the number of preset boxes, α and β weights balancing the localization and classification losses, 1_i an indicator equal to 1 when the preset box is positive and 0 otherwise, l_i and p_i respectively the position offset and the label, L_reg the localization loss function and L_cls the classification loss function;
thirdly, performing image processing on the framed image area to obtain a contour map of the target object;
step four, applying a feature-point detection algorithm to the contour map to obtain the target feature points;
fifthly, obtaining corrected feature point coordinates according to a distorted image correction formula;
and step six, establishing a mathematical equation between the image and a world coordinate system by using a coordinate conversion formula according to the known distance between the feature points of the object in reality corresponding to the feature points in the image, and solving to obtain the distance between the fisheye camera and the target object.
2. The method of claim 1, wherein in the first step, the angle of view of the fisheye lens is 180 degrees, and the center of the target object is in the same horizontal plane as the optical center of the fisheye lens.
3. The method of monocular distance measurement based on fish-eye lens according to claim 1, wherein in the third step, the contour recognition of the image comprises the steps of:
reading the image to be detected and converting it to grayscale, i.e. Gray = 0.299R + 0.587G + 0.114B;
Setting a threshold value to carry out binarization processing on the gray level map;
detecting the contour of the grayscale map with the Canny operator, storing the contour points in an array, and applying contour tracking to obtain a continuous contour point set P(i);
drawing the contours in black and white at the original image size using OpenCV.
4. The method for monocular distance measurement based on fish-eye lens according to claim 1, wherein in the fourth step, the contour-based feature point detection method solves the feature points by an indirect method, and comprises the following steps:
firstly, decomposing the target contour point set P(i) onto the X and Y coordinate axes to obtain two one-dimensional discrete curves x(i) and y(i), and then solving the curvature at each point of x(i) and y(i) by an interpolation method with interpolation step length L;
after the curvature is obtained, multi-scale analysis at small scales improves the detection precision and the algorithm's robustness to noise, and an adaptive threshold yields the feature points of the target object.
5. The method of monocular distance measurement based on fish-eye lens according to claim 1, wherein in the fifth step the distorted image is corrected using the distortion-coefficient matrix K = [k_1, k_2, ..., k_5] obtained by Zhang Zhengyou's calibration method;
Zhang Zhengyou's calibration method: a printed checkerboard with known black-and-white spacing is attached to a flat plate, 10-20 pictures of the checkerboard are taken, the feature points are detected in the pictures with Harris features, and the intrinsic parameters and distortion coefficients of the fisheye lens are finally computed by analytic-solution estimation.
6. The method of claim 1, wherein in the sixth step the conversion between the pixel coordinate system and the world coordinate system is

s·[u, v, 1]^T = [[f/dx, 0, c_x], [0, f/dy, c_y], [0, 0, 1]] · [R | t] · [X_w, Y_w, Z_w, 1]^T

wherein (u, v) are pixel coordinates, dx and dy the horizontal and vertical sizes of one image pixel, (c_x, c_y) the principal point, f the focal length of the fisheye lens, R the rotation matrix, t the translation vector, and (X_w, Y_w, Z_w) the coordinates in the world coordinate system.
7. The method of claim 2, wherein the target object is a cube.
CN201911090732.3A · Priority/filing date 2019-11-09 · Monocular distance measurement method based on fish eye lens · Active · CN110889829B


Publications (2)

CN110889829A — published 2020-03-17
CN110889829B — published 2023-11-03






Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant