CN107560592A - Precise distance measurement method for photoelectric tracker linkage target - Google Patents

Precise distance measurement method for photoelectric tracker linkage target

Info

Publication number
CN107560592A
CN107560592A
Authority
CN
China
Prior art keywords
target
optronic
point
matching
video camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710718983.6A
Other languages
Chinese (zh)
Other versions
CN107560592B (en)
Inventor
贾会梅
吴壮志
张锐
秦建峰
梁涛
白晓波
王向阳
王李凡
刘洋
徐妙语
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Costar Group Co Ltd
Original Assignee
Henan Costar Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Costar Group Co Ltd filed Critical Henan Costar Group Co Ltd
Priority to CN201710718983.6A priority Critical patent/CN107560592B/en
Publication of CN107560592A publication Critical patent/CN107560592A/en
Application granted granted Critical
Publication of CN107560592B publication Critical patent/CN107560592B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40: Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Abstract

The invention discloses a precise distance measurement method for a photoelectric tracker linkage target. Multiple photoelectric tracker cameras are calibrated; moving-object detection is performed on the video image of each photoelectric tracker, and SIFT features of the target region are extracted; the feature points extracted from the video images of the different photoelectric trackers are matched; three-dimensional reconstruction is carried out from the camera parameters and the matched feature-point pairs; finally, the average distance from the reconstructed three-dimensional points to the camera is computed as the target distance. Compared with the prior art, the invention adopts a passive ranging method: against the background of target detection at land and sea border-defense video monitoring stations, and based on the principle of binocular stereo vision, ranging is performed on video images collected by multiple photoelectric trackers at a monitoring station. The method is low in cost, easy to carry, strongly resistant to interference, and simple and effective.

Description

Precise distance measurement method for photoelectric tracker linkage target
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a precise distance measurement method for a photoelectric tracker linkage target.
Background technology
In modern military operations, counter-terrorism, anti-smuggling and similar activities, fast and accurate identification and localisation of reconnaissance targets has become a matter of primary concern, and accurate estimation of the target distance is an important step in it, with significant military and civilian value. There are many target ranging methods; according to whether energy must be actively applied to the measured object, they can be divided into active ranging and passive ranging. Active ranging methods project a laser or other beam, or white light with a certain texture structure, onto the target, and determine the target distance by analysing the propagation time of the beam or the texture deformation of the reflected light; examples include laser ranging, structured light, the Moiré fringe method and phase comparison. Their advantage is higher accuracy; their drawbacks are that the emitted waves may injure the target or expose the observer, and that the equipment is complex, not easily portable and expensive. Passive ranging methods instead estimate the target distance directly from video image information of the target; they include stereo vision, motion ranging and monocular ranging.
At present, China's land border and coastline have essentially been covered by a full-coverage, all-weather, 24-hour video surveillance network. If video monitoring stations are built at key road sections and key maritime areas, and detection means such as white light, infrared, radar and laser are used in an integrated way, the information-gathering capability for land and sea border defense will be greatly improved, which is of great significance for strengthening land and sea border-defense management and control.
Summary of the invention
To overcome the above drawbacks, the object of the present invention is to provide a precise distance measurement method for a photoelectric tracker linkage target that has low input cost and strong anti-interference capability, is simple and effective, and gives comparatively accurate results.
To achieve the above object, the technical solution adopted by the present invention is a precise distance measurement method for a photoelectric tracker linkage target, characterised by comprising the following steps:
1) Calibrating the photoelectric tracker cameras: the photoelectric tracker cameras C1 and C2 are each calibrated using Zhang Zhengyou's method; according to the pinhole imaging model, the relation between camera image pixel positions and scene point positions is established; from the camera model, the model parameters are solved using the image coordinates and world coordinates of known calibration points, yielding the intrinsic and extrinsic parameters of cameras C1 and C2;
2) Image acquisition and moving-object detection: the video images I1 and I2 of the two photoelectric trackers are acquired synchronously; motion detection is performed on I1 and I2 with the background-subtraction method, the current frame is differenced against the background image to obtain a difference image, the background and the moving target are then separated by building a Gaussian mixture model of the background, and the moving-target regions A1 and A2 are extracted;
3) Extracting the features of regions A1 and A2: the SIFT feature point sets P1 and P2 of the moving-target regions A1 and A2 are extracted with the SIFT feature extraction algorithm; through the steps of scale-space extremum detection, keypoint localisation and keypoint orientation assignment, a 128-dimensional feature descriptor is generated for each keypoint;
4) Feature matching: with the fundamental matrix as the model and the RANSAC parameter estimation algorithm, feature matching is completed in two stages, coarse matching and fine matching, yielding the set S of matched point pairs between P1 and P2;
5) Three-dimensional reconstruction: after the set S of correct matched point pairs has been obtained, three-dimensional reconstruction of the feature points is carried out with the calibrated camera model using the SVD method, rebuilding the three-dimensional point set corresponding to the matched point pairs;
6) Distance computation: after the three-dimensional reconstruction is completed, a three-dimensional point set Q on the moving target is obtained; the average distance from each point in Q to the optical centre of camera C1 or C2 is taken as the precise distance from the target to the camera.
Preferably, when the number of SIFT feature points extracted in step 3) is insufficient, Harris feature points can be added as a supplement, making the details of the reconstruction result more prominent and closer to the real object.
The present invention uses a passive ranging method: against the background of target detection at land and sea border-defense video monitoring stations, and based on the principle of binocular stereo vision, video images collected by multiple photoelectric trackers at a monitoring station are used for ranging. The method is low in cost, easy to carry, strongly resistant to interference, and simple and effective.
Compared with the prior art, the target ranging method proposed by the present invention has the following advantages:
1) A passive ranging method estimates the target distance directly from video image information of the target, and is covert, free of electromagnetic interference and safe, which are advantages in the field of key-point defence;
2) Compared with existing active ranging techniques, provided the requirements on ranging distance and accuracy are met, the cost of use is lower and the method is simpler and more effective;
3) The three-dimensional reconstruction of binocular stereo vision imitates the way human vision processes a scene, can perceive the stereoscopic information of a three-dimensional scene under a variety of conditions, is easy to operate and highly accurate in distance measurement, and can be widely applied in fields such as key-point defence and security monitoring.
Brief description of the drawings
The features of the present invention are further described below with reference to the drawings and embodiments.
Fig. 1 is a schematic diagram of the workflow of the present invention.
Fig. 2 is the data flowchart of photoelectric tracker calibration in the present invention.
Fig. 3 is the data flowchart of target ranging in the present invention.
Embodiment
Referring to Figs. 1, 2 and 3, an embodiment of the present invention is described.
This embodiment performs target ranging on the three-dimensional reconstruction principle of binocular stereo vision; the specific technical implementation comprises the following steps:
1) Camera calibration stage: the photoelectric tracker cameras C1 and C2 are each calibrated with Zhang Zhengyou's planar calibration method. Specific process: the camera photographs a planar target from different angles, and the camera and the planar target may both move freely while the camera intrinsics are kept constant during calibration; the feature points in the images are detected; the intrinsic and extrinsic parameters of the camera are computed from the basic formulas of Zhang's calibration method; since the lens has distortion, the computed parameters are used as initial values of an optimising search that yields the distortion coefficients, so that exact values of all parameters are obtained. The specific data flow is shown in Fig. 2;
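As an illustrative sketch of this calibration stage (not part of the original patent text), Zhang's planar method is implemented in OpenCV; the chessboard pattern size, square size and image paths below are assumptions:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the chessboard target (assumed)
SQUARE = 0.025     # square size in metres (assumed)

# World coordinates of the chessboard corners (the target is the Z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in glob.glob("calib_c1/*.png"):   # calibration shots of camera C1 (assumed path)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine the detected corner positions to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Solves the intrinsics K, the distortion coefficients and per-view extrinsics
# (R, t), refining all parameters by non-linear optimisation as the text describes.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```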
2) Three-dimensional reconstruction stage: after camera calibration, the matched points extracted from at least two images are combined into simultaneous equations to carry out three-dimensional reconstruction, mainly comprising the steps of image acquisition, target detection, SIFT feature extraction, feature matching and feature-point three-dimensional reconstruction; the data flow is shown in Fig. 3. Specific process:
A. The moving target is obtained with the background-subtraction method: the difference between adjacent frames is computed, and the background and the moving target are then separated by building a Gaussian mixture model of the background, achieving detection of the moving target in the image sequence;
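A minimal sketch of this detection step (illustrative, not from the patent), using OpenCV's Gaussian-mixture background subtractor in place of the frame-differencing-plus-mixture-model pipeline described above; the video path and area threshold are assumptions:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("tracker_c1.avi")   # video of tracker C1 (assumed path)
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask from the per-pixel Gaussian mixture background model.
    fg = mog.apply(frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest blob as the moving-target region A.
    big = [c for c in contours if cv2.contourArea(c) > 200]   # area threshold assumed
    if big:
        x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
        target_region = frame[y:y + h, x:x + w]
```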
B. The image feature points are obtained with the SIFT feature extraction algorithm: first, scale-space extremum detection: the representations of the image at different scales are obtained by convolving the two-dimensional image with Gaussian kernels, and the DoG (difference-of-Gaussians) pyramid of the image is built; then, keypoint localisation: each candidate point is compared with its 26 neighbouring points, local extrema are extracted, the position and scale of each feature point are refined by fitting a three-dimensional quadratic function, and edge-response points and low-contrast points are rejected to improve matching stability; at the same time, to guarantee the rotational invariance of the feature points, the gradient magnitudes and orientations of each feature point within a 3σ range of the Gaussian image are collected into an orientation histogram, from which the dominant orientation of the feature point is determined; finally, the gradient information in 8 orientations is computed over 4×4 windows in the scale space around the feature point, generating a 128-dimensional feature descriptor.
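A minimal sketch of the SIFT extraction on the two target regions, assuming OpenCV 4.4+ where SIFT ships in the main module; the file names are assumptions:

```python
import cv2

sift = cv2.SIFT_create()

# A1 and A2 are the moving-target regions cropped from I1 and I2.
a1 = cv2.imread("region_a1.png", cv2.IMREAD_GRAYSCALE)   # assumed file
a2 = cv2.imread("region_a2.png", cv2.IMREAD_GRAYSCALE)   # assumed file

# Each keypoint carries position, scale and orientation; each descriptor
# is the 128-dimensional vector described in the text.
kp1, des1 = sift.detectAndCompute(a1, None)
kp2, des2 = sift.detectAndCompute(a2, None)
print(len(kp1), "and", len(kp2), "SIFT feature points extracted")
```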
3) After the SIFT feature point sets P1 and P2 of the target regions A1 and A2 of the two images have been extracted, feature matching is performed with an improved RANSAC algorithm: to guarantee matching accuracy, it is completed in two stages, coarse matching and fine matching.
Coarse matching stage: the nearest-neighbour (NN) matching algorithm based on Euclidean distance is used.
A kd-tree index is built over the feature point set P2; then, for each feature point a in P1, the kd-tree is searched for the nearest-neighbour feature vector b1 and the second-nearest-neighbour feature vector b2 of a; if the ratio of the nearest-neighbour distance to the second-nearest-neighbour distance is below a given threshold T (T = 0.6 in the experiments), b1 is accepted as the match of a; otherwise a is considered to have no match in P2. Coarse matching yields the set S1 of candidate matched point pairs;
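A minimal sketch of this coarse-matching stage, using OpenCV's FLANN kd-tree matcher over the SIFT descriptors from the previous sketch; the ratio threshold T = 0.6 follows the text:

```python
import cv2

# kd-tree index over the descriptors of P2 (FLANN algorithm 1 = KDTREE).
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
knn = flann.knnMatch(des1, des2, k=2)   # nearest + second-nearest for each a

T = 0.6   # ratio threshold from the text
# Accept b1 only if d(a, b1) / d(a, b2) < T.
s1 = [p[0] for p in knn if len(p) == 2 and p[0].distance < T * p[1].distance]
pts1 = [kp1[m.queryIdx].pt for m in s1]
pts2 = [kp2[m.trainIdx].pt for m in s1]
print(len(s1), "candidate matched point pairs in S1")
```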
Fine matching stage: the epipolar geometry constraint is used to reject the false matches in S1. Any matched point pair in two images of the same scene satisfies the epipolar geometry constraint, and this constraint can be expressed mathematically by the fundamental matrix. The fundamental matrix F between two images is a 3 × 3 singular matrix of rank 2 with 7 degrees of freedom; a matched point pair (a, b) of the two images satisfies aᵀFb = 0. Since each matched point pair yields one linear equation in the unknown entries of F, if the two images contain enough matched points (> 7), the unknown fundamental matrix F can be computed from their simultaneous equations; the present invention solves for the fundamental matrix with an improved 8-point algorithm.
The criterion for false matches in the RANSAC algorithm is as follows: with the fundamental matrix estimated by the 8-point algorithm, the perpendicular distance from a feature point to its corresponding epipolar line is used as the constraint (less than 1.5 pixels); matched pairs that do not satisfy the constraint are regarded as false matches, also called outliers (correct matched point pairs are called inliers). The RANSAC algorithm holds that, as long as N random samples are drawn (N sufficiently large), at least one uncontaminated sample is guaranteed at a certain confidence level P (typically 0.99); the minimum number of samples N satisfies

    N ≥ log(1 - P) / log(1 - (1 - ε)^k),

where ε is the data error rate (the proportion of outliers) and k is the minimum number of data items required to solve the model.
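As a small illustration (not part of the patent text), the sampling bound can be evaluated directly; P = 0.99 and k = 8 follow the text, while the outlier rate ε = 0.5 is an assumption:

```python
import math

def ransac_trials(P=0.99, eps=0.5, k=8):
    """Smallest N such that, with confidence P, at least one of N
    random k-point samples contains no outliers (outlier rate eps)."""
    return math.ceil(math.log(1.0 - P) / math.log(1.0 - (1.0 - eps) ** k))

print(ransac_trials())   # 1177 samples for eps = 0.5, k = 8
```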
The false-match rejection process of the RANSAC algorithm is as follows (a sketch using a built-in estimator follows the list):
I) According to the confidence level and the data error rate, compute the number of samples N that must be drawn;
II) Draw 8 matched point pairs uniformly at random from the coarse matched-pair set S1, and compute an approximate fundamental matrix Fran with the improved 8-point algorithm;
III) Verify Fran against every pair of corresponding points not drawn in the sample; if the epipolar distance is sufficiently small, the pair is considered a correct matched point pair consistent with Fran; recompute the data error rate and update N;
IV) Repeat steps II and III until N samples have been processed;
V) According to the largest number of correct matched point pairs and the smallest error variance, select the optimal parameter model FR from the candidate approximate fundamental matrices Fran;
VI) Compute the set S of correct matched point pairs with FR, and recompute the final fundamental matrix F from S.
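A minimal sketch of this fine-matching stage, assuming OpenCV's built-in RANSAC fundamental-matrix estimator in place of the improved algorithm described above; the 1.5-pixel epipolar-distance threshold and 0.99 confidence follow the text, and pts1/pts2 continue from the coarse-matching sketch:

```python
import cv2
import numpy as np

p1 = np.float32(pts1)   # coarse matches from the ratio test
p2 = np.float32(pts2)

# RANSAC over 8-point samples: a pair is an inlier when its distance to
# the corresponding epipolar line is below 1.5 px, at confidence 0.99.
F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.5, 0.99)

inliers = mask.ravel().astype(bool)
s_pts1, s_pts2 = p1[inliers], p2[inliers]   # correct matched-pair set S
print(inliers.sum(), "of", len(p1), "coarse matches kept as inliers")
```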
4) After the set S of correct matched point pairs has been obtained, since the intrinsic and extrinsic parameters of the two photoelectric tracker cameras C1 and C2 have already been calibrated, the corresponding three-dimensional point set is reconstructed from the feature-matching result.
A three-dimensional scene point M and its image point m in the two-dimensional image plane satisfy the central projection relation

    s·m' = K[R | t]·M' = P·M',

where m' and M' are the homogeneous coordinates of m and M, and s is a scale factor. K is the camera intrinsic matrix, whose parameters depend only on the internal structure of the camera; [R | t] is the camera extrinsic matrix, describing the positional relation between the camera coordinate system and the world coordinate system; and P = K[R | t] is called the projection matrix.
For any matched pair, substituting a and b respectively into the above formula gives a system of simultaneous equations from which the three-dimensional point M corresponding to (a, b) can be solved. Because (a, b) is a matched point pair, the system has a unique solution in theory; in practical applications, however, owing to the influence of calibration errors and the like, it may have no exact solution, so the optimal solution is determined with the SVD decomposition method;
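A minimal sketch of this triangulation step, assuming the projection matrices are assembled as K[R|t] from the calibration stage (K1, R1, t1, K2, R2, t2 assumed known); the SVD least-squares solution follows the standard linear (DLT) construction:

```python
import numpy as np

# P = K[R|t] for each calibrated camera.
P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])
P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])

def triangulate(P1, P2, a, b):
    """Linear (DLT) triangulation: stack the equations obtained by
    substituting a and b into s*m = P*M, then take the SVD
    least-squares solution as the optimal 3-D point."""
    A = np.stack([
        a[0] * P1[2] - P1[0],
        a[1] * P1[2] - P1[1],
        b[0] * P2[2] - P2[0],
        b[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1]
    return M[:3] / M[3]   # dehomogenise to a 3-D point

# Sparse 3-D point set Q, one point per inlier pair in S.
Q = np.array([triangulate(P1, P2, a, b) for a, b in zip(s_pts1, s_pts2)])
```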
5) Target distance computation stage: after three-dimensional reconstruction has been carried out for every matched pair in the matched-pair set, the three-dimensional point set on the moving target is obtained, and the average distance from all points in the three-dimensional point set to the camera optical centre is the target distance.
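A short illustration of this final computation (an assumption-laden sketch, not patent text): the optical centre of a camera with extrinsics (R, t) is C = -Rᵀt, and the target distance is the mean distance from the reconstructed points in Q to it; R1 and t1 are assumed to be the extrinsics of camera C1 from the calibration stage:

```python
import numpy as np

# Optical centre of camera C1: the point where R1 @ C + t1 = 0.
C1_centre = -R1.T @ t1.ravel()

# Mean distance from the reconstructed target points to the optical
# centre, reported as the target distance.
target_range = np.linalg.norm(Q - C1_centre, axis=1).mean()
print(f"estimated target distance: {target_range:.2f} (world units)")
```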
In this embodiment, a pedestrian is used as the moving target, and a linkage test is carried out with two identically configured photoelectric trackers: each photoelectric tracker is calibrated with Zhang Zhengyou's method of the camera calibration stage, with the camera intrinsics kept constant during calibration, yielding the intrinsic and extrinsic parameters of the photoelectric trackers. Target ranging is then performed: with the photoelectric trackers stationary, the moving target is detected and two target images are captured; feature points are extracted and matched; the sparse 3D point cloud of the target is obtained by three-dimensional reconstruction; and the average distance from all points in the three-dimensional point set to the camera optical centre is computed. The test results show that the proposed algorithm is a feasible passive ranging method.
The above is only a preferred embodiment of the present invention, and the above specific embodiment does not limit the present invention; any modification, variation or equivalent substitution made on the basis of the above by a person of ordinary skill in the art falls within the scope of protection of the present invention.

Claims (2)

1. A precise distance measurement method for a photoelectric tracker linkage target, in which two photoelectric trackers work in coordination and, after moving-target information has been detected and the moving target captured, a three-dimensional image of the moving target is established on the basis of the three-dimensional reconstruction principle and the average distance from the three-dimensional points to the camera optical centre is computed to obtain the precise distance from the target to the camera, characterised by comprising the following steps:
1) Calibrating the photoelectric tracker cameras: the photoelectric tracker cameras C1 and C2 are each calibrated using Zhang Zhengyou's method; according to the pinhole imaging model, the relation between camera image pixel positions and scene point positions is established; from the camera model, the model parameters are solved using the image coordinates and world coordinates of known calibration points, yielding the intrinsic and extrinsic parameters of cameras C1 and C2;
2) Image acquisition and moving-object detection: the video images I1 and I2 of the two photoelectric trackers are acquired synchronously; motion detection is performed on I1 and I2 with the background-subtraction method, the current frame is differenced against the background image to obtain a difference image, the background and the moving target are then separated by building a Gaussian mixture model of the background, and the moving-target regions A1 and A2 are extracted;
3) Extracting the features of regions A1 and A2: the SIFT feature point sets P1 and P2 of the moving-target regions A1 and A2 are extracted with the SIFT feature extraction algorithm; through the steps of scale-space extremum detection, keypoint localisation and keypoint orientation assignment, a 128-dimensional feature descriptor is generated for each keypoint;
4) Feature matching: with the fundamental matrix as the model and the RANSAC parameter estimation algorithm, feature matching is completed in two stages, coarse matching and fine matching, yielding the set S of matched point pairs between P1 and P2;
5) Three-dimensional reconstruction: after the set S of correct matched point pairs has been obtained, three-dimensional reconstruction of the feature points is carried out with the calibrated camera model using the SVD method, rebuilding the three-dimensional point set corresponding to the matched point pairs;
6) Distance computation: after the three-dimensional reconstruction is completed, a three-dimensional point set Q on the moving target is obtained; the average distance from each point in Q to the optical centre of camera C1 or C2 is taken as the precise distance from the target to the camera.
2. The precise distance measurement method for a photoelectric tracker linkage target according to claim 1, characterised in that: when the number of SIFT feature points extracted in step 3) is insufficient, Harris feature points can be added as a supplement.
CN201710718983.6A 2017-08-21 2017-08-21 Precise distance measurement method for photoelectric tracker linkage target Active CN107560592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710718983.6A CN107560592B (en) 2017-08-21 2017-08-21 Precise distance measurement method for photoelectric tracker linkage target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710718983.6A CN107560592B (en) 2017-08-21 2017-08-21 Precise distance measurement method for photoelectric tracker linkage target

Publications (2)

Publication Number Publication Date
CN107560592A true CN107560592A (en) 2018-01-09
CN107560592B CN107560592B (en) 2020-08-18

Family

ID=60976143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710718983.6A Active CN107560592B (en) 2017-08-21 2017-08-21 Precise distance measurement method for photoelectric tracker linkage target

Country Status (1)

Country Link
CN (1) CN107560592B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108333591A (en) * 2018-01-18 2018-07-27 中国科学院苏州纳米技术与纳米仿生研究所 A kind of distance measuring method and its system
CN109827547A (en) * 2019-03-27 2019-05-31 中国人民解放军战略支援部队航天工程大学 A kind of distributed multi-sensor extraterrestrial target synchronization association method
CN110197104A (en) * 2018-02-27 2019-09-03 杭州海康威视数字技术股份有限公司 Distance measuring method and device based on vehicle
CN111046842A (en) * 2019-12-27 2020-04-21 京东数字科技控股有限公司 Method and apparatus for changing information
CN111080685A (en) * 2019-12-17 2020-04-28 北京工业大学 Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN113382223A (en) * 2021-06-03 2021-09-10 浙江大学 Underwater three-dimensional imaging device and method based on multi-view vision
CN113484864A (en) * 2021-07-05 2021-10-08 中国人民解放军国防科技大学 Unmanned ship-oriented navigation radar and photoelectric pod collaborative environment sensing method
CN115661702A (en) * 2022-10-13 2023-01-31 华中科技大学 Sea condition real-time estimation method and system based on smart phone

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102589516A (en) * 2012-03-01 2012-07-18 长安大学 Dynamic distance measuring system based on binocular line scan cameras
CN102914294A (en) * 2012-09-10 2013-02-06 中国南方电网有限责任公司超高压输电公司天生桥局 System and method for measuring unmanned aerial vehicle electrical line patrol on basis of images
CN104677330A (en) * 2013-11-29 2015-06-03 哈尔滨智晟天诚科技开发有限公司 Small binocular stereoscopic vision ranging system
CN104819705A (en) * 2014-04-23 2015-08-05 西安交通大学 Binocular measurement apparatus based on variable vision range target search, and use method thereof
CN104883556A (en) * 2015-05-25 2015-09-02 深圳市虚拟现实科技有限公司 Three dimensional display method based on augmented reality and augmented reality glasses

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102589516A (en) * 2012-03-01 2012-07-18 长安大学 Dynamic distance measuring system based on binocular line scan cameras
CN102914294A (en) * 2012-09-10 2013-02-06 中国南方电网有限责任公司超高压输电公司天生桥局 System and method for measuring unmanned aerial vehicle electrical line patrol on basis of images
CN104677330A (en) * 2013-11-29 2015-06-03 哈尔滨智晟天诚科技开发有限公司 Small binocular stereoscopic vision ranging system
CN104819705A (en) * 2014-04-23 2015-08-05 西安交通大学 Binocular measurement apparatus based on variable vision range target search, and use method thereof
CN104883556A (en) * 2015-05-25 2015-09-02 深圳市虚拟现实科技有限公司 Three dimensional display method based on augmented reality and augmented reality glasses

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108333591A (en) * 2018-01-18 2018-07-27 中国科学院苏州纳米技术与纳米仿生研究所 A kind of distance measuring method and its system
CN110197104A (en) * 2018-02-27 2019-09-03 杭州海康威视数字技术股份有限公司 Distance measuring method and device based on vehicle
CN109827547A (en) * 2019-03-27 2019-05-31 中国人民解放军战略支援部队航天工程大学 A kind of distributed multi-sensor extraterrestrial target synchronization association method
CN109827547B (en) * 2019-03-27 2021-05-04 中国人民解放军战略支援部队航天工程大学 Distributed multi-sensor space target synchronous correlation method
CN111080685A (en) * 2019-12-17 2020-04-28 北京工业大学 Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN111046842A (en) * 2019-12-27 2020-04-21 京东数字科技控股有限公司 Method and apparatus for changing information
CN111046842B (en) * 2019-12-27 2022-02-15 京东科技控股股份有限公司 Method and apparatus for changing information
CN113382223A (en) * 2021-06-03 2021-09-10 浙江大学 Underwater three-dimensional imaging device and method based on multi-view vision
CN113484864A (en) * 2021-07-05 2021-10-08 中国人民解放军国防科技大学 Unmanned ship-oriented navigation radar and photoelectric pod collaborative environment sensing method
CN115661702A (en) * 2022-10-13 2023-01-31 华中科技大学 Sea condition real-time estimation method and system based on smart phone

Also Published As

Publication number Publication date
CN107560592B (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN107560592A (en) A kind of precision ranging method for optronic tracker linkage target
CN104376552B (en) A kind of virtual combat method of 3D models and two dimensional image
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN110378931A (en) A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN110689562A (en) Trajectory loop detection optimization method based on generation of countermeasure network
CN109034018A (en) A kind of low latitude small drone method for barrier perception based on binocular vision
CN106529495A (en) Obstacle detection method of aircraft and device
CN108230392A (en) A kind of dysopia analyte detection false-alarm elimination method based on IMU
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
CN107657644B (en) Sparse scene flows detection method and device under a kind of mobile environment
CN105654547B (en) Three-dimensional rebuilding method
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN105258673B (en) A kind of target ranging method based on binocular synthetic aperture focusing image, device
CN110969667A (en) Multi-spectrum camera external parameter self-correction algorithm based on edge features
CN110956661A (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN111998862B (en) BNN-based dense binocular SLAM method
CN104182968A (en) Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN113643345A (en) Multi-view road intelligent identification method based on double-light fusion
CN116258817B (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
CN106295657A (en) A kind of method extracting human height's feature during video data structure
CN116071424A (en) Fruit space coordinate positioning method based on monocular vision
Savoy et al. Cloud base height estimation using high-resolution whole sky imagers
CN115359130A (en) Radar and camera combined calibration method and device, electronic equipment and storage medium
CN113160210A (en) Drainage pipeline defect detection method and device based on depth camera
CN109029380B (en) Stereo visual system and its calibration distance measuring method based on film coated type multispectral camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant