CN105894499A - Binocular-vision-based rapid detection method for three-dimensional information of space object - Google Patents
- Publication number
- CN105894499A CN105894499A CN201610182500.0A CN201610182500A CN105894499A CN 105894499 A CN105894499 A CN 105894499A CN 201610182500 A CN201610182500 A CN 201610182500A CN 105894499 A CN105894499 A CN 105894499A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- edge
- dimensional information
- binocular vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
Abstract
The invention discloses a binocular-vision-based rapid detection method for the three-dimensional information of a space object. The method comprises: (1) calibrating two cameras to obtain their intrinsic and extrinsic parameter matrices; (2) capturing images of the measured object with the two cameras simultaneously; (3) calculating a reprojection matrix from the calibration results and performing stereo rectification on the image pair; (4) filtering and smoothing the rectified images and performing edge extraction on the filtered images; (5) performing stereo matching on the image pair with an improved edge-feature-based rapid matching algorithm to obtain a set of matching point pairs; and (6) recovering the three-dimensional information of the measured object from the intrinsic and extrinsic parameters obtained by camera calibration, the reprojection matrix, and the parallax principle. The invention improves the traditional feature-point-based matching algorithm: the original accuracy is retained while the matching speed is increased, so that real-time three-dimensional information detection can be realized.
Description
Technical field
The present invention relates to the field of machine vision, and specifically to a binocular-vision-based rapid detection method for the three-dimensional information of a space object.
Background technology
Sensors such as laser, radar and sonar devices are commonly used in three-dimensional measurement at present. However, because the working principle of these sensors is based on pulse ranging, they are sensitive to external noise, are easily disturbed by electromagnetic interference so that detection errors occur, and are comparatively expensive. Compared with ultrasonic, radar and similar sensors, vision sensors are less prone to mutual interference, can rapidly acquire a large amount of image information, including even the fine features of a target, and are comparatively cheap, so the use of vision technology for three-dimensional measurement is currently a research focus.
Binocular vision is a major branch of machine vision. Because it can imitate the human eyes and perceive the three-dimensional world stereoscopically, and with the continuous deepening of binocular vision research, it has developed rapidly in recent years and is increasingly widely applied in various fields, such as robot obstacle avoidance, workpiece positioning, visual ranging and virtual reality.
Stereo matching is both the key point and the difficulty of binocular vision technology. At present, many matching algorithms can obtain dense matching point pairs and a good three-dimensional reconstruction effect, but these algorithms generally suffer from poor real-time performance, which causes the whole three-dimensional measurement to take considerable time and makes it difficult to satisfy occasions with high real-time requirements. Other matching algorithms match only a small number of feature points in pursuit of real-time performance, and can therefore hardly recover information such as the shape and size of the measured object.
Summary of the invention
In view of the above technical problems, the present invention provides a binocular-vision-based rapid detection method for the three-dimensional information of a space object, which overcomes the problems that the equipment used in current three-dimensional measurement is expensive and that real-time performance and matching accuracy cannot be satisfied at the same time. The method can be applied to fields such as three-dimensional information detection of industrial robot working environments and dimensional measurement of workpieces.
To achieve the above object, the technical scheme of the present invention is as follows:
A binocular-vision-based rapid detection method for the three-dimensional information of a space object, using a binocular vision system composed of two cameras placed on a pan-tilt head in a forward-parallel manner and a PC, comprises the following steps:
(1) calibrating the two cameras to obtain the intrinsic parameter matrix and the extrinsic parameter matrix;
(2) capturing images of the measured object with the two cameras simultaneously, and sending the image pair to the PC;
(3) calculating the reprojection matrix according to the calibration results, and performing stereo rectification on the image pair so that the images become coplanar and row-aligned;
(4) filtering and smoothing the rectified images to remove noise interference, and then performing edge extraction on the filtered image pair to obtain a left edge map and a right edge map corresponding to the left image and the right image respectively;
(5) performing stereo matching on the image pair with the improved edge-feature-based rapid matching algorithm to obtain a set of matching point pairs;
(6) recovering the three-dimensional information of the measured object according to the intrinsic and extrinsic parameters obtained by camera calibration, the reprojection matrix and the parallax principle.
In step (1), before the two cameras are calibrated, a binocular stereo imaging model needs to be established, from which the relationship between a spatial point in the world coordinate system and its position in the camera coordinate system is obtained. At present, binocular stereo imaging models are generally divided into linear models that ignore lens distortion and nonlinear models that take distortion into account.
Further, in step (3) the Bouguet algorithm is used to perform stereo rectification on the image pair. Since a perfectly aligned structure hardly exists in a real binocular system, the imaging planes of the two cameras are almost never exactly coplanar and row-aligned, and stereo rectification is therefore needed to make the image pair coplanar and row-aligned. Many algorithms can realize stereo rectification at present; the commonly used ones are the Hartley algorithm and the Bouguet algorithm. Although the Hartley algorithm can derive the stereo structure from the motion recorded by a single camera, it produces larger image distortion, so the present invention adopts the Bouguet algorithm for stereo rectification.
Further, the filtering in step (4) is two-dimensional Gaussian filtering. The purpose of the filtering in step (4) is to remove noise interference. Commonly used filtering methods include Gaussian filtering, mean filtering and median filtering; the present invention adopts two-dimensional Gaussian filtering to obtain a good denoising effect.
Further, the edge extraction algorithm in step (4) is the Canny algorithm. Edge extraction is the basis of stereo matching. Many edge extraction algorithms exist at present; considering both real-time performance and accuracy, the Canny algorithm is selected.
Further, the improved edge-feature-based rapid matching algorithm in step (5) comprises the following steps:
(51) taking the left image of the image pair as the source image of matching and the right image as the target image, and classifying the points on the edges of the left image to obtain the points to be matched;
(52) finding the minimum enclosing rectangle of the edge lines of the measured object, thereby narrowing the search range for edge points;
(53) deriving an initial disparity range from the working-distance range, and coarsely matching the points to be matched on downscaled images under the epipolar constraint according to a matching cost function, so as to further narrow the disparity range;
(54) within the narrowed disparity range, matching the points to be matched on the edges of the original images under the epipolar constraint according to a variable-window matching cost function.
Further, in step (51) the points on an edge are classified according to the distribution of the pixel values in their eight-neighborhoods, comprising the steps of: first binarizing the left edge map; if in the eight-neighborhood of an edge point only the pixels directly above and directly below have the value 1, the point is defined as a longitudinal point; if only the pixels directly to the left and directly to the right have the value 1, the point is defined as a transverse point; points in all other situations are defined as feature points.
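The classification rule above can be sketched in pure Python; the function name and the toy edge map are illustrative:

```python
# Sketch of the edge-point classification rule of step (51). The edge map is
# a binarized grid (1 = edge pixel).

def classify_edge_point(edge, r, c):
    """Classify edge point (r, c) by the 1-valued pixels in its 8-neighbourhood."""
    neighbours = {(dr, dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0) and edge[r + dr][c + dc] == 1}
    if neighbours == {(-1, 0), (1, 0)}:   # only directly above and below
        return "longitudinal"
    if neighbours == {(0, -1), (0, 1)}:   # only directly left and right
        return "transverse"
    return "feature"                      # corners, junctions, endpoints, ...

# A vertical edge segment: its interior points are longitudinal points.
edge = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(classify_edge_point(edge, 2, 1))
```

Note that the endpoints of the same segment fall into the "feature" class, which matches the later rule of keeping only the first and last points of a continuous run.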
Further, the step of finding the minimum enclosing rectangle of the edge lines of the measured object in step (52) specifically comprises: using an image pyramid and interlaced scanning to quickly obtain the topmost, bottommost, leftmost and rightmost points of the edge contour; and, considering that the image pyramid and interlaced scanning may miss some edge points, adding or subtracting a set value to or from the four obtained points to obtain the minimum enclosing rectangle, thereby narrowing the search range for edge points.
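A minimal sketch of this search-range reduction, assuming a plain binary edge map and showing only the interlaced scan (the pyramid level is omitted), with an illustrative padding value standing in for the "set value":

```python
# Sketch of step (52): find the extreme edge pixels on a coarse, interlaced
# scan of the edge map, then pad the box by a safety margin so that points
# skipped by the coarse scan are still enclosed. All names are illustrative.

def coarse_bounding_box(edge, margin=2):
    rows = len(edge)
    rs, cs = [], []
    for r in range(0, rows, 2):          # interlaced scan: every other row
        for c, v in enumerate(edge[r]):
            if v:
                rs.append(r)
                cs.append(c)
    if not rs:
        return None
    # Pad by `margin` (the "set value") to compensate for missed pixels.
    return (max(min(rs) - margin, 0), max(min(cs) - margin, 0),
            min(max(rs) + margin, rows - 1), min(max(cs) + margin, len(edge[0]) - 1))

edge = [[0] * 8 for _ in range(8)]
for r, c in [(2, 3), (3, 2), (4, 5), (5, 3)]:
    edge[r][c] = 1
print(coarse_bounding_box(edge))
```

Even though the pixels on odd rows are never visited, the padded rectangle still contains them, so the subsequent matching only needs to search inside this box.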
Further, the matching cost functions in step (53) and step (54) are the Census algorithm and a variable-window SAD algorithm, respectively. The Census algorithm, which has a faster processing speed, is selected to coarsely match the feature points on the downscaled images under the epipolar constraint, so that a further narrowed disparity range is obtained; the sum of absolute differences (SAD) is selected as the fine matching cost function, with the size of the support window varying with the type of the point to be matched.
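The two cost functions can be sketched as follows; the 3x3 Census window, the SAD support size and the toy patch are illustrative choices, not the patent's parameters:

```python
# Sketch of the two cost functions named in steps (53)/(54): a Census
# transform with Hamming-distance cost for the coarse pass, and a sum of
# absolute differences (SAD) for the fine pass.

def census(img, r, c):
    """3x3 Census signature: one bit per neighbour, 1 if darker than centre."""
    centre = img[r][c]
    return tuple(1 if img[r + dr][c + dc] < centre else 0
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0))

def census_cost(sig_a, sig_b):
    """Hamming distance between two Census signatures."""
    return sum(a != b for a, b in zip(sig_a, sig_b))

def sad_cost(left, right, r, cl, cr, half):
    """SAD over a (2*half+1)^2 support window centred on the two points."""
    return sum(abs(left[r + dr][cl + dc] - right[r + dr][cr + dc])
               for dr in range(-half, half + 1)
               for dc in range(-half, half + 1))

left = [[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]]
print(census(left, 1, 1))
```

The Census signature depends only on the ordering of intensities, which is why it is robust on downscaled images, while SAD on the full-resolution window provides the finer discrimination needed in step (54).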
Further, in step (53) all the feature points are taken as points to be matched, while for consecutive longitudinal points and transverse points only the first and last points are taken as points to be matched and the intermediate points are discarded.
Further, in step (54) the calculation window size of the matching cost function varies with the type of the point to be matched: when the point to be matched is a longitudinal point or a feature point, a smaller window is used for the matching cost calculation; when the point to be matched is a transverse point, a larger window is used, which reduces the influence of the higher mismatching probability. On the original images, the points to be matched on the edges are matched under the epipolar constraint according to the WTA (winner-takes-all) strategy.
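A toy sketch of this fine matching pass, combining the epipolar (same-row) constraint, the class-dependent window size and the WTA rule; the window sizes, disparity range and test rows are illustrative:

```python
# Sketch of step (54): for a point to be matched, scan candidates on the
# same image row (the epipolar constraint after rectification) inside the
# reduced disparity range, score each with SAD over a window whose size
# depends on the point's class, and keep the winner (WTA).

def sad(left, right, r, cl, cr, half):
    return sum(abs(left[r + dr][cl + dc] - right[r + dr][cr + dc])
               for dr in range(-half, half + 1)
               for dc in range(-half, half + 1))

def match_point(left, right, r, cl, point_class, d_min, d_max):
    # Smaller window for longitudinal/feature points, larger for transverse
    # points, whose mismatching probability is higher.
    half = 2 if point_class == "transverse" else 1
    if r - half < 0 or r + half >= len(left):
        return None                     # window would leave the image
    best = None
    for d in range(d_min, d_max + 1):
        cr = cl - d                     # candidate column in the right image
        if cr - half < 0 or cr + half >= len(right[0]):
            continue
        cost = sad(left, right, r, cl, cr, half)
        if best is None or cost < best[1]:
            best = (d, cost)            # winner takes all
    return best

left  = [[0, 0, 9, 0, 0, 0]] * 3
right = [[0, 9, 0, 0, 0, 0]] * 3
print(match_point(left, right, 1, 2, "feature", 0, 2))
```

Here the bright pixel sits one column further left in the right image, so the winning disparity is 1 with cost 0.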
Compared with the prior art, the present invention improves the traditional feature-point-based matching algorithm: while the original accuracy is retained, the matching speed is increased, so that real-time three-dimensional information detection can be realized.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware required by the present invention.
Fig. 2 is a flow chart of a specific implementation of the present invention.
Fig. 3 is a flow chart of the matching algorithm of the present invention.
In the figures: 1 - left camera; 2 - right camera; 3 - pan-tilt head; 4 - PC.
Detailed description of the invention
The object of the present invention is described in further detail below through a specific embodiment. Not all embodiments can be repeated here, but the embodiments of the present invention are not therefore limited to the following embodiment.
As shown in Fig. 2, a binocular-vision-based rapid detection method for the three-dimensional information of a space object uses a binocular vision system composed of two cameras placed on a pan-tilt head 3 in a forward-parallel manner and a PC 4. The left camera 1 and the right camera 2 are fixed on the pan-tilt head 3 in a forward-parallel manner, and the two cameras are connected to the PC 4 equipped with the required programming and processing software, together forming the binocular vision detection system (see Fig. 1). The method comprises the following steps:
S1, camera calibration: at present, binocular stereo imaging models are generally divided into linear models that ignore distortion and nonlinear models that take distortion into account. Since most cameras suffer from lens distortion, the present invention adopts the distortion model in order to obtain more accurate results, and calibrates through the following steps:
S11, establishing the binocular vision distortion model and obtaining the formula for computing the coordinate values of the measured object in the world coordinate system;
S12, using a 300 mm x 300 mm checkerboard calibration board and the Matlab calibration toolbox to calibrate the two cameras separately, obtaining the intrinsic and extrinsic parameter matrices of the two cameras, and then performing stereo calibration with the left camera coordinate system as the world coordinate system to obtain the rotation and translation matrices between the two cameras;
S13, saving the calibration results.
S2, image acquisition: image acquisition is the process by which the cameras obtain images of the measured object. After the left and right cameras are installed and connected to the PC 4, image acquisition can be carried out: the left and right cameras simultaneously photograph the measured object at the same position, obtaining an image pair, which is then transmitted to the PC 4 for further processing.
S3, stereo rectification of the images: owing to reasons such as installation errors when the two cameras are placed, it cannot be guaranteed that the two cameras are perfectly forward-parallel, so the acquired image pair cannot achieve coplanarity and row alignment. After the PC receives the image pair transmitted by the cameras, stereo rectification therefore has to be performed, with the following specific steps:
S31, obtaining the left and right rectification rotation matrices, the left and right projection matrices and the reprojection matrix Q from the calibration results by means of the Bouguet algorithm in the open-source computer vision library OpenCV;
S32, rectification-mapping the image pair according to the left and right rectification rotation matrices and the left and right projection matrices obtained in step S31, so that the image pair becomes coplanar and row-aligned.
S4, filtering, denoising and edge extraction: since various kinds of noise interfere during image acquisition, filtering has to be performed before the edges of the measured object are extracted. After the image pair has been rectified, two-dimensional Gaussian filtering is used to denoise the image pair, and then the Canny edge extraction algorithm is applied to the filtered image pair to extract the edges.
S5, stereo matching: as shown in Fig. 3, the stereo matching algorithm of this embodiment takes edge points as the matching primitives, i.e. the image pair is stereo-matched with the improved edge-feature-based rapid matching algorithm to obtain a set of matching point pairs, and the matching speed is increased through several optimization measures, with the following specific steps:
S51, taking the left image of the image pair as the source image of matching and the right image as the target image; binarizing the left edge map; if in the eight-neighborhood of an edge point only the pixels directly above and directly below have the value 1, defining the point as a longitudinal point; if only the pixels directly to the left and directly to the right have the value 1, defining the point as a transverse point; and defining points in all other situations as feature points;
S52, using an image pyramid and interlaced scanning to quickly obtain the topmost, bottommost, leftmost and rightmost points of the edge contour; considering that the image pyramid and interlaced scanning may miss some edge points, adding or subtracting a set value to or from the four obtained points to obtain the minimum enclosing rectangle of the edge lines of the measured object, thereby narrowing the search range for edge points;
S53, deriving an initial disparity range from the working-distance range, and coarsely matching the feature points of step S51 on the downscaled images under the epipolar constraint according to the Census algorithm, so as to further narrow the disparity range;
S54, after the narrowed disparity range is obtained, choosing part of the edge points as points to be matched: all the feature points of step S51 are taken as points to be matched, while for consecutive longitudinal points and transverse points only the first and last points are taken and the intermediate points are discarded. The sum of absolute differences (SAD) is selected as the matching cost function, with the size of the support window varying with the type of the point to be matched: when the point to be matched is a longitudinal point or a feature point, a smaller window is used for the matching cost calculation, while for a transverse point, whose mismatching probability is higher, a larger window is used. On the original images, the points to be matched on the edges are matched under the epipolar constraint according to the WTA (winner-takes-all) strategy;
S6, three-dimensional information extraction: the three-dimensional information of the measured object is recovered according to the intrinsic and extrinsic parameters obtained by camera calibration, the reprojection matrix and the parallax principle. Let the coordinates of the left matched point of a certain matching point pair in the left image be (x, y) and the disparity between the left and right matched points be d. With the reprojection matrix Q obtained by stereo rectification, Q · [x, y, d, 1]^T = [X, Y, Z, W]^T is obtained, and the three-dimensional coordinates of the point are (X/W, Y/W, Z/W). The three-dimensional coordinates of every point on the edges are obtained in this way, so that the three-dimensional information of the measured object can be extracted.
The above embodiment of the present invention is only an example given to clearly illustrate the present invention and is not a limitation on the embodiments of the present invention. For those of ordinary skill in the art, other changes in different forms can also be made on the basis of the above description. It is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.
Claims (10)
1. A binocular-vision-based rapid detection method for the three-dimensional information of a space object, characterized in that a binocular vision system composed of two cameras placed on a pan-tilt head in a forward-parallel manner and a PC is used, the method comprising the following steps:
(1) calibrating the two cameras using a distortion model to obtain the intrinsic parameter matrix and the extrinsic parameter matrix;
(2) capturing images of the measured object with the two cameras simultaneously, and sending the image pair to the PC;
(3) calculating the reprojection matrix according to the calibration results, and performing stereo rectification on the image pair so that the images become coplanar and row-aligned;
(4) filtering and smoothing the rectified image pair, then performing edge extraction on the filtered image pair to obtain a left edge map and a right edge map corresponding to the left image and the right image respectively;
(5) performing stereo matching on the image pair with the improved edge-feature-based rapid matching algorithm to obtain a set of matching point pairs;
(6) recovering the three-dimensional information of the measured object according to the intrinsic and extrinsic parameters obtained by camera calibration, the reprojection matrix and the parallax principle.
2. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 1, characterized in that: in step (3), the Bouguet algorithm is used to perform stereo rectification on the image pair.
3. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 1, characterized in that: the filtering in step (4) is two-dimensional Gaussian filtering.
4. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 1, characterized in that: the edge extraction algorithm in step (4) is the Canny algorithm.
5. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 1, characterized in that: the improved edge-feature-based rapid matching algorithm in step (5) comprises the following steps:
(51) taking the left image of the image pair as the source image of matching and the right image as the target image, and classifying the points on the edges of the left image to obtain the points to be matched;
(52) finding the minimum enclosing rectangle of the edge lines of the measured object, thereby narrowing the search range for edge points;
(53) deriving an initial disparity range from the working-distance range, and coarsely matching the points to be matched on downscaled images under the epipolar constraint according to a matching cost function, so as to further narrow the disparity range;
(54) within the narrowed disparity range, matching the points to be matched on the edges of the original images under the epipolar constraint according to a variable-window matching cost function.
6. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 5, characterized in that: in step (51), the points on an edge are classified according to the distribution of the pixel values in their eight-neighborhoods, comprising the steps of: first binarizing the left edge map; if in the eight-neighborhood of an edge point only the pixels directly above and directly below have the value 1, defining the point as a longitudinal point; if only the pixels directly to the left and directly to the right have the value 1, defining the point as a transverse point; and defining points in all other situations as feature points.
7. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 5, characterized in that: the step of finding the minimum enclosing rectangle of the edge lines of the measured object in step (52) specifically comprises: using an image pyramid and interlaced scanning to quickly obtain the topmost, bottommost, leftmost and rightmost points of the edge contour; and, considering that the image pyramid and interlaced scanning may miss some edge points, adding or subtracting a set value to or from the four obtained points to obtain the minimum enclosing rectangle.
8. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 5, characterized in that: the matching cost functions in step (53) and step (54) are the Census algorithm and a variable-window SAD algorithm, respectively.
9. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 6, characterized in that: in step (53), all the feature points are taken as points to be matched, while for consecutive longitudinal points and transverse points only the first and last points are taken as points to be matched and the intermediate points are discarded.
10. The binocular-vision-based rapid detection method for the three-dimensional information of a space object according to claim 6, characterized in that: in step (54), the calculation window size of the matching cost function varies with the type of the point to be matched: when the point to be matched is a longitudinal point or a feature point, a smaller window is used for the matching cost calculation; when the point to be matched is a transverse point, since the mismatching probability is higher, a larger window is used for the matching cost calculation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610182500.0A CN105894499B (en) | 2016-03-25 | 2016-03-25 | A kind of space object three-dimensional information rapid detection method based on binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105894499A true CN105894499A (en) | 2016-08-24 |
CN105894499B CN105894499B (en) | 2018-09-14 |
Family
ID=57014337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610182500.0A Expired - Fee Related CN105894499B (en) | 2016-03-25 | 2016-03-25 | A kind of space object three-dimensional information rapid detection method based on binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105894499B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651975A (en) * | 2016-12-01 | 2017-05-10 | 大连理工大学 | Census adaptive transformation algorithm based on multiple codes |
CN106683174A (en) * | 2016-12-23 | 2017-05-17 | 上海斐讯数据通信技术有限公司 | 3D reconstruction method, apparatus of binocular visual system, and binocular visual system |
CN106996748A (en) * | 2017-03-16 | 2017-08-01 | 南京工业大学 | A kind of wheel footpath measuring method based on binocular vision |
CN107248159A (en) * | 2017-08-04 | 2017-10-13 | 河海大学常州校区 | A kind of metal works defect inspection method based on binocular vision |
CN107490342A (en) * | 2017-06-30 | 2017-12-19 | 广东工业大学 | A kind of cell phone appearance detection method based on single binocular vision |
CN107588721A (en) * | 2017-08-28 | 2018-01-16 | 武汉科技大学 | The measuring method and system of a kind of more sizes of part based on binocular vision |
CN108127238A (en) * | 2017-12-29 | 2018-06-08 | 南京理工大学 | The method that non-burnishing surface autonomous classification robot increases material forming |
CN108381549A (en) * | 2018-01-26 | 2018-08-10 | 广东三三智能科技有限公司 | A kind of quick grasping means of binocular vision guided robot, device and storage medium |
CN108453739A (en) * | 2018-04-04 | 2018-08-28 | 北京航空航天大学 | Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting |
CN108520537A (en) * | 2018-03-29 | 2018-09-11 | 电子科技大学 | A kind of binocular depth acquisition methods based on photometric parallax |
CN108596963A (en) * | 2018-04-25 | 2018-09-28 | 珠海全志科技股份有限公司 | Matching, parallax extraction and the extraction of depth information method of image characteristic point |
CN109089100A (en) * | 2018-08-13 | 2018-12-25 | 西安理工大学 | A kind of synthetic method of binocular tri-dimensional video |
CN110009610A (en) * | 2019-03-27 | 2019-07-12 | 仲恺农业工程学院 | A kind of reservoir dam slope protection surface damage visible detection method and bionic device |
CN110223257A (en) * | 2019-06-11 | 2019-09-10 | 北京迈格威科技有限公司 | Obtain method, apparatus, computer equipment and the storage medium of disparity map |
CN110276110A (en) * | 2019-06-04 | 2019-09-24 | 华东师范大学 | A kind of software and hardware cooperating design method of Binocular Stereo Vision System |
CN110543859A (en) * | 2019-09-05 | 2019-12-06 | 大连海事大学 | sea cucumber autonomous recognition and grabbing method based on deep learning and binocular positioning |
CN110595309A (en) * | 2019-10-16 | 2019-12-20 | 深圳市天和时代电子设备有限公司 | Explosive-handling equipment and using method thereof |
CN110672007A (en) * | 2019-09-24 | 2020-01-10 | 佛山科学技术学院 | Workpiece surface quality detection method and system based on machine vision |
CN111145254A (en) * | 2019-12-13 | 2020-05-12 | 上海新时达机器人有限公司 | Door valve blank positioning method based on binocular vision |
CN111345023A (en) * | 2017-11-03 | 2020-06-26 | 深圳市柔宇科技有限公司 | Image jitter elimination method, device, terminal and computer readable storage medium |
CN111429571A (en) * | 2020-04-15 | 2020-07-17 | 四川大学 | Rapid stereo matching method based on spatio-temporal image information joint correlation |
CN111429532A (en) * | 2020-04-30 | 2020-07-17 | 南京大学 | Method for improving camera calibration accuracy by utilizing multi-plane calibration plate |
CN111563952A (en) * | 2020-03-30 | 2020-08-21 | 北京理工大学 | Method and system for realizing stereo matching based on phase information and spatial texture characteristics |
CN116129037A (en) * | 2022-12-13 | 2023-05-16 | 珠海视熙科技有限公司 | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof |
CN117315033A (en) * | 2023-11-29 | 2023-12-29 | 上海仙工智能科技有限公司 | Neural network-based identification positioning method and system and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982334A (en) * | 2012-11-05 | 2013-03-20 | 北京理工大学 | Sparse parallax obtaining method based on target edge features and gray scale similarity |
CN103134477A (en) * | 2013-01-31 | 2013-06-05 | 南昌航空大学 | Helicopter rotor blade motion parameter measuring method based on binocular three-dimensional vision |
- 2016-03-25: CN CN201610182500.0A patent/CN105894499B/en, not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
XIONG Jingjing: "Research on a binocular-vision-based method for detecting the cone coincidence of helicopter rotor blades", China Dissertations Full-text Database *
HU Hanping, ZHU Ming: "Fast stereo matching based on seed-point propagation", Optics and Precision Engineering *
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651975B (en) * | 2016-12-01 | 2019-08-13 | 大连理工大学 | A kind of Census adaptive transformation method based on odd encoder |
CN106651975A (en) * | 2016-12-01 | 2017-05-10 | 大连理工大学 | Census adaptive transformation algorithm based on multiple codes |
CN106683174A (en) * | 2016-12-23 | 2017-05-17 | 上海斐讯数据通信技术有限公司 | 3D reconstruction method, apparatus of binocular visual system, and binocular visual system |
CN106996748A (en) * | 2017-03-16 | 2017-08-01 | 南京工业大学 | A kind of wheel footpath measuring method based on binocular vision |
CN107490342A (en) * | 2017-06-30 | 2017-12-19 | 广东工业大学 | A kind of cell phone appearance detection method based on single binocular vision |
CN107248159A (en) * | 2017-08-04 | 2017-10-13 | 河海大学常州校区 | A kind of metal works defect inspection method based on binocular vision |
CN107588721A (en) * | 2017-08-28 | 2018-01-16 | 武汉科技大学 | The measuring method and system of a kind of more sizes of part based on binocular vision |
CN111345023B (en) * | 2017-11-03 | 2021-07-20 | Shenzhen Royole Technologies Co., Ltd. | Image jitter elimination method, device, terminal and computer readable storage medium |
CN111345023A (en) * | 2017-11-03 | 2020-06-26 | Shenzhen Royole Technologies Co., Ltd. | Image jitter elimination method, device, terminal and computer readable storage medium |
CN108127238A (en) * | 2017-12-29 | 2018-06-08 | 南京理工大学 | The method that non-burnishing surface autonomous classification robot increases material forming |
CN108381549A (en) * | 2018-01-26 | 2018-08-10 | 广东三三智能科技有限公司 | A kind of quick grasping means of binocular vision guided robot, device and storage medium |
CN108381549B (en) * | 2018-01-26 | 2021-12-14 | 广东三三智能科技有限公司 | Binocular vision guide robot rapid grabbing method and device and storage medium |
CN108520537A (en) * | 2018-03-29 | 2018-09-11 | 电子科技大学 | A kind of binocular depth acquisition methods based on photometric parallax |
CN108453739A (en) * | 2018-04-04 | 2018-08-28 | 北京航空航天大学 | Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting |
CN108596963A (en) * | 2018-04-25 | 2018-09-28 | 珠海全志科技股份有限公司 | Matching, parallax extraction and the extraction of depth information method of image characteristic point |
CN108596963B (en) * | 2018-04-25 | 2020-10-30 | 珠海全志科技股份有限公司 | Image feature point matching, parallax extraction and depth information extraction method |
CN109089100B (en) * | 2018-08-13 | 2020-10-23 | 西安理工大学 | Method for synthesizing binocular stereo video |
CN109089100A (en) * | 2018-08-13 | 2018-12-25 | 西安理工大学 | A kind of synthetic method of binocular tri-dimensional video |
CN110009610A (en) * | 2019-03-27 | 2019-07-12 | 仲恺农业工程学院 | A kind of reservoir dam slope protection surface damage visible detection method and bionic device |
CN110276110A (en) * | 2019-06-04 | 2019-09-24 | 华东师范大学 | A kind of software and hardware cooperating design method of Binocular Stereo Vision System |
CN110223257A (en) * | 2019-06-11 | 2019-09-10 | 北京迈格威科技有限公司 | Obtain method, apparatus, computer equipment and the storage medium of disparity map |
CN110543859B (en) * | 2019-09-05 | 2023-08-18 | 大连海事大学 | Sea cucumber autonomous identification and grabbing method based on deep learning and binocular positioning |
CN110543859A (en) * | 2019-09-05 | 2019-12-06 | 大连海事大学 | sea cucumber autonomous recognition and grabbing method based on deep learning and binocular positioning |
CN110672007A (en) * | 2019-09-24 | 2020-01-10 | 佛山科学技术学院 | Workpiece surface quality detection method and system based on machine vision |
CN110595309B (en) * | 2019-10-16 | 2024-03-22 | 深圳市天和时代电子设备有限公司 | Explosion venting equipment and use method thereof |
CN110595309A (en) * | 2019-10-16 | 2019-12-20 | 深圳市天和时代电子设备有限公司 | Explosive-handling equipment and using method thereof |
CN111145254B (en) * | 2019-12-13 | 2023-08-11 | 上海新时达机器人有限公司 | Door valve blank positioning method based on binocular vision |
CN111145254A (en) * | 2019-12-13 | 2020-05-12 | 上海新时达机器人有限公司 | Door valve blank positioning method based on binocular vision |
CN111563952A (en) * | 2020-03-30 | 2020-08-21 | 北京理工大学 | Method and system for realizing stereo matching based on phase information and spatial texture characteristics |
CN111563952B (en) * | 2020-03-30 | 2023-03-14 | 北京理工大学 | Method and system for realizing stereo matching based on phase information and spatial texture characteristics |
CN111429571A (en) * | 2020-04-15 | 2020-07-17 | 四川大学 | Rapid stereo matching method based on spatio-temporal image information joint correlation |
CN111429532A (en) * | 2020-04-30 | 2020-07-17 | 南京大学 | Method for improving camera calibration accuracy by utilizing multi-plane calibration plate |
CN116129037A (en) * | 2022-12-13 | 2023-05-16 | 珠海视熙科技有限公司 | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof |
CN116129037B (en) * | 2022-12-13 | 2023-10-31 | 珠海视熙科技有限公司 | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof |
CN117315033A (en) * | 2023-11-29 | 2023-12-29 | 上海仙工智能科技有限公司 | Neural network-based identification positioning method and system and storage medium |
CN117315033B (en) * | 2023-11-29 | 2024-03-19 | 上海仙工智能科技有限公司 | Neural network-based identification positioning method and system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105894499B (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105894499A (en) | Binocular-vision-based rapid detection method for three-dimensional information of space object | |
CN108416791B (en) | Binocular vision-based parallel mechanism moving platform pose monitoring and tracking method | |
CN110657785B (en) | Efficient scene depth information acquisition method and system | |
EP2728374B1 (en) | Invention relating to the hand-eye calibration of cameras, in particular depth image cameras | |
CN109118544B (en) | Synthetic aperture imaging method based on perspective transformation | |
CN109712232B (en) | Object surface contour three-dimensional imaging method based on light field | |
JP7218435B2 (en) | CALIBRATION DEVICE, CALIBRATION CHART AND CALIBRATION METHOD | |
CN104268876A (en) | Camera calibration method based on partitioning | |
WO2015190616A1 (en) | Image sensor for depth estimation | |
EP2886043A1 (en) | Method for continuing recordings to detect three-dimensional geometries of objects | |
Cvišić et al. | Recalibrating the KITTI dataset camera setup for improved odometry accuracy | |
CN107084680A (en) | A kind of target depth measuring method based on machine monocular vision | |
CN103712604A (en) | Method and system for optically positioning multi-target three-dimensional space | |
CN110458952B (en) | Three-dimensional reconstruction method and device based on trinocular vision | |
CN110619660A (en) | Object positioning method and device, computer readable storage medium and robot | |
CN111798507A (en) | Power transmission line safety distance measuring method, computer equipment and storage medium | |
CN103824298A (en) | Intelligent body visual and three-dimensional positioning method based on double cameras and intelligent body visual and three-dimensional positioning device based on double cameras | |
CN112991420A (en) | Stereo matching feature extraction and post-processing method for disparity map | |
CA3233222A1 (en) | Method, apparatus and device for photogrammetry, and storage medium | |
CN115187676A (en) | High-precision line laser three-dimensional reconstruction calibration method | |
CN111862193A (en) | Binocular vision positioning method and device for electric welding spots based on shape descriptors | |
CN114511608A (en) | Method, device, terminal, imaging system and medium for acquiring depth image | |
CN111724432B (en) | Object three-dimensional detection method and device | |
CN110992463B (en) | Three-dimensional reconstruction method and system for sag of transmission conductor based on three-eye vision | |
JP7300895B2 (en) | Image processing device, image processing method, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20180914 |