CN101251379A - Real time binocular vision guidance method facing to underwater carrying vehicle - Google Patents


Info

Publication number
CN101251379A
CN101251379A, CNA2008100640106A, CN200810064010A
Authority
CN
China
Prior art keywords
texture
pyramid
binocular vision
pixel
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100640106A
Other languages
Chinese (zh)
Other versions
CN100554877C (en)
Inventor
施小成
王晓娟
边信黔
唐照东
刘和祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CNB2008100640106A priority Critical patent/CN100554877C/en
Publication of CN101251379A publication Critical patent/CN101251379A/en
Application granted granted Critical
Publication of CN100554877C publication Critical patent/CN100554877C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a real-time binocular vision guidance method for underwater vehicles. Addressing the characteristics of the marine environment in which an underwater vehicle operates, the invention proposes a pyramid normalized cross-correlation binocular vision algorithm based on texture control, achieving application-level real-time performance with accuracy stable at the centimeter level. For the final representation of environmental information, the method proposes a virtual sonar model, with which the 2.5-dimensional information is expressed as an obstacle (or target) matrix containing the depth and bearing information of the obstacle (or target).

Description

A real-time binocular vision guidance method for underwater vehicles
(1) Technical field
The present invention relates to a method that can provide the bearing and range information of a target or obstacle to an underwater vehicle in real time.
(2) Background
With the development of underwater vehicle technologies, sonar has become the most widely used sensing technology for underwater ranging and obstacle detection. Sonar, however, has limitations at close range, including low accuracy and blind zones, which visual sensing can remedy thanks to its superior close-range performance: it can provide an underwater vehicle with high-resolution, high-accuracy depth information, particularly at short range. In recent years, the application of vision technology to tasks such as obstacle avoidance, target recognition, docking, cable maintenance and seabed terrain modeling has become a research focus. In 1999, Negahdaripour et al. at the University of Miami in the USA studied vision-based localization and navigation systems, using optical flow fields and natural seabed scenes to estimate the motion of an underwater robot, using visual servoing to make the robot hover, and using binocular vision, or combined acoustic and optical vision, for subsea environment modeling. The Starbug vehicle developed by Australia's CSIRO ICT Centre used binocular vision for seabed terrain reconstruction and AUV motion estimation, and completed sea trials in the South Pacific in 2006. Domestic research in this area started later: in 2000, Shanghai Jiao Tong University used two monocular cameras to identify and locate submarine pipelines and fault points, completing tank tests; in 2006, the Shenyang Institute of Automation of the Chinese Academy of Sciences studied monocular-vision hover positioning for underwater robots and carried out pool experiments. In general, these real systems are controlled either in the closed-loop "visual servoing" mode or in the open-loop "look first, then move" mode. The open-loop mode is simple to implement, but the accuracy of the system's actions depends directly on the accuracy of the visual sensing system and the execution accuracy of the vehicle itself, which requires a suitably accurate vision sensor while still guaranteeing real-time operation. In visual servo systems, visual feedback improves the execution accuracy of the overall system to some extent, but the complexity of the computation places even higher demands on real-time performance. At present, real systems mostly use dedicated image-processing chips to raise the processing speed of the binocular system. Targeting the environmental information an underwater vehicle needs in guidance tasks such as obstacle avoidance and target recognition and tracking, the present invention proposes a real-time binocular vision guidance system on the basis of a common binocular vision hardware configuration: a pyramid normalized cross-correlation method based on texture control is used to obtain the depth map of the target, and a virtual sonar model is used to represent it compactly, thereby providing the vehicle with real-time, effective bearing and range information.
Binocular vision is the most important ranging technique among passive computer ranging methods and has long been a research focus in the field of computer vision. Its basic principle is to observe the same scene from two viewpoints, obtain the perceived images under the different viewpoints, and recover the three-dimensional information of the scene by computing the disparity of conjugate image points. Stereo matching is the key to binocular vision: given a well-rectified stereo image pair, the stereo matching process determines the performance of the whole binocular system.
Stereo matching methods fall into three broad classes: feature-based matching, phase-based matching, and area-based (correlation) matching. Feature-based methods selectively match features that represent intrinsic properties of the scene, emphasizing the structural information of the scene to resolve matching ambiguity. Ohta et al. proposed a dynamic programming algorithm using edge lines as matching features; it matches many types of scenes well, but its heavy computation and complex implementation make practical application difficult. Phase-based matching has emerged over roughly the last twenty years, proposed by Kuglin, Hines and other scholars. It suppresses high-frequency image noise well, resists geometric distortion, and can achieve sub-pixel disparity accuracy, but suffers from problems such as phase singularities and phase wrapping. Area-based methods characterize a pixel by the gray-level distribution of an image window, searching the image pair for pixels whose gray-level distributions reach a similarity threshold as match points. They yield dense disparity maps and adapt well to linear brightness and contrast changes between the images, so they are suitable for underwater use. However, they depend heavily on texture: where a region has low texture, its matching error increases and causes ranging errors.
(3) Summary of the invention
The object of the present invention is to provide a real-time binocular vision guidance method for underwater vehicles that improves matching efficiency and the real-time performance of the system while guaranteeing usable accuracy.
The object of the present invention is achieved as follows:
The image pair acquired by the binocular cameras is first Gaussian-filtered and rectified, and depth is then recovered by stereo matching. The stereo matching uses a pyramid normalized cross-correlation stereo matching method based on texture control.
The texture-controlled pyramid normalized cross-correlation stereo matching method comprises:
(1) According to the characteristics of the specific task scene, assign or initialize the pyramid level count, pyramid scaling factor, texture threshold, NCC threshold, match window size, disparity range and other parameters;
(2) Generate a k-level image pyramid from the original bottom-level image;
(3) Starting from the top-level image, perform the texture threshold test; regions below the texture threshold are not stereo-matched;
(4) For regions whose texture exceeds the threshold, build a match lookup matrix and perform bidirectional matching within the disparity range to obtain the optimal match points;
(5) Repeat (3) and (4) from the top of the pyramid down to the bottom to obtain the disparity map;
(6) According to the formula Depth = f·b/d (f is the focal length, b the baseline length, d the disparity), recover the depth map from the disparity map, i.e. obtain the range.
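The depth recovery in step (6) is a single division per pixel. A minimal sketch, assuming the focal length is given in pixels and the baseline in the desired depth unit (names and the None-for-no-match convention are illustrative, not the patent's):

```python
def depth_from_disparity(disparity, f, b):
    """Convert a disparity map to a depth map via Depth = f*b/d.

    disparity: 2-D list of disparities in pixels (0 means no valid match).
    f: focal length in pixels; b: baseline in the desired depth unit.
    Pixels with zero disparity get depth None.
    """
    return [[f * b / d if d != 0 else None for d in row] for row in disparity]

# Example: f = 800 px, b = 0.12 m, disparity 16 px gives a depth of 6 m.
dmap = depth_from_disparity([[16, 0], [32, 8]], f=800.0, b=0.12)
```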
The principal features of the present invention are embodied as follows:
1. Binocular vision extracts the range information of the environment by computing a disparity map, and the disparity map is obtained by stereo matching of the stereo image pair. The stereo matching process determines the accuracy and real-time performance of the whole system. The present invention uses a stereo matching method based on texture-controlled pyramid normalized cross-correlation (hereinafter NCC), specifically as follows:
Fast NCC method
Assume the image pair has been epipolar-rectified, i.e. conjugate points lie on the same scan line. The NCC method computes the normalized cross-correlation value over match windows on the left and right images, and takes points reaching a certain threshold as candidate match points. The NCC matching schematic is shown in Fig. 1.
NCC(x,y,d) = \frac{\sum_{i=-n}^{n}\sum_{j=-m}^{m}\left[I_1(x+i,\,y+j)-\overline{I_1(x,y)}\right]\left[I_2(x+i,\,y+j+d)-\overline{I_2(x,y+d)}\right]}{(2n+1)(2m+1)\,\delta(I_1)\,\delta(I_2)}    (1)
where:
\overline{I_k(x,y)} = \frac{1}{(2n+1)(2m+1)}\sum_{i=-n}^{n}\sum_{j=-m}^{m} I_k(x+i,\,y+j)    (2)
\delta(I_k) = \sqrt{\frac{1}{(2n+1)(2m+1)}\sum_{i=-n}^{n}\sum_{j=-m}^{m} I_k^2(x+i,\,y+j) - \overline{I_k(x,y)}^2}    (3)
Here \overline{I_k(x,y)} is the mean gray level over the (2n+1)×(2m+1) neighborhood of pixel (x,y) in image k, and δ(I_k) is the standard deviation of the pixel gray levels over that neighborhood. For a given point in image 1, the points in image 2 whose NCC value exceeds the set threshold form the candidate match set.
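As a sketch of equations (1)–(3), the following hypothetical helper computes the NCC value between two windows in pure Python; variable names are illustrative, not the patent's:

```python
from math import sqrt

def ncc(img1, img2, x, y, d, n, m):
    """Normalized cross-correlation of the (2n+1)x(2m+1) windows centered
    at (x, y) in img1 and at (x, y+d) in img2 (rows indexed by x, columns
    by y, matching the scan-line convention). Returns 0.0 when either
    window has zero variance (untextured)."""
    N = (2 * n + 1) * (2 * m + 1)
    w1 = [img1[x + i][y + j] for i in range(-n, n + 1) for j in range(-m, m + 1)]
    w2 = [img2[x + i][y + j + d] for i in range(-n, n + 1) for j in range(-m, m + 1)]
    m1, m2 = sum(w1) / N, sum(w2) / N
    s1 = sqrt(sum(v * v for v in w1) / N - m1 * m1)  # delta(I1), eq. (3)
    s2 = sqrt(sum(v * v for v in w2) / N - m2 * m2)  # delta(I2)
    if s1 == 0 or s2 == 0:
        return 0.0
    cov = sum((a - m1) * (b - m2) for a, b in zip(w1, w2)) / N
    return cov / (s1 * s2)
```

In the candidate search, this value would be compared against the NCC threshold to form the candidate match set.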
On the basis of the NCC algorithm, an improved fast NCC algorithm is used, mainly optimizing the computation of \overline{I_k(x,y)} and δ(I_k):
\overline{I_k(x,y)} = \frac{1}{(2n+1)(2m+1)}\sum_{i=-n}^{n}\sum_{j=-m}^{m} I_k(x+i,\,y+j) = \frac{S_1(x,y)}{(2n+1)(2m+1)}    (4)
\delta(I_k) = \sqrt{\frac{\sum_{i=-n}^{n}\sum_{j=-m}^{m} I_k^2(x+i,\,y+j)}{(2n+1)(2m+1)} - \overline{I_k(x,y)}^2} = \frac{1}{(2n+1)(2m+1)}\sqrt{(2n+1)(2m+1)\,S_2(x,y) - S_1^2(x,y)}    (5)
S_1(x,y) = S_1(x,y-1) + DIF_1(x-1,y) + I_k(x-n,\,y+m+1) + I_k(x+n,\,y+m+1) - I_k(x-n,\,y-m) - I_k(x+n,\,y-m)    (6)
S_2(x,y) = S_2(x,y-1) + DIF_2(x-1,y) + I_k^2(x-n,\,y+m+1) + I_k^2(x+n,\,y+m+1) - I_k^2(x-n,\,y-m) - I_k^2(x+n,\,y-m)    (7)
DIF_1(x,y) = \sum_{i=-n}^{n}\left(I_k(x+i,\,y+m+1) - I_k(x+i,\,y-m)\right)    (8)
DIF_2(x,y) = \sum_{i=-n}^{n}\left(I_k^2(x+i,\,y+m+1) - I_k^2(x+i,\,y-m)\right)    (9)
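The point of equations (4)–(9) is that the window sums S1 (of gray values) and S2 (of squared gray values) are updated incrementally as the window slides, instead of being recomputed from scratch. A minimal sketch assuming that idea: the patent's exact recurrence indexing in (6)–(9) is followed only in spirit here, using the equivalent enter-column/leave-column form:

```python
def window_sums(img, x, n, m):
    """Yield (y, S1, S2) for every valid window center on row band x.

    S1 = sum of pixels, S2 = sum of squared pixels over the
    (2n+1) x (2m+1) window; both are updated incrementally per column
    difference as the window slides along y."""
    width = len(img[0])
    # Full sums for the first valid center, y = m.
    s1 = sum(img[x + i][j] for i in range(-n, n + 1) for j in range(2 * m + 1))
    s2 = sum(img[x + i][j] ** 2 for i in range(-n, n + 1) for j in range(2 * m + 1))
    yield m, s1, s2
    for y in range(m + 1, width - m):
        # DIF terms: add the entering column y+m, drop the leaving column y-m-1.
        for i in range(-n, n + 1):
            enter, leave = img[x + i][y + m], img[x + i][y - m - 1]
            s1 += enter - leave
            s2 += enter ** 2 - leave ** 2
        yield y, s1, s2
```

Each slide costs O(2n+1) additions instead of O((2n+1)(2m+1)), which is where the "fast" in fast NCC comes from.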
The image pyramid algorithm
The pyramid algorithm belongs to the multiresolution category of wavelet theory and is used very widely in image processing. An image pyramid is a set of representations of one image. Typically each layer is half the width and height of the layer below; building each new layer on the previous one forms an image pyramid. The bottom image contains the most detail, the top image is the smoothest, and the image pyramid as a whole is a series of multiresolution images.
Constructing the pyramid requires choosing the number of levels k and the level-to-level scaling factor r: each layer of the pyramid is 1/r of the width and height of the layer below, and the value of a pixel in a higher layer can be obtained by averaging the corresponding r×r region of the lower layer, or by Gaussian or Laplacian filtering. The present invention selects r = 2, i.e. the pixel count of each pyramid level is reduced to 1/4 of the previous level, and uses mean filtering to obtain each higher-layer pixel from the lower-layer pixels. The schematic is shown in Fig. 2.
I_{L+1}(x,y) = \frac{1}{4}\left[I_L(2x-1,\,2y-1) + I_L(2x-1,\,2y) + I_L(2x,\,2y-1) + I_L(2x,\,2y)\right]    (10)
where I_{L+1}(x,y) and I_L(x,y) are the pixel gray values of two adjacent layers. Clearly, matching on the low-resolution image saves considerable matching time. The coarse-to-fine multiresolution matching strategy is therefore as follows:
  • Generate the image pyramid pair from the original image pair;
  • Begin matching from the top (coarsest) image;
  • Use the matching result of the higher level to guide matching at the next lower level, and repeat down to the bottom level.
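Equation (10) with r = 2 is plain 2×2 mean pooling. A minimal sketch of pyramid construction, assuming even image dimensions at every level (function names are illustrative):

```python
def pyramid_level_up(img):
    """Build the next (coarser) pyramid level by averaging each 2x2 block
    of the current level, per eq. (10); height and width assumed even."""
    return [
        [(img[2 * x][2 * y] + img[2 * x][2 * y + 1]
          + img[2 * x + 1][2 * y] + img[2 * x + 1][2 * y + 1]) / 4
         for y in range(len(img[0]) // 2)]
        for x in range(len(img) // 2)
    ]

def build_pyramid(img, k):
    """Return [level_1, ..., level_k], level_1 being the original image."""
    levels = [img]
    for _ in range(k - 1):
        levels.append(pyramid_level_up(levels[-1]))
    return levels
```

Matching then starts on `levels[-1]` (the coarsest image) and descends toward `levels[0]`.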
Texture control
Compared with land environments, the seawater environment is much less complex: underwater images often contain large single-color background areas, and the texture of a target or obstacle differs greatly from that of the background. Exploiting these characteristics of underwater imaging, a texture test is performed before the NCC computation. Texture is a very general phenomenon with many different sources: images composed of many small objects are considered texture, and regular patterns formed by many small elements on an object's surface can also be regarded as texture; whether a visual effect counts as texture is determined by the scale at which it is observed. With the NCC window size fixed, texture can be represented by the mean and standard deviation within the image window.
The present invention adopts the standard-deviation representation, because the normalized cross-correlation method must compute this standard deviation anyway when computing the NCC value. The texture measure T(I) at pixel I(x,y) can then be expressed as:
T(I) = \delta(I) = \sqrt{\frac{1}{(2n+1)(2m+1)}\sum_{i=-n}^{n}\sum_{j=-m}^{m} I^2(x+i,\,y+j) - \overline{I(x,y)}^2}    (11)
where \overline{I(x,y)} is the mean gray level of the pixels in the match window. When T(I) is below a given threshold, the texture of the window centered at this pixel is too low, so the point is discarded and not matched.
Texture control thus applies a threshold test to the texture: only pixels reaching a certain texture threshold are matched and have their disparity computed. Because the NCC matching method depends on texture, texture control reduces mismatches and helps concentrate limited computational resources on the pixels likely to yield higher matching accuracy, thereby improving matching efficiency. Moreover, when only a particular target in the image is of interest, setting a suitable texture threshold can greatly speed up extraction of the target's range.
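A sketch of the texture test: the window standard deviation of eq. (11) serves as T(I), and pixels below the threshold are skipped before any NCC work is done. Names are illustrative, not the patent's:

```python
from math import sqrt

def texture_measure(img, x, y, n, m):
    """T(I): standard deviation of the (2n+1)x(2m+1) window at (x, y),
    per eq. (11); max(..., 0.0) guards against tiny negative rounding."""
    N = (2 * n + 1) * (2 * m + 1)
    vals = [img[x + i][y + j] for i in range(-n, n + 1) for j in range(-m, m + 1)]
    mean = sum(vals) / N
    return sqrt(max(sum(v * v for v in vals) / N - mean * mean, 0.0))

def matchable(img, x, y, n, m, texture_thresh):
    """Texture control: only pixels whose window texture reaches the
    threshold are passed on to NCC matching."""
    return texture_measure(img, x, y, n, m) >= texture_thresh
```

A flat single-color background region yields T(I) = 0 and is rejected immediately, which is exactly the large-area underwater background case the section describes.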
Disparity control and bidirectional matching
When the vehicle performs a particular task, some assumptions can be made about the environment, so the disparity of the stereo image pairs captured by the binocular system can generally be confined to a definite range (d_min, d_max). Then for pixel I_1(x,y) of image 1, the search for a match point in image 2 need not traverse all pixels on the same scan line; candidate match points are sought only in the following set, reducing matching time:
{I_2(x, y+d_min), I_2(x, y+d_min+1), …, I_2(x, y+d_max−1), I_2(x, y+d_max)}
In addition, bidirectional matching is adopted to guarantee usable accuracy. Bidirectional matching, also called the consistency check or left-right check, means that after completing the matching from image 1 to image 2, the reverse matching from image 2 to image 1 is performed to verify the correctness of the result; a match is considered valid if and only if the forward and reverse matches agree. Using a match lookup matrix as the data structure speeds up bidirectional matching: by (12), the data for reverse matching can be read directly from the forward matching results. If the width of the left and right images is N and the height is M, then the match lookup matrix (13) is built for the i-th scan line. Forward matching proceeds along the rows, and reverse matching along the columns; M such lookup matrices are needed in total.
NCC(I_1(i,j),\,I_2(i,k)) = NCC(I_2(i,k),\,I_1(i,j))    (12)
\begin{bmatrix}
NCC(I_1(i,1),I_2(i,1)) & NCC(I_1(i,1),I_2(i,2)) & \cdots & NCC(I_1(i,1),I_2(i,N)) \\
NCC(I_1(i,2),I_2(i,1)) & NCC(I_1(i,2),I_2(i,2)) & \cdots & NCC(I_1(i,2),I_2(i,N)) \\
\vdots & \vdots & \ddots & \vdots \\
NCC(I_1(i,N),I_2(i,1)) & NCC(I_1(i,N),I_2(i,2)) & \cdots & NCC(I_1(i,N),I_2(i,N))
\end{bmatrix}    (13)
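The left-right check over the lookup matrix can be sketched as follows: one N×N NCC matrix per scan line is filled once, forward matches are read along rows and reverse matches along columns (using the symmetry (12)), and a disparity survives only when both directions agree. `ncc_row` is a hypothetical precomputed matrix; the 0-based indexing and dictionary output are conveniences of this sketch:

```python
def consistent_matches(ncc_row, d_min, d_max, threshold):
    """Left-right consistency check on one scan line.

    ncc_row[j][k] holds NCC(I1(i, j), I2(i, k)). Returns
    {left_index: disparity} for matches that win in both directions
    and exceed the NCC threshold."""
    N = len(ncc_row)

    def best(scores):  # winning candidate index, or None below threshold
        idx = max(range(len(scores)), key=lambda t: scores[t][1], default=None)
        if idx is None or scores[idx][1] < threshold:
            return None
        return scores[idx][0]

    matches = {}
    for j in range(N):
        fwd = [(k, ncc_row[j][k]) for k in range(N)
               if d_min <= k - j <= d_max]           # forward: read row j
        k = best(fwd)
        if k is None:
            continue
        rev = [(jj, ncc_row[jj][k]) for jj in range(N)
               if d_min <= k - jj <= d_max]          # reverse: read column k
        if best(rev) == j:                           # both directions agree
            matches[j] = k - j
    return matches
```

Restricting both candidate lists to (d_min, d_max) is exactly the disparity control described above.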
In summary, the block diagram of texture-controlled pyramid NCC matching is shown in Fig. 3.
2 Virtual sonar model
Industrial CCD digital cameras today generally have high resolution, so a high-resolution depth map is obtained. Such a depth map contains a mass of data, which hinders the vehicle from rapidly extracting useful information and taking the corresponding action. On the other hand, underwater vehicle guidance and control tasks (such as obstacle avoidance and target tracking) often do not need very high angular resolution (a typical sonar's angular resolution is 1.5°). To let the vehicle use the information in the depth map more effectively, the present invention proposes a virtual sonar model that compactly represents the target depth information finally obtained by the vision sensor: the information is expressed as depth values over a series of uniformly spaced bearings, see Fig. 4. The method is named for the similarity of its data structure to that of sonar.
An implementation of the present system serves as an example. Digital CCD cameras with a resolution of 1280×1024 are used, with a CCD pixel size of 4.65 µm, a horizontal field of view of 47°, a vertical field of view of about 35°, and a focal length of 8 mm. In the horizontal plane, 41 rays are drawn from the optical center, dividing the camera's horizontal field of view into 42 equal parts with an angular resolution of about 1°. These 41 rays form 41 intersection points with the u_1 axis of the imaging-plane image coordinate system, dividing the u_1 axis into 42 segments. For ease of iterative computation, each segment can be approximated as containing 30 pixels. As shown in Fig. 4, the image point of obstacle point P falls in the 39th segment, indicating that P lies in the 39° direction ahead of the camera. Similarly, the camera's vertical field of view is divided into 34 layers, again with a vertical angular resolution of 1°. The obstacle bearing and depth information extracted from the stereo image can thus be represented by a 34×42 matrix, see Fig. 5. Note that, for convenience, the imaging plane was moved in front of the optical center above, which makes the target bearing exactly opposite to the actual situation; this must be corrected by applying a central symmetry transformation to the matrix about its center. For the depth value of each grid cell in Fig. 5, the minimum depth is used, i.e. the minimum depth over all pixels within the cell is taken as the cell's depth. This may cause the vehicle to take avoidance measures before it actually enters a danger zone, adding some redundancy to the avoidance maneuver, but this redundancy helps guarantee the vehicle's safety. For other tasks, such as target recognition, the mean value can be adopted instead.
The resolution of this model can be adjusted to the specific task. The more parts the horizontal and vertical directions are divided into, the higher the bearing resolution; in the limiting case, when the number of divisions approaches the CCD pixel count, the resolution is highest and the depth reaches the accuracy of binocular depth measurement.
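The virtual sonar model reduces, in essence, to binning the dense depth map into an angular grid and keeping the minimum depth per cell, followed by the central-symmetry correction described above. A minimal sketch with equal-sized bins (the patent's bins come from equal-angle rays, so equal pixel widths are an approximation of this sketch):

```python
def virtual_sonar(depth_map, rows=34, cols=42):
    """Reduce a dense depth map to an obstacle matrix of minimum depths
    over a rows x cols bearing grid, then apply the central symmetry
    transform that corrects the bearing inversion. None marks cells
    with no valid depth."""
    h, w = len(depth_map), len(depth_map[0])
    grid = [[None] * cols for _ in range(rows)]
    for u in range(h):
        for v in range(w):
            d = depth_map[u][v]
            if d is None:
                continue
            r = min(u * rows // h, rows - 1)   # vertical bearing bin
            c = min(v * cols // w, cols - 1)   # horizontal bearing bin
            if grid[r][c] is None or d < grid[r][c]:
                grid[r][c] = d                 # keep the minimum depth
    # Central symmetry about the matrix center: reverse rows and columns.
    return [row[::-1] for row in grid[::-1]]
```

With the section's numbers (a 1280×1024 depth map, rows=34, cols=42), each cell covers roughly 30×38 pixels and about 1°×1° of bearing.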
Addressing the characteristics of the marine environment in which underwater vehicles move, the present invention proposes a pyramid normalized cross-correlation binocular vision algorithm based on texture control, achieving application-level real-time performance with accuracy stable at the centimeter level. For the final representation of environmental information, a virtual sonar model is proposed, with which the 2.5-dimensional environmental information is expressed as an obstacle (or target) matrix based on the virtual sonar model; this matrix contains the depth and bearing information of the obstacle (or target).
(4) description of drawings
Fig. 1 is the NCC matching schematic;
Fig. 2 is the multi-level pyramid schematic;
Fig. 3 is the block diagram of texture-controlled pyramid NCC matching;
Fig. 4 is the virtual sonar model schematic;
Fig. 5 shows the generation of the obstacle matrix;
Fig. 6(a) is the right image of the tsukuba image pair, and Fig. 6(b) is its ground-truth disparity map;
Fig. 7(a) is the disparity map of the tsukuba image pair, and Fig. 7(b) is its NCC map;
Fig. 8(a) and Fig. 8(b) are the rectified image pair of Example 2: Fig. 8(a) is the left image, Fig. 8(b) the right image;
Fig. 9(a) is the disparity map of the image pair shown in Fig. 8(a) and Fig. 8(b), and Fig. 9(b) is the NCC map;
Fig. 10 shows the AUV spatial obstacle-avoidance simulation test;
Fig. 11 shows the AUV obstacle-avoidance curve.
(5) Embodiment
The block diagram for recovering image depth with the texture-controlled pyramid NCC method is shown in Fig. 3. Following the block diagram, the concrete implementation steps can be expressed as:
(1) Set the pyramid level count k, pyramid scaling factor r, texture threshold TextureThresh, NCC threshold, image size m×n, and match window size parameters WindowHeight and WindowWidth; initialize the disparity range upper limit d_max and lower limit d_min.
(2) According to the scaling factor r, generate two k-level image pyramids from the two original bottom-level images (the left image is the reference image, the right image the image to be registered). Start from L = k, p = 1.
(3) Compute the texture measures Texture_1s and Texture_2t (s, t = 1, 2, …, n) of the n pixels of row p at layer L of the left and right pyramids. For left pixels with Texture_1s ≥ TextureThresh and right pixels with Texture_2t ≥ TextureThresh, compute the match lookup matrix.
(4) Within the disparity range (d_min, d_max), perform bidirectional matching on the pixels of row p to obtain the optimal matching pixel pairs.
(5) If p < m, set p = p + 1 and return to (3); otherwise set L = L − 1, p = 1, and update the disparity range to (r × d_min, r × d_max). If L > 0, return to (3); otherwise exit the loop and go to (6).
(6) Compute the disparity map of the entire image from the optimal matches.
(7) According to the formula Range = f·b/d (f is the focal length, b the baseline length, d the disparity), recover the depth map from the disparity map, i.e. obtain the range.
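One detail of step (5) worth making explicit: each time the loop descends one pyramid level, disparities scale by the factor r, so the search range is multiplied accordingly. A minimal sketch of this coarse-to-fine range schedule, assuming the initial (d_min, d_max) applies at the top level (names are hypothetical):

```python
def disparity_schedule(d_min, d_max, k, r=2):
    """Disparity search range at each pyramid level, top (level k) first.

    The caller supplies the range valid at the top level; descending one
    level multiplies both bounds by r, per step (5)."""
    ranges = []
    lo, hi = d_min, d_max
    for level in range(k, 0, -1):
        ranges.append((level, lo, hi))
        lo, hi = lo * r, hi * r
    return ranges
```

This is why matching at the coarse levels is cheap: a narrow range at the top implicitly covers an r^(k-1)-times wider range at the bottom level.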
The present invention is described in more detail below with reference to the accompanying drawings:
Example 1:
The widely used synthetic Tsukuba stereo image pair is used for testing. The Tsukuba stereo pair is already rectified, with a size of 384×288; its ground-truth disparity map is shown in Fig. 6(b). The test uses pyramid level count k = 4, scaling factor r = 2, a 15×15 match window, an NCC threshold of 0.5, a disparity range of (0, 40) and a texture threshold of 0.1. On a Pentium 4 2.40 GHz industrial computer, the stereo matching process takes 164.5 ms on average to compute the disparity. The NCC value can be used to measure matching accuracy. The resulting disparity map and NCC map are shown in Fig. 7.
Example 2:
The test is performed in a simulated autonomous underwater vehicle (hereinafter AUV) operating environment. The resolution of the two industrial digital CCD cameras is 1280×1024. On the basis of accurate calibration, the captured images are first rectified and then processed with the above method, with pyramid level count k = 4, scaling factor r = 2, a 21×21 match window, an NCC threshold of 0.85, a disparity range of (-230, -200) and a texture threshold of 15. On the same industrial computer, the program obtains the disparity map in only 258 ms on average, and the resulting depth map accuracy can reach the millimeter level. The rectified image pair is shown in Fig. 8, and the disparity map and NCC map in Fig. 9. Fig. 9 shows that the texture-controlled pyramid NCC algorithm effectively extracts the obstacle depth from the simulated marine environment, and the 258 ms running time demonstrates that the method has practical value for AUV visual guidance with high real-time requirements.
Example 3:
Using this binocular vision guidance system, an AUV spatial obstacle-avoidance simulation test was carried out in the laboratory. The test runs on a four-degree-of-freedom AUV motion simulation platform driven by four high-precision stepper motors, which can well simulate the surge, sway, heave and yaw of an AUV in a marine environment. The whole obstacle-avoidance simulation system consists of three parts, the binocular guidance system, the obstacle-avoidance control system and the actuator (the AUV motion simulation platform), plus peripherals.
As shown in Fig. 10, the binocular cameras are fixed at the front of the AUV motion model, and a spherical obstacle and a square obstacle with a flat surface are placed on its path. The spherical obstacle lies slightly to the left of the AUV's initial heading, and the square obstacle slightly to the right and somewhat farther away. When the spherical obstacle is less than 600 mm from the AUV, the AUV detects it and executes a right-turn command. The square obstacle then enters the AUV's field of view, but remains outside the safety distance, so the AUV continues straight along the heading set by the first turn. When the obstacle comes within the safety distance, i.e. less than 600 mm away, the AUV executes a command to avoid the square obstacle now slightly to the right of its heading. After the second turn there is no obstacle in the AUV's field of view, so it executes a course-restoring maneuver; after the third turn it sails toward the end point on the restored course. Fig. 11 clearly shows that the surfaces of the two obstacles facing the AUV are spherical and planar, respectively. This demonstrates that the virtual sonar model adopted by this system is effective, and that the system is effective for AUV obstacle-avoidance guidance.

Claims (3)

1. A real-time binocular vision guidance method for underwater vehicles, characterized in that its stereo matching uses a pyramid normalized cross-correlation stereo matching method based on texture control.
2. The real-time binocular vision guidance method for underwater vehicles according to claim 1, characterized in that the texture-controlled pyramid normalized cross-correlation stereo matching method comprises:
(1) setting the pyramid level count k, pyramid scaling factor r, texture threshold TextureThresh, NCC threshold, image size m×n, and match window size parameters WindowHeight and WindowWidth, and initializing the disparity range upper limit d_max and lower limit d_min;
(2) according to the scaling factor r, generating two k-level image pyramids from the two original bottom-level images, starting from L = k, p = 1;
(3) computing the texture measures Texture_1s and Texture_2t (s, t = 1, 2, …, n) of the n pixels of row p at layer L of the left and right pyramids, and computing the match lookup matrix for left pixels with Texture_1s ≥ TextureThresh and right pixels with Texture_2t ≥ TextureThresh;
(4) within the disparity range (d_min, d_max), performing bidirectional matching on the pixels of row p to obtain the optimal matching pixel pairs;
(5) if p < m, setting p = p + 1 and returning to (3); otherwise setting L = L − 1, p = 1, and updating the disparity range to (r × d_min, r × d_max); if L > 0, returning to (3), otherwise exiting the loop and proceeding to (6);
(6) computing the disparity map of the entire image from the optimal matches;
(7) according to the formula Range = f·b/d, where f is the focal length, b the baseline length and d the disparity, recovering the depth map from the disparity map, i.e. obtaining the range.
3. The real-time binocular vision guidance method for underwater vehicles according to claim 1 or 2, characterized in that a virtual sonar model is used to compactly represent the target depth information finally obtained by the vision sensor, the information being expressed as depth values over a series of uniformly spaced bearings.
CNB2008100640106A 2008-02-19 2008-02-19 A real-time binocular vision guidance method for an underwater vehicle Expired - Fee Related CN100554877C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2008100640106A CN100554877C (en) 2008-02-19 2008-02-19 A real-time binocular vision guidance method for an underwater vehicle

Publications (2)

Publication Number Publication Date
CN101251379A true CN101251379A (en) 2008-08-27
CN100554877C CN100554877C (en) 2009-10-28

Family

ID=39954888

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2008100640106A Expired - Fee Related CN100554877C (en) 2008-02-19 2008-02-19 A real-time binocular vision guidance method for an underwater vehicle

Country Status (1)

Country Link
CN (1) CN100554877C (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408772B (en) * 2008-11-21 2010-09-08 哈尔滨工程大学 AUV intelligent touching-avoiding method
CN102385752A (en) * 2011-11-01 2012-03-21 清华大学深圳研究生院 Stereo matching method based on distance difference and color difference
CN104635744B (en) * 2014-12-18 2017-06-06 西北工业大学 A kind of autonomous underwater carrier Random Coupling multi-load lays method
CN104635744A (en) * 2014-12-18 2015-05-20 西北工业大学 Random coupling and multi-load laying method for autonomous underwater vehicles
CN104766312A (en) * 2015-03-27 2015-07-08 哈尔滨工程大学 Intelligent underwater robot autonomous butting method based on bi-sight-vision guiding
CN104766312B (en) * 2015-03-27 2017-11-21 哈尔滨工程大学 A kind of autonomous docking calculation of Intelligent Underwater Robot based on binocular light vision guide
CN106681352A (en) * 2015-11-06 2017-05-17 中国科学院沈阳自动化研究所 Underwater robot control method of rotatable rudder propeller
CN106681352B (en) * 2015-11-06 2019-01-25 中国科学院沈阳自动化研究所 A kind of underwater robot control method of rotatable rudder propeller
CN105890589A (en) * 2016-04-05 2016-08-24 西北工业大学 Underwater robot monocular vision positioning method
CN107421538A (en) * 2016-05-23 2017-12-01 华硕电脑股份有限公司 Navigation system and air navigation aid
CN107421538B (en) * 2016-05-23 2020-09-11 华硕电脑股份有限公司 Navigation system and navigation method
CN106529495A (en) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Obstacle detection method of aircraft and device
US10942529B2 (en) 2016-11-24 2021-03-09 Tencent Technology (Shenzhen) Company Limited Aircraft information acquisition method, apparatus and device
CN108174442A (en) * 2017-12-26 2018-06-15 河海大学常州校区 A kind of underwater works crack repair robot Sensor Network position finding and detection method
CN108174442B (en) * 2017-12-26 2020-02-21 河海大学常州校区 Sensor network positioning detection method for underwater structure crack repairing robot
CN109084724A (en) * 2018-07-06 2018-12-25 西安理工大学 A kind of deep learning barrier distance measuring method based on binocular vision
CN108996268A (en) * 2018-08-01 2018-12-14 上海主线科技有限公司 A kind of container tractor based on camera and suspension bridge are mutually located method
CN109632265A (en) * 2019-01-28 2019-04-16 上海大学 A kind of the unmanned boat water sampling device mated condition detection system and method for view-based access control model
CN109632265B (en) * 2019-01-28 2020-01-31 上海大学 System and method for detecting butting state of unmanned boat water collecting devices based on vision
CN112132958A (en) * 2020-09-23 2020-12-25 哈尔滨工程大学 Underwater environment three-dimensional reconstruction method based on binocular vision
CN112937486A (en) * 2021-03-16 2021-06-11 吉林大学 Vehicle-mounted online monitoring and driving assistance system and method for road accumulated water
CN112937486B (en) * 2021-03-16 2022-09-02 吉林大学 Vehicle-mounted online monitoring and driving assistance system and method for road accumulated water

Also Published As

Publication number Publication date
CN100554877C (en) 2009-10-28

Similar Documents

Publication Publication Date Title
CN100554877C (en) A real-time binocular vision guidance method for an underwater vehicle
CA2950791C (en) Binocular visual navigation system and method based on power robot
CN105096386B (en) A wide range of complicated urban environment geometry map automatic generation method
CN110163963B (en) Mapping device and mapping method based on SLAM
CN113658337B (en) Multi-mode odometer method based on rut lines
Aykin et al. On feature extraction and region matching for forward scan sonar imaging
Lui et al. Eye-full tower: A gpu-based variable multibaseline omnidirectional stereovision system with automatic baseline selection for outdoor mobile robot navigation
CN114782626A (en) Transformer substation scene mapping and positioning optimization method based on laser and vision fusion
Shin et al. Bundle adjustment from sonar images and SLAM application for seafloor mapping
US20240013505A1 (en) Method, system, medium, equipment and terminal for inland vessel identification and depth estimation for smart maritime
CN112734839A (en) Monocular vision SLAM initialization method for improving robustness
Concha et al. Real-time localization and dense mapping in underwater environments from a monocular sequence
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
Wang et al. Monocular visual SLAM algorithm for autonomous vessel sailing in harbor area
Snyder et al. Autonomous river navigation
Yue et al. LiDAR data enrichment using deep learning based on high-resolution image: An approach to achieve high-performance LiDAR SLAM using low-cost LiDAR
Yin et al. Study on underwater simultaneous localization and mapping based on different sensors
Ferreira et al. A real-time mosaicking algorithm using binary features for ROVs
Sergiyenko et al. Machine vision sensors
Yang et al. Image Based River Navigation System of Catamaran USV with Image Semantic Segmentation
Lv et al. Absolute scale estimation of orb-slam algorithm based on laser ranging
Aggarwal Machine vision based SelfPosition estimation of mobile robots
CN111798496B (en) Visual locking method and device
Wu et al. Research progress of obstacle detection based on monocular vision
Hou et al. Real-time Underwater 3D Reconstruction Method Based on Stereo Camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091028

Termination date: 20120219