CN112419410B - Horizontal attitude determination method based on underwater Snell window edge identification - Google Patents

Horizontal attitude determination method based on underwater Snell window edge identification

Info

Publication number
CN112419410B
Authority
CN
China
Prior art keywords
pixel
radius
snell
coordinate system
window
Prior art date
Legal status
Active
Application number
CN202011307276.6A
Other languages
Chinese (zh)
Other versions
CN112419410A (en)
Inventor
杨健
胡鹏伟
郭雷
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202011307276.6A
Publication of CN112419410A
Application granted
Publication of CN112419410B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a horizontal attitude determination method based on underwater Snell window edge identification. The Snell window is an optical phenomenon observed when viewing the sky from underwater and contains spatial information. The method extracts the center of the Snell window from an underwater sky image by image-processing means to obtain the horizontal attitude of the carrier. First, noise filtering and binarization are applied to the acquired image; then a camera lens model is introduced into the traditional Hough circle transform algorithm, and the Snell window and its center are extracted from the binary edge image; finally, since the observation vector of the window center coincides with the zenith vector in the navigation coordinate system, the horizontal attitude of the carrier is solved from this relation. The method makes full use of underwater optical information, realizes fully autonomous horizontal attitude determination, and can be used for the initial alignment of underwater vehicles and the correction of accumulated inertial navigation errors.

Description

Horizontal attitude determination method based on underwater Snell window edge identification
Technical Field
The invention relates to a horizontal attitude determination method based on underwater Snell window edge identification, which utilizes underwater sky imaging and obtains a carrier horizontal attitude by extracting the center of a Snell window in an image through improved Hough transformation, and belongs to the field of underwater optical navigation.
Background
Oceans hold abundant biological, mineral and space resources and have great development potential, so they form an important part of every country's development strategy. Unmanned underwater vehicles play an irreplaceable role in ocean development. In civil use they serve seabed topography surveying, ocean engineering construction, ocean monitoring, and rescue and salvage; in national defense they serve military tasks such as underwater surveillance, reconnaissance, mine hunting, anti-submarine warning and communication relay. They are inexpensive to build, have a high survival rate, avoid casualties, and have great development prospects. Autonomous navigation technology is the bottleneck problem restricting autonomous unmanned underwater vehicles. As the 'ears and eyes' of the unmanned vehicle, the autonomous navigation system ensures the smooth execution of underwater tasks and the safe return of the vehicle, and is a key technology in underwater unmanned systems.
Among current underwater navigation modes, underwater acoustic positioning and satellite navigation are widely applied: Chinese patents CN201410073287.0, CN 201510944118.4 and CN 201610351400.6 use underwater acoustic positioning systems, and Chinese patents CN 201010559044.X, CN 201210332022.9, CN 201810046278.0 and CN 201410032817.7 use satellite navigation. However, both are non-autonomous navigation modes. Underwater acoustic positioning requires pre-installed underwater transponders and therefore cannot be applied in remote, unfamiliar environments; satellite navigation can receive satellite signals only when the vehicle floats to the surface, which interrupts continuous operation and, particularly in military applications, severely compromises the vehicle's concealment. In Chinese patents CN 109443379 A and CN 109141475 A, inertial navigation is combined with a Doppler log to determine the initial attitude of the carrier; however, the Doppler log imposes requirements on the distance between the vehicle and the seabed, and the quality of its output velocity is strongly affected by seawater temperature, salinity and other factors, so its application scenarios and performance are limited.
Disclosure of Invention
The invention solves the problems: the method overcomes the defects of the prior art, provides a horizontal attitude determination method based on underwater Snell window edge identification, and can realize the full-autonomous determination of the horizontal attitude angle without accumulated errors in an underwater environment.
The invention autonomously provides the pitch angle and roll angle for the carrier by acquiring external optical information. The Snell window is a high-brightness circular area formed at the water-air interface by refraction when the sky is imaged from underwater. In the imaging system, the observation vector of the circle center points vertically upward from the water surface, i.e. along the zenith direction of the navigation coordinate system. Because the light inside the Snell window is refracted skylight and is brighter than the surroundings, the Snell window in a sky-facing image has a distinct edge under a flat, still water surface, so the window edge can conveniently be extracted by image-processing methods to obtain the circle center of the Snell window. Combining these two points, the invention provides a method for calculating the horizontal attitude of the carrier by extracting the circle center of the Snell window in sky-facing imaging.
Circle detection in images is usually performed with the Hough circle transform; however, because of lens distortion in the imaging system, the Snell window in the image is not a regular circle when the underwater carrier is tilted, so the conventional Hough circle transform is no longer applicable. To address this problem, the method improves on the Hough circle transform by adding a camera lens model, so that the Snell window and its circle center can be extracted accurately under camera lens distortion.
The technical scheme adopted by the invention for solving the technical problems is as follows: a horizontal attitude determination method based on underwater Snell window edge identification comprises the following implementation steps:
step (1), perform noise filtering and binarization on the acquired image, obtain an edge image from the light-intensity information in the image, and extract the coordinates [m, n] of the non-zero pixels of the edge image in the pixel coordinate system, where an arbitrary non-zero pixel is denoted p_i with coordinates [m_i, n_i], i = 1, 2, …, s, and s is the number of non-zero pixels;
step (2), calculate the value set of the included-angle radius r, where r ranges over the possible values of the angular viewing radius of the Snell window, establish a three-dimensional Hough space H(m, n, r), and set all elements in the space to zero;
step (3), using the edge image binarized in step (1), invert through the camera lens model Θ the observation vector in the world coordinate system (the w system) corresponding to each non-zero pixel p_i, establish the pixel-local coordinate system (the l_i system) with this observation vector as its z-axis, and solve the coordinate transformation matrix C_{l_i}^{w} between the l_i system and the w system;
Step (4), taking the observation vector of a non-zero pixel p_i as the axis, determine a set of cones using the included-angle radius r, the redundancy radius Δα and the radius step δ calculated in step (2); map every generatrix vector g_iq of the cones back to the pixel coordinate system and vote in the three-dimensional Hough space H(m, n, r) at the pixel coordinates corresponding to each generatrix vector g_iq, where q = 1, 2, …, ⌊2π/τ⌋ and τ is the circumferential step, to obtain the three-dimensional Hough space voted with non-zero pixel p_i as circle center;
and (5), perform the operations of step (3) and step (4) on all non-zero pixels; the pixel with the highest vote count in the three-dimensional Hough space H(m, n, r) is the circle center of the Snell window; calculate the observation vector of the center's pixel coordinates in the world coordinate system and invert the horizontal attitude.
Further, step (2) calculates the value set of the included-angle radius r, where r ranges over the possible values of the angular viewing radius of the Snell window, establishes the three-dimensional Hough space H(m, n, r) and sets all elements in the space to zero; this is specifically realized as follows:
Determine the angle between the edge and the center of the window according to Snell's law of refraction and the lens model, take this angle as the included-angle radius in the Hough circle transform, and determine the redundancy range and the radius step of the included-angle radius. Let the refractive indices of atmosphere and water be n_a and n_w respectively; then the angle α between the edge and the center of the Snell window satisfies:

sin α = n_a / n_w, i.e. α = arcsin(n_a / n_w)    (15)
Considering camera errors and refractive-index errors of the media in practical applications, the imaged size of the Snell window does not strictly follow the above theoretical model; therefore a redundancy radius Δα is set according to the actual water environment and lens model, and the value range of the included-angle radius r is [α - Δα, α + Δα]. Then, with the included-angle radius step set to δ, the value set of the included-angle radius r is:

r ∈ {α - Δα, α - Δα + δ, α - Δα + 2δ, …, α - Δα + kδ}, α - Δα + kδ ≤ α + Δα, k ∈ N+
and establishing a three-dimensional Hough space H (m, n, r), wherein (m, n) is pixel coordinates, and initial values of the three-dimensional Hough space are all set to be 0.
Further, step (3) uses the edge image binarized in step (1) to invert, through the camera lens model Θ, the observation vector in the world coordinate system (the w system) corresponding to each non-zero pixel p_i, establishes the pixel-local coordinate system (the l_i system) with the observation vector as its z-axis, and solves the coordinate transformation matrix C_{l_i}^{w} between the l_i system and the w system.
The concrete implementation is as follows:
In the pixel coordinate system, an arbitrary non-zero pixel p_i has coordinates [m_i, n_i]. Through camera lens calibration, the camera lens model Θ(p) can be obtained, and the incident-ray propagation vector κ_i in the w system corresponding to pixel p_i is expressed as:

κ_i = Θ(p_i)    (16)

κ_i is a unit vector, and the observation vector v_i in the opposite direction is represented in the w system as:

v_i^w = -κ_i

Let

v_i^w = [a_i  b_i  c_i]^T

Then:

[a_i  b_i  c_i]^T = -Θ(p_i)    (17)
With v_i as the z-axis, establish the pixel-local coordinate system l_i; define the rotation angles around the x-axis and the y-axis in the transformation between the w system and the l_i system as ω_xi and ω_yi respectively, with a rotation angle of 0 around the z-axis, yielding the coordinate transformation matrix from the l_i system to the w system:

C_{l_i}^{w} =
[ cos ω_yi               0            sin ω_yi            ]
[ sin ω_yi sin ω_xi      cos ω_xi     -cos ω_yi sin ω_xi  ]
[ -sin ω_yi cos ω_xi     sin ω_xi     cos ω_yi cos ω_xi   ]    (18)
Since the observation vector v_i is represented in the l_i system as

v_i^{l_i} = [0  0  1]^T

the following is obtained:

v_i^w = C_{l_i}^{w} [0  0  1]^T = [sin ω_yi  -cos ω_yi sin ω_xi  cos ω_yi cos ω_xi]^T    (19)
Combining equation (17), the trigonometric functions of ω_xi and ω_yi can be expressed as:

sin ω_yi = a_i,  cos ω_yi = √(1 - a_i²),  sin ω_xi = -b_i / √(1 - a_i²),  cos ω_xi = c_i / √(1 - a_i²)    (20)

Thus, for any non-zero pixel p_i on the image, the pixel-local coordinate system l_i with the corresponding observation vector v_i as its z-axis has the following coordinate transformation matrix to the camera world coordinate system:

C_{l_i}^{w} =
[ √(1 - a_i²)                 0                    a_i ]
[ -a_i b_i / √(1 - a_i²)      c_i / √(1 - a_i²)    b_i ]
[ -a_i c_i / √(1 - a_i²)      -b_i / √(1 - a_i²)   c_i ]    (21)
further, the step (4) is to use a non-zero pixel piDetermining a cone set by using the included angle radius r, the redundancy radius delta alpha and the radius stepping delta calculated in the step (2) and taking all generatrix vectors g of the cone set as an axisiReverting to the pixel coordinate system, at each generatrix vector giqVoting in a three-dimensional Hough space H (m, n, r) of corresponding pixel coordinates, wherein
Figure BDA0002788654250000045
To obtain non-zero pixel point piThe three-dimensional Hough space with the circle center is specifically realized as follows:
Establish, for an arbitrary non-zero pixel p_i, the cone whose axis is the corresponding observation vector v_i, whose half cone angle is the included-angle radius r, and whose circumferential step is τ. One generatrix vector g_iq of the cone is represented in the l_i system as:

g_iq^{l_i} = [sin r cos(qτ)  sin r sin(qτ)  cos r]^T    (22)

where q = 1, 2, …, ⌊2π/τ⌋. Then the generatrix vector g_iq in the w system can be calculated from:

g_iq^{w} = C_{l_i}^{w} g_iq^{l_i}    (23)

Then, the camera lens model is used to calculate the imaged pixel coordinates of the incident ray in the direction -g_iq^{w}:

[m_iq, n_iq] = round(Θ^{-1}(-g_iq^{w}))    (24)
where round () denotes rounding all the elements of () when point (m) is in the corresponding three-dimensional hough space H (m, n, r)iq,niqAnd r) voting once, namely accumulating the point value by 1, traversing q and r to obtain a non-zero pixel point piA three-dimensional Hough space as a circle center.
Further, in step (5), the operations of step (3) and step (4) are performed on all non-zero pixels; the pixel with the highest vote count in the three-dimensional Hough space H(m, n, r) is the circle center of the Snell window; the observation vector of the center's pixel coordinates in the world coordinate system is solved and the horizontal attitude is inverted. This is specifically realized as follows:
Perform the operations of step (3) and step (4) on all non-zero pixels in the binary edge image, voting and accumulating in the same three-dimensional Hough space H(m, n, r), and select from H(m, n, r) the point (m_0, n_0, r_0) corresponding to the maximum value, i.e. (m_0, n_0, r_0) = argmax H(m, n, r); then [m_0, n_0] are the coordinates of the Snell window center in the pixel coordinate system, and r_0 is the included-angle radius of the Snell window, i.e. the actual angle between the Snell window edge and the window center;

The observation vector ζ^w in the w system corresponding to the Snell window center pixel is calculated through the camera lens model as:

ζ^w = Θ([m_0, n_0])    (25)
The horizontal attitude of the carrier is determined by the roll angle γ and the pitch angle θ; without considering the heading angle, the coordinate transformation matrix between the carrier coordinate system (the b system) and the navigation coordinate system (the n system) is:

C_b^n =
[ cos γ            0          sin γ         ]
[ sin γ sin θ      cos θ      -cos γ sin θ  ]
[ -sin γ cos θ     sin θ      cos γ cos θ   ]    (26)
Since the camera world coordinate system (the w system) coincides completely with the carrier coordinate system (the b system), ζ^w = ζ^b; and since the Snell window center points vertically upward in the n system, ζ^n = [0  0  1]^T. From the coordinate transformation relationship:

ζ^n = C_b^n ζ^b,  i.e.  [0  0  1]^T = C_b^n ζ^w    (27)
Solving this yields ζ^w = [-cos θ sin γ  sin θ  cos θ cos γ]^T, from which the carrier pitch angle θ and roll angle γ can be calculated as:

θ = arcsin(ζ^w(2)),  γ = arctan(-ζ^w(1) / ζ^w(3))    (28)
where ζ^w(j) denotes the j-th element of vector ζ^w. The horizontal attitude is thus determined.
Compared with the prior art, the invention has the following advantages: existing underwater carrier attitude determination is mostly based on inertial navigation combined with other navigation modes, or on other navigation modes alone, and suffers from error accumulation or non-autonomy. The horizontal attitude determination method based on the Snell window solves spatial information from optical information in nature; it can acquire absolute horizontal-attitude information of the carrier fully autonomously and in real time, has no error accumulation, and provides a new idea for underwater autonomous navigation.
Drawings
FIG. 1 is a flow chart of a horizontal attitude determination method based on underwater Snell window edge identification according to the present invention;
FIG. 2 is a schematic view of a Snell window;
fig. 3 is a schematic diagram of the spatial transformation relationship of the coordinate systems according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in fig. 1, the horizontal attitude determination method based on underwater snell window edge identification of the present invention specifically includes the following steps:
Step 1, perform noise filtering and binarization on the acquired image, obtain an edge image from the light-intensity information in the image, and extract the coordinates [m, n] of the non-zero pixels of the edge image in the pixel coordinate system, where an arbitrary non-zero pixel is denoted p_i with coordinates [m_i, n_i], i = 1, 2, …, s, and s is the number of non-zero pixels.
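Step 1 leaves the choice of filter and edge operator open. As a hedged illustration only, the following Python sketch binarizes a toy image with a global threshold and takes the boundary of the bright region as the non-zero edge pixels; a real pipeline would first denoise (e.g. Gaussian filtering) and use a calibrated edge detector, neither of which is fixed by the method:

```python
import numpy as np

def edge_pixels(img, thresh):
    """Step-1 sketch: binarize a grayscale image with a global threshold
    and mark edge pixels on the boundary of the bright (Snell window)
    region. The threshold and the boundary operator are illustrative
    assumptions, not the patent's calibrated pipeline."""
    b = (img > thresh).astype(np.uint8)
    # interior = bright pixels whose four neighbours are all bright
    interior = np.zeros_like(b)
    interior[1:-1, 1:-1] = (b[1:-1, 1:-1] & b[:-2, 1:-1] & b[2:, 1:-1]
                            & b[1:-1, :-2] & b[1:-1, 2:])
    edge = b & (1 - interior)          # bright pixels with a dark neighbour
    return np.argwhere(edge > 0)       # non-zero pixel coordinates [m_i, n_i]

# toy image: a bright disc of radius 20 centred at (32, 32)
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32.0) ** 2 + (xx - 32.0) ** 2 < 400.0).astype(float)
pts = edge_pixels(img, thresh=0.5)
```

All returned coordinates lie on the rim of the disc, which is what the later Hough voting consumes.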
Step 2, calculate the value set of the included-angle radius r, where r ranges over the possible values of the angular viewing radius of the Snell window, establish the three-dimensional Hough space H(m, n, r), and set all elements in the space to zero. The included-angle radius r is initialized as follows:
As shown in FIG. 2, the atmospheric field of view, which spans approximately a hemispherical celestial dome, is compressed by refraction at the water surface into an inverted cone-shaped viewing angle with the underwater observer at its vertex, and the base of the cone is a circular region on the water surface. In the sky-facing observation image of an underwater observer, this circular region is brighter than the region outside the circle. Determine the angle between the edge and the center of the window according to Snell's law of refraction and the lens model, take this angle as the included-angle radius in the Hough circle transform, and determine the redundancy range and the radius step of the included-angle radius. Let the refractive indices of atmosphere and water be n_a and n_w respectively; then the angle α between the edge and the center of the Snell window satisfies:

sin α = n_a / n_w, i.e. α = arcsin(n_a / n_w)    (29)
Considering camera errors and refractive-index errors of the media in practical applications, the imaged size of the Snell window does not strictly follow the above theoretical model; therefore a redundancy radius Δα is set according to the actual water environment and lens model, and the value range of the included-angle radius r is [α - Δα, α + Δα]. Then, with the included-angle radius step set to δ, the value set of the included-angle radius r is:

r ∈ {α - Δα, α - Δα + δ, α - Δα + 2δ, …, α - Δα + kδ}, α - Δα + kδ ≤ α + Δα, k ∈ N+
and establishing a three-dimensional Hough space H (m, n, r), wherein (m, n) is pixel coordinates, and initial values of the three-dimensional Hough space are all set to be 0.
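The construction of the included-angle radius value set can be sketched as follows. The numerical values used here (n_a = 1.000, n_w = 1.333, Δα = 2°, δ = 0.2°) are assumptions for illustration, not values fixed by the method:

```python
import numpy as np

def angular_radius_set(n_a=1.000, n_w=1.333,
                       delta_alpha=np.deg2rad(2.0), delta=np.deg2rad(0.2)):
    """Candidate included-angle radii r for the Snell-window Hough vote.

    alpha = arcsin(n_a / n_w) is the theoretical angle between the Snell
    window edge and center (about 48.6 deg for clear water); delta_alpha
    is the redundancy radius and delta the radius step, both assumed."""
    alpha = np.arcsin(n_a / n_w)
    # r in {alpha - da, alpha - da + d, ..., up to alpha + da}
    r_set = np.arange(alpha - delta_alpha, alpha + delta_alpha + 1e-12, delta)
    return alpha, r_set

alpha, r_set = angular_radius_set()
```

For other water bodies, n_w (and hence α) would be adjusted to the actual refractive index.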
Step 3, using the edge image binarized in step 1, invert through the camera lens model Θ the observation vector in the world coordinate system (the w system) corresponding to each non-zero pixel p_i, establish the pixel-local coordinate system (the l_i system) with this observation vector as its z-axis, and solve the coordinate transformation matrix C_{l_i}^{w} between the l_i system and the w system.
The concrete implementation is as follows:
As shown in FIG. 3, in the pixel coordinate system a non-zero pixel p_i has coordinates [m_i, n_i]. Through camera lens calibration, the camera lens model Θ(p) can be obtained. The incident-ray propagation vector κ_i in the w system corresponding to pixel p_i can then be expressed as:

κ_i = Θ(p_i)    (30)

κ_i is a unit vector, and the observation vector v_i in the opposite direction is represented in the w system as:

v_i^w = -κ_i

Let

v_i^w = [a_i  b_i  c_i]^T

Then:

[a_i  b_i  c_i]^T = -Θ(p_i)    (31)
With v_i as the z-axis, establish the pixel-local coordinate system l_i; define the rotation angles around the x-axis and the y-axis in the transformation between the w system and the l_i system as ω_xi and ω_yi respectively, with a rotation angle of 0 around the z-axis. The coordinate transformation matrix C_w^{l_i} from the w system to the pixel's l_i system can then be expressed through the 3-1-2 Euler angles as:

C_w^{l_i} =
[ cos ω_yi            sin ω_yi sin ω_xi     -sin ω_yi cos ω_xi ]
[ 0                   cos ω_xi              sin ω_xi           ]
[ sin ω_yi            -cos ω_yi sin ω_xi    cos ω_yi cos ω_xi  ]    (32)

from which the coordinate transformation matrix from the l_i system to the w system is obtained:

C_{l_i}^{w} = (C_w^{l_i})^T =
[ cos ω_yi               0            sin ω_yi            ]
[ sin ω_yi sin ω_xi      cos ω_xi     -cos ω_yi sin ω_xi  ]
[ -sin ω_yi cos ω_xi     sin ω_xi     cos ω_yi cos ω_xi   ]    (33)
Since the observation vector v_i is represented in the l_i system as

v_i^{l_i} = [0  0  1]^T

the following is obtained:

v_i^w = C_{l_i}^{w} [0  0  1]^T = [sin ω_yi  -cos ω_yi sin ω_xi  cos ω_yi cos ω_xi]^T    (34)
Combining equation (31), the trigonometric functions of ω_xi and ω_yi can be expressed as:

sin ω_yi = a_i,  cos ω_yi = √(1 - a_i²),  sin ω_xi = -b_i / √(1 - a_i²),  cos ω_xi = c_i / √(1 - a_i²)    (35)

Thus, for any non-zero pixel p_i on the image, the pixel-local coordinate system l_i with the corresponding observation vector v_i as its z-axis has the following coordinate transformation matrix to the camera world coordinate system:

C_{l_i}^{w} =
[ √(1 - a_i²)                 0                    a_i ]
[ -a_i b_i / √(1 - a_i²)      c_i / √(1 - a_i²)    b_i ]
[ -a_i c_i / √(1 - a_i²)      -b_i / √(1 - a_i²)   c_i ]    (36)
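The construction of the coordinate transformation matrix C_{l_i}^{w} from the observation vector v_i^w = [a_i, b_i, c_i]^T (equations (30)-(31) and the Euler-angle steps above) can be condensed into a short routine. This is a sketch of one consistent sign convention, assuming |a_i| < 1 so that cos ω_yi ≠ 0; the defining property to check is that the result is a rotation whose z-axis maps onto v_i:

```python
import numpy as np

def frame_from_observation(v):
    """Build C_{l_i}^{w}: a rotation matrix whose z-axis image equals the
    unit observation vector v = [a, b, c]. Sign conventions here are one
    consistent reconstruction, assumed rather than taken from the patent
    figures; requires |a| < 1 so cos(omega_y) != 0."""
    a, b, c = v
    cy = np.sqrt(1.0 - a * a)           # cos(omega_yi)
    sy = a                               # sin(omega_yi)
    sx, cx = -b / cy, c / cy             # sin/cos(omega_xi)
    # C_w^{l_i} = R_y(omega_y) @ R_x(omega_x); C_{l_i}^{w} is its transpose
    C_w_l = np.array([[cy,  sy * sx, -sy * cx],
                      [0.0, cx,       sx],
                      [sy, -cy * sx,  cy * cx]])
    return C_w_l.T

v = np.array([0.2, -0.3, np.sqrt(1 - 0.2**2 - 0.3**2)])  # unit vector
C = frame_from_observation(v)
```

The checks below confirm C is orthonormal with determinant +1 and that C [0 0 1]^T recovers v, which is exactly the role C_{l_i}^{w} plays in the cone voting of step 4.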
step 4, a non-zero pixel p is usediDetermining a cone set by using the included angle radius r, the redundancy radius delta alpha and the radius stepping delta calculated in the step (2) and taking all generatrix vectors g of the cone set as an axisiReverting to the pixel coordinate system, at each generatrix vector giqVoting in a three-dimensional Hough space H (m, n, r) of corresponding pixel coordinates, wherein
Figure BDA0002788654250000085
To obtain non-zero pixel point piThe three-dimensional Hough space with the circle center is specifically realized as follows:
Establish the cone whose axis is the observation vector v_i corresponding to non-zero pixel p_i and whose half cone angle is the included-angle radius r, with circumferential step τ. One generatrix vector g_iq of the cone is represented in the l_i system as:

g_iq^{l_i} = [sin r cos(qτ)  sin r sin(qτ)  cos r]^T    (37)

where q = 1, 2, …, ⌊2π/τ⌋. Then the generatrix vector g_iq in the w system can be calculated from:

g_iq^{w} = C_{l_i}^{w} g_iq^{l_i}    (38)

Then, the camera lens model is used to calculate the imaged pixel coordinates of the incident ray in the direction -g_iq^{w}:

[m_iq, n_iq] = round(Θ^{-1}(-g_iq^{w}))    (39)
where round () denotes rounding all the elements of () to the whole. At this time, the midpoint (m) in the corresponding three-dimensional Hough space H (m, n, r)iq,niqAnd r) voting once, namely accumulating 1 by the point value. Traversing q and r to obtain non-zero pixel point piA three-dimensional Hough space as a circle center.
Step 5, perform the operations of step 3 and step 4 on all non-zero pixels, take the pixel with the highest vote count in the three-dimensional Hough space H(m, n, r) as the circle center of the Snell window, calculate the observation vector of the center's pixel coordinates in the world coordinate system, and invert the horizontal attitude. This is specifically realized as follows:
Perform the operations of step 3 and step 4 on all non-zero pixels in the binary edge image, voting and accumulating in the same three-dimensional Hough space H(m, n, r). The circles centered on the non-zero pixels of the Snell window edge then intersect at the Snell window center, i.e. the center coordinates of the Snell window receive the most votes in the three-dimensional Hough space. Therefore, select from H(m, n, r) the point (m_0, n_0, r_0) corresponding to the maximum value, i.e. (m_0, n_0, r_0) = argmax H(m, n, r). Then [m_0, n_0] are the coordinates of the Snell window center in the pixel coordinate system, and r_0 is the included-angle radius of the Snell window, i.e. the actual angle between the Snell window edge and the window center.

The observation vector ζ^w in the w system corresponding to the Snell window center pixel can be calculated through the camera lens model as:

ζ^w = Θ([m_0, n_0])    (40)
The horizontal attitude of the carrier is determined by the roll angle γ and the pitch angle θ. Without considering the heading angle, the coordinate transformation matrix between the carrier coordinate system (b system) and the navigation coordinate system (n system) is:

C_b^n =
[ cos γ            0          sin γ         ]
[ sin γ sin θ      cos θ      -cos γ sin θ  ]
[ -sin γ cos θ     sin θ      cos γ cos θ   ]    (41)
Since the world coordinate system (w system) of the camera and the carrier coordinate system (b system) completely coincide, ζ^w = ζ^b. Since the Snell window center points vertically upward in the n system, ζ^n = [0  0  1]^T. From the coordinate transformation relationship:

ζ^n = C_b^n ζ^b,  i.e.  [0  0  1]^T = C_b^n ζ^w    (42)
Solving this yields ζ^w = [-cos θ sin γ  sin θ  cos θ cos γ]^T, from which the carrier pitch angle θ and roll angle γ can be calculated as:

θ = arcsin(ζ^w(2)),  γ = arctan(-ζ^w(1) / ζ^w(3))    (43)
where ζ^w(j) denotes the j-th element of vector ζ^w. The horizontal attitude is thus determined.
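The attitude inversion from the zenith observation vector ζ^w (equation (40) and the relations above) reduces to two lines. This sketch uses arctan2 instead of a plain arctangent for quadrant safety, a minor implementation choice not specified in the text, and verifies the round trip against a synthetic attitude:

```python
import numpy as np

def attitude_from_zenith(zeta_w):
    """Pitch theta and roll gamma from the zenith observation vector
    zeta_w = [-cos(theta)sin(gamma), sin(theta), cos(theta)cos(gamma)];
    heading is unobservable from the zenith direction alone."""
    theta = np.arcsin(zeta_w[1])
    gamma = np.arctan2(-zeta_w[0], zeta_w[2])
    return theta, gamma

# round-trip check with an arbitrary synthetic attitude
theta0, gamma0 = np.deg2rad(5.0), np.deg2rad(-12.0)
zeta = np.array([-np.cos(theta0) * np.sin(gamma0),
                 np.sin(theta0),
                 np.cos(theta0) * np.cos(gamma0)])
theta, gamma = attitude_from_zenith(zeta)
```

Because only the zenith direction is observed, heading must come from another sensor; this matches the method's claim of providing pitch and roll only.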
Although illustrative embodiments of the present invention have been described above to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited in scope to these embodiments; various changes will be apparent to those skilled in the art, and all inventions utilizing the inventive concepts set forth herein are intended to be protected, provided they do not depart from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A horizontal attitude determination method based on underwater Snell window edge identification is characterized by comprising the following steps:
step (1), perform noise filtering and binarization on the acquired image, obtain an edge image from the light-intensity information in the image, and extract the coordinates [m, n] of the non-zero pixels of the edge image in the pixel coordinate system, where an arbitrary non-zero pixel is denoted p_i with coordinates [m_i, n_i], i = 1, 2, …, s, and s is the number of non-zero pixels;
step (2), calculate the value set of the included-angle radius r, where r ranges over the possible values of the angular viewing radius of the Snell window, establish a three-dimensional Hough space H(m, n, r), and set all elements in the space to zero;
step (3), using the edge image binarized in step (1), invert through the camera lens model Θ the observation vector in the world coordinate system (the w system) corresponding to each non-zero pixel p_i, establish the pixel-local coordinate system (the l_i system) with this observation vector as its z-axis, and solve the coordinate transformation matrix C_{l_i}^{w} between the l_i system and the w system;
Step (4), taking the observation vector of a non-zero pixel p_i as the axis, determine a set of cones using the included-angle radius r, the redundancy radius Δα and the radius step δ calculated in step (2); map every generatrix vector g_iq of the cones back to the pixel coordinate system and vote in the three-dimensional Hough space H(m, n, r) at the pixel coordinates corresponding to each generatrix vector g_iq, where q = 1, 2, …, ⌊2π/τ⌋, to obtain the three-dimensional Hough space voted with non-zero pixel p_i as circle center, where τ is the circumferential step;
step (5), performing the operations of step (3) and step (4) on all non-zero pixels; the pixel point with the highest vote count in the three-dimensional Hough space H(m, n, r) is the center of the Snell window; calculating the observation vector in the world coordinate system for the pixel coordinates of the center, and inverting the horizontal attitude; this is specifically implemented as follows:
performing the operations of step (3) and step (4) on all non-zero pixel points in the binarized edge image, voting cumulatively in the same three-dimensional Hough space H(m, n, r), and selecting from H(m, n, r) the point (m_0, n_0, r_0) corresponding to the maximum value, i.e. (m_0, n_0, r_0) = argmax H(m, n, r); then [m_0, n_0] is the coordinate of the center of the Snell window in the pixel coordinate system, and r_0 is the included-angle radius of the Snell window, i.e. the actual angle between the Snell window edge and the window center;
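The argmax selection over the accumulator is a one-liner; a sketch, where the mapping `r_values` from the radius index back to an angle is assumed to come from step (2):

```python
import numpy as np

def snell_center(H, r_values):
    # Highest-vote cell of the accumulator H[m, n, k] gives the window
    # centre [m0, n0]; r_values maps the radius index k back to an angle.
    m0, n0, k0 = np.unravel_index(np.argmax(H), H.shape)
    return m0, n0, r_values[k0]
```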
the observation vector ζ_w in the w system corresponding to the center pixel of the Snell window is calculated through the camera lens model as:
ζ_w = Θ([m_0, n_0])    (11)
the horizontal attitude of the carrier is determined by the roll angle γ and the pitch angle θ; without considering the heading angle, the coordinate transformation matrix between the carrier coordinate system, i.e. the b system, and the navigation coordinate system, i.e. the n system, is:

C_b^n = [ cos γ          0        sin γ
          sin θ sin γ    cos θ    −sin θ cos γ
          −cos θ sin γ   sin θ    cos θ cos γ ]
the w system, as the world coordinate system of the camera, and the b system, as the carrier coordinate system, coincide completely, so ζ_w = ζ_b; the center of the Snell window points vertically upward in the n system, so ζ_n = [0 0 1]^T; according to the coordinate transformation relationship:

ζ_w = ζ_b = (C_b^n)^T ζ_n
solving the above formula yields ζ_w = [−cos θ sin γ   sin θ   cos θ cos γ]^T, from which the carrier pitch angle θ and roll angle γ can be calculated as:

θ = arcsin(ζ_w(2)),   γ = arctan(−ζ_w(1) / ζ_w(3))

where ζ_w(·) denotes the ·-th element of the vector ζ_w; the horizontal attitude is thereby determined.
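The attitude inversion from ζ_w = [−cos θ sin γ, sin θ, cos θ cos γ]^T reduces to two lines. A sketch, assuming ζ_w is a unit vector and using `arctan2` for quadrant safety:

```python
import numpy as np

def attitude_from_zenith(zeta_w):
    # zeta_w = [-cos(theta)sin(gamma), sin(theta), cos(theta)cos(gamma)]^T
    theta = np.arcsin(zeta_w[1])               # second element = sin(theta)
    gamma = np.arctan2(-zeta_w[0], zeta_w[2])  # tan(gamma) = -z1/z3
    return theta, gamma
```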
2. The horizontal attitude determination method based on underwater Snell window edge identification as claimed in claim 1, wherein:
the value set of the included-angle radius r, where r ranges over the possible viewing-angle radii of the Snell window, is calculated, the three-dimensional Hough space H(m, n, r) is established, and all elements of the space are set to zero; this is specifically implemented as follows:
the angle between the edge and the center of the Snell window is determined according to Snell's law of refraction and the lens model, this angle is taken as the included-angle radius of the Hough circle transform, and the redundancy range and the step of the included-angle radius are determined; the refractive indices of the atmosphere and of water are n_a and n_w respectively, and the angle α between the edge and the center of the Snell window satisfies:

sin α = n_a / n_w,   i.e. α = arcsin(n_a / n_w)
considering the refractive-index errors of the camera and the medium in practical applications, the imaged size of the Snell window may not strictly follow the above theoretical model; therefore the redundancy radius Δα is set according to the actual water environment and the lens model, the value range of the included-angle radius r is [α − Δα, α + Δα], and the included-angle radius step is set to δ, giving the value set of r as:

r ∈ {α − Δα, α − Δα + δ, α − Δα + 2δ, …, α − Δα + kδ}, where α − Δα + kδ ≤ α + Δα and k ∈ N+; the three-dimensional Hough space H(m, n, r) is thus established, where (m, n) are pixel coordinates, and its initial values are set to 0.
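The radius value set of claim 2 can be sketched numerically. The nominal indices n_a ≈ 1.000 and n_w ≈ 1.333 and the sample Δα and δ are assumptions; the patent ties these to the actual water body and lens:

```python
import numpy as np

def radius_value_set(n_a=1.000, n_w=1.333, d_alpha=2.0, delta=0.5):
    # Critical angle of the Snell window: sin(alpha) = n_a / n_w.
    alpha = np.degrees(np.arcsin(n_a / n_w))
    # Largest k with alpha - d_alpha + k*delta <= alpha + d_alpha.
    k = int(np.floor(2.0 * d_alpha / delta))
    return alpha, alpha - d_alpha + delta * np.arange(k + 1)
```

With these nominal indices the critical angle comes out near 48.6°, consistent with the familiar ~97° full width of the underwater Snell window.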
3. The horizontal attitude determination method based on underwater Snell window edge identification as claimed in claim 1, wherein:
in step (3), each non-zero pixel p_i in the edge image binarized in step (1) is inverted through the camera lens model Θ to obtain its observation vector in the world coordinate system, i.e. the w system; the pixel local coordinate system, i.e. the l_i system, is established with this observation vector as the z axis, and the coordinate transformation matrix C_{l_i}^w between the l_i system and the w system corresponding to the non-zero pixel p_i is solved; this is specifically implemented as follows:
in the pixel coordinate system, an arbitrary non-zero pixel p_i has coordinates [m_i, n_i]; through camera lens calibration the camera lens model Θ(p) can be obtained, and the incident-ray propagation vector κ_i in the w system corresponding to the pixel point p_i is expressed as:

κ_i = Θ(p_i)    (2)

κ_i is a unit vector, and the observation vector v_i is represented in the w system as v_i = −κ_i; let v_i = [a_i b_i c_i]^T, then:

[a_i b_i c_i]^T = −Θ(p_i)    (3)
with v_i as the z axis, the pixel local coordinate system, i.e. the l_i system, is established; in the transformation between the w system and the l_i system, the rotation angles about the x axis and the y axis are defined as ω_xi and ω_yi respectively, and the rotation angle about the z axis is 0, yielding the coordinate transformation matrix from the l_i system to the w system:

C_{l_i}^w = [ cos ω_yi    sin ω_xi sin ω_yi    cos ω_xi sin ω_yi
              0           cos ω_xi             −sin ω_xi
              −sin ω_yi   sin ω_xi cos ω_yi    cos ω_xi cos ω_yi ]
the observation vector v_i is represented in the l_i system as v_i^{l_i} = [0 0 1]^T, obtaining:

[a_i b_i c_i]^T = C_{l_i}^w [0 0 1]^T = [cos ω_xi sin ω_yi   −sin ω_xi   cos ω_xi cos ω_yi]^T

combined with formula (3), the trigonometric functions of ω_xi and ω_yi can be expressed as:

sin ω_xi = −b_i,   cos ω_xi = √(1 − b_i²),   sin ω_yi = a_i / √(1 − b_i²),   cos ω_yi = c_i / √(1 − b_i²)
thus, for any non-zero pixel point p_i on the image, the pixel local coordinate system l_i, with the corresponding observation vector v_i as its z axis, has the coordinate transformation matrix C_{l_i}^w given above with respect to the world coordinate system of the camera.
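Claim 3's construction can be sketched as follows. The x-then-y Euler order is one consistent choice (the patent's equation images are not reproduced in the text); the only property relied on is that the resulting rotation carries the l_i z axis onto the unit observation vector:

```python
import numpy as np

def local_frame_rotation(v):
    # Build C_{l_i}^w from rotations about x and y only (no z rotation)
    # so that its third column equals the unit observation vector v.
    a, b, c = v
    wx = -np.arcsin(b)       # sin(w_x) = -b
    wy = np.arctan2(a, c)    # tan(w_y) = a / c
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(wx), -np.sin(wx)],
                   [0, np.sin(wx),  np.cos(wx)]])
    Ry = np.array([[ np.cos(wy), 0, np.sin(wy)],
                   [ 0,          1, 0],
                   [-np.sin(wy), 0, np.cos(wy)]])
    return Ry @ Rx
```

Because a² + c² = 1 − b² for a unit vector, the third column of `Ry @ Rx` reproduces [a, b, c] exactly.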
4. The horizontal attitude determination method based on underwater Snell window edge identification as claimed in claim 3, wherein:
in step (4), the observation vector of the non-zero pixel p_i is taken as an axis, a set of cones is determined from the included-angle radius r, the redundancy radius Δα and the radius step δ calculated in step (2), all generatrix vectors g_iq of the cones are restored to the pixel coordinate system, and a vote is cast in the three-dimensional Hough space H(m, n, r) at the pixel coordinates corresponding to each generatrix vector g_iq, where q = 1, 2, …, ⌈2π/τ⌉, so as to obtain the three-dimensional Hough space centered on the non-zero pixel point p_i; this is specifically implemented as follows:
for an arbitrary non-zero pixel point p_i, a cone is established with the corresponding observation vector v_i as its axis and the included-angle radius r as its half-angle; the circumferential step is τ, and one generatrix vector g_iq of the cone is represented in the l_i system as:

g_iq^{l_i} = [sin r cos(qτ)   sin r sin(qτ)   cos r]^T,   q = 1, 2, …, ⌈2π/τ⌉
Then the generatrix vector giqCan be calculated from the following formula under the w system:
Figure FDA0003222203420000044
then the camera lens model is used to calculate the imaging pixel coordinates on the image of the incident ray in the direction −g_iq^w:

[m_iq, n_iq] = round(Θ^{−1}(−g_iq^w))

where round(·) denotes rounding all elements of its argument; a vote is then cast at the point (m_iq, n_iq, r) in the corresponding three-dimensional Hough space H(m, n, r), i.e. the value at (m_iq, n_iq, r) is accumulated by 1; q and r are traversed to obtain the three-dimensional Hough space centered on the non-zero pixel point p_i.
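The cone voting of claim 4 can be sketched end to end. Here `theta_inv`, the mapping from a world-frame observation direction to pixel coordinates, is an assumed stand-in for the inverse of the calibrated lens model Θ, and the degree-based parameterization is likewise an illustrative choice:

```python
import numpy as np

def vote_cone(H, C_lw, r_values_deg, tau_deg, theta_inv):
    # For every candidate radius r and every circumferential step q*tau:
    # build the cone generatrix in the l_i frame, restore it to the w
    # frame through C_{l_i}^w, project it to a pixel, and cast one vote.
    M, N, _ = H.shape
    for k, r_deg in enumerate(r_values_deg):
        r = np.radians(r_deg)
        for q in range(int(round(360.0 / tau_deg))):
            phi = np.radians(q * tau_deg)
            g_l = np.array([np.sin(r) * np.cos(phi),   # generatrix at
                            np.sin(r) * np.sin(phi),   # half-angle r
                            np.cos(r)])
            g_w = C_lw @ g_l                           # back to the w frame
            m, n = np.round(theta_inv(g_w)).astype(int)
            if 0 <= m < M and 0 <= n < N:
                H[m, n, k] += 1
    return H
```

With a toy equidistant-fisheye `theta_inv` and C_{l_i}^w = I, the votes for one pixel trace a circle in the accumulator, as the claim describes.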
CN202011307276.6A 2020-11-20 2020-11-20 Horizontal attitude determination method based on underwater Snell window edge identification Active CN112419410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011307276.6A CN112419410B (en) 2020-11-20 2020-11-20 Horizontal attitude determination method based on underwater Snell window edge identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011307276.6A CN112419410B (en) 2020-11-20 2020-11-20 Horizontal attitude determination method based on underwater Snell window edge identification

Publications (2)

Publication Number Publication Date
CN112419410A CN112419410A (en) 2021-02-26
CN112419410B true CN112419410B (en) 2021-10-19

Family

ID=74774315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011307276.6A Active CN112419410B (en) 2020-11-20 2020-11-20 Horizontal attitude determination method based on underwater Snell window edge identification

Country Status (1)

Country Link
CN (1) CN112419410B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114877898B (en) * 2022-07-11 2022-10-18 北京航空航天大学 Sun dynamic tracking method based on underwater polarization attitude and refraction coupling inversion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106225668B (en) * 2016-07-27 2018-11-09 大连理工大学 Wind-tunnel missile high speed pose measuring method based on more refraction models
CN107767420B (en) * 2017-08-16 2021-07-23 华中科技大学无锡研究院 Calibration method of underwater stereoscopic vision system
CN108387206B (en) * 2018-01-23 2020-03-17 北京航空航天大学 Carrier three-dimensional attitude acquisition method based on horizon and polarized light
CN108535715A (en) * 2018-04-12 2018-09-14 西安应用光学研究所 A kind of seen suitable for airborne photoelectric takes aim at object localization method under the atmospheric refraction of system
SE542553C2 (en) * 2018-12-17 2020-06-02 Tobii Ab Gaze tracking via tracing of light paths
CN111220150B (en) * 2019-12-09 2021-09-14 北京航空航天大学 Sun vector calculation method based on underwater polarization distribution mode

Also Published As

Publication number Publication date
CN112419410A (en) 2021-02-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant