CN113403942A - Label-assisted bridge detection unmanned aerial vehicle visual navigation method - Google Patents

Label-assisted bridge detection unmanned aerial vehicle visual navigation method Download PDF

Info

Publication number
CN113403942A
CN113403942A CN202110767675.9A CN202110767675A CN113403942A CN 113403942 A CN113403942 A CN 113403942A CN 202110767675 A CN202110767675 A CN 202110767675A CN 113403942 A CN113403942 A CN 113403942A
Authority
CN
China
Prior art keywords
coordinate system
unmanned aerial
aerial vehicle
matrix
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110767675.9A
Other languages
Chinese (zh)
Other versions
CN113403942B (en
Inventor
张夷斋
杨奇磊
黄攀峰
张帆
刘正雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110767675.9A priority Critical patent/CN113403942B/en
Publication of CN113403942A publication Critical patent/CN113403942A/en
Application granted granted Critical
Publication of CN113403942B publication Critical patent/CN113403942B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • EFIXED CONSTRUCTIONS
    • E01CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01DCONSTRUCTION OF BRIDGES, ELEVATED ROADWAYS OR VIADUCTS; ASSEMBLY OF BRIDGES
    • E01D19/00Structural or constructional details of bridges
    • E01D19/10Railings; Protectors against smoke or gases, e.g. of locomotives; Maintenance travellers; Fastening of pipes or cables to bridges
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00Type of UAV
    • B64U10/10Rotorcrafts
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06037Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding

Abstract

The invention discloses a label-assisted bridge detection unmanned aerial vehicle visual navigation method, which comprises the following steps of arranging a plurality of positioning two-dimensional code labels on a bridge to be detected at intervals along the flight route of an unmanned aerial vehicle; continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label; determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination; combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; replacing second position information of the unmanned aerial vehicle obtained through VIO with the first position information, and continuing unmanned aerial vehicle navigation; the invention can eliminate the drift error generated by the VIO algorithm, improve the positioning precision and reduce the workload of back-end optimization.

Description

Label-assisted bridge detection unmanned aerial vehicle visual navigation method
Technical Field
The invention belongs to the technical field of large-span bridge detection, and particularly relates to a label-assisted Unmanned Aerial Vehicle (UAV) visual navigation method for bridge detection.
Background
With the development of innovation, national economy is rapidly developed, the number of bridges in China is greatly increased, and the safety detection of the bridges becomes a problem which needs to be considered. When the manual detection means is applied to a bridge with high altitude, deep water, wide width and complex structure, the outstanding engineering problems of high detection difficulty, low efficiency, large blind area, safety and the like are faced, and the bridge detection difficulty lies in the detection of inaccessible areas under the bridge. Therefore, the method with development prospect at present adopts the unmanned aerial vehicle to carry out bridge detection.
At present, most of the navigation technologies adopted by unmanned aerial vehicles are GPS satellite navigation, inertial navigation or combined navigation, and the like. Such as strapdown inertial navigation, laser radar and inertial navigation fusion, etc. For bridge detection, due to complex environment and weak signals in the area under the bridge, the navigation method is not suitable for being adopted, the Visual navigation method is mainly based on SLAM (Simultaneous localization and mapping, instant positioning and map reconstruction), in order to make up for the defects of uncertain scale of SLAM and the like, an IMU (inertial navigation Unit) is added on the basis of SLAM at present, and the robustness is improved, namely, the current mainstream navigation method based on VIO (Visual-inertial odometer).
The VIO research is relatively mature, but the algorithm frame comprises Kalman filtering, pre-integration, a Gauss-Newton method and the like, so that the algorithm complexity is high, and the resource occupation is large.
Disclosure of Invention
The invention aims to provide a label-assisted Unmanned Aerial Vehicle (UAV) visual navigation method for bridge detection, which improves navigation precision, reduces calculated amount and improves large-span bridge detection efficiency.
The invention adopts the following technical scheme: a tag-assisted bridge detection unmanned aerial vehicle visual navigation method comprises the following steps:
arranging a plurality of positioning two-dimensional code labels on the bridge to be detected at intervals along the flight path of the unmanned aerial vehicle;
continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label;
determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination;
combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera;
and replacing the second position information of the unmanned aerial vehicle obtained by the VIO with the first position information, and continuing to perform unmanned aerial vehicle navigation.
Further, before the unmanned aerial vehicle takes off, coordinate values corresponding to the plurality of positioning two-dimensional code labels are combined and stored in storage equipment of the unmanned aerial vehicle;
and the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label.
Further, determining a first conversion relation matrix between the two-dimensional code tag and the camera according to the coordinate value combination comprises:
the coordinate value combination is brought into a preset relational expression, and eight equations can be obtained;
the relation is as follows:
Figure BDA0003152493450000021
wherein f is the focal length of the camera; f. ofx=1/dx,dxIs the real physical scale of the unit pixel on the u axis in the image coordinate system; f. ofy=1/dy,dyIs the real physical scale of the unit pixel on the v-axis in the image coordinate system; r3×3Is a rotation matrix of the world coordinate system to the camera coordinate system,
Figure BDA0003152493450000031
(u0,v0) Coordinates of a central point of the image in an image coordinate system; coordinates of a point in pixel coordinates are represented by (u, v); with (x)w,yw,zw) Coordinates representing points in a world coordinate system; by T3×1Is a translation matrix from the world coordinate system to the camera coordinate system,
Figure BDA0003152493450000032
solving a rotation matrix R by combining eight equations3×3And translation matrix T3×1
Combined rotation matrix R3×3And translation matrix T3×1Obtaining a first transformation relation matrix
Figure BDA0003152493450000033
Wherein, T1Is a first transformation relation matrix.
Further, the relation is obtained by the following method:
determining the conversion relation from image coordinate system to pixel coordinate system
Figure BDA0003152493450000034
And a transformation matrix
Figure BDA0003152493450000035
Wherein, (x, y) is the coordinate of the midpoint in the image coordinate system;
determining a transformation matrix from a camera coordinate system to an image coordinate system
Figure BDA0003152493450000036
Wherein (x)c,yc,zc) As coordinates of a point in the camera coordinate system, ZcIs a scale transformation factor;
determining a transformation matrix from a world coordinate system to a camera coordinate system
Figure BDA0003152493450000041
Determining a transformation matrix from a world coordinate system to a pixel coordinate system
Figure BDA0003152493450000042
And expanding and simplifying a conversion matrix from the world coordinate system to the pixel coordinate system to obtain a relational expression.
Further, expanding and simplifying the transformation matrix from the world coordinate system to the pixel coordinate system comprises:
expanding a conversion matrix from a world coordinate system to a pixel coordinate system to obtain:
Figure BDA0003152493450000043
simplifying the transformation matrix from the expanded world coordinate system to the pixel coordinate system can obtain:
Figure BDA0003152493450000044
further, the distance between two adjacent positioning two-dimensional code labels is 10-20 m.
Furthermore, the positioning two-dimensional code label position rectangle has an area size of 0.1-0.3 m2
The invention has the beneficial effects that: the invention provides a navigation method of a bridge detection unmanned aerial vehicle for correcting VIO drift error based on label assistance for a specific scene, namely, in long-span bridge detection, a bridge detection area is provided with labels according to a specified method, in a label observable area, the drift error generated by a VIO algorithm can be eliminated by fusing VIO calculation data and label calculation data in real time, so that the precision of position data is improved, the positioning precision is improved, and the workload of back-end optimization (error processing) is reduced.
Drawings
Fig. 1 is a schematic flow chart of a tag-assisted bridge inspection unmanned aerial vehicle visual navigation method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a transformation relationship between coordinate systems according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a two-dimensional code tag and its corner coordinates for positioning in an embodiment of the present invention;
fig. 4 is a schematic view of an application scenario according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention discloses a label-assisted bridge detection unmanned aerial vehicle visual navigation method, which comprises the following steps of:
s10, arranging a plurality of positioning two-dimensional code labels on the bridge to be detected at intervals along the flight path of the unmanned aerial vehicle; s20, continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label; step S30, determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination; step S40, combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera; and step S50, replacing the second position information of the unmanned aerial vehicle obtained by the VIO with the first position information, and continuing to perform unmanned aerial vehicle navigation.
The method is a bridge detection unmanned aerial vehicle navigation method for correcting VIO drift error based on label assistance in specific scenes, namely in long-span bridge detection, labels are arranged in a bridge detection area according to a specified method, and in a label observable area, the drift error generated by a VIO algorithm can be eliminated by fusing VIO calculation data and label calculation data in real time, so that the precision of position data is improved, the positioning precision is improved, and the workload of back-end optimization (error processing) is reduced.
As shown in fig. 4, the method of the embodiment is mainly applied to a situation that an inaccessible area is detected in a bridge, an unmanned aerial vehicle flies along the length direction of the bridge, image information of the bridge is acquired through photographing or photographing equipment carried by the unmanned aerial vehicle, and the state of the bridge is analyzed according to the image information at a later stage.
In the embodiment of the present invention, first, a mathematical model based on the VIO method needs to be established, including a motion equation and an observation equation. Specifically, the pose of the automatic mobile unmanned aerial vehicle is represented by x, the time is discretized, and the pose at each moment is recorded as x1、x2……xkThen pose can be expressed as xk=[pk vk qk ak bk]Wherein p iskPosition of the autonomous mobile drone, vkRepresenting the speed of the autonomous moving drone, qkRepresenting the attitude of the drone, akRepresents acceleration, bkRepresenting the bias of the gyroscope.
The connected positions and postures at all times are obtained to be the track of the autonomous mobile unmanned aerial vehicle, and can be expressed by the following equation (namely a motion equation): x is the number ofk+1=f(xk,o,wk) Where o is the reading of the sensors (visual and inertial) in the motion of the drone, wkIs ambient noise.
Suppose there are n landmark points in the map, with y1、y2……ynExpressed, then the observation equation is zkj=h(yj,xk,Vkj) Wherein V iskjIs the observation noise, zkjAre observed data.
After the mathematical model based on the VIO method is established, the construction and transformation relationship of each coordinate system also needs to be explained. Defining a pixel coordinate system o-uv, an image coordinate system o-xy, and a camera coordinate system o-xcyczcWorld coordinate system o-xwywzwAnd finishing the derivation of the coordinate transformation relation.
The pixel coordinate system is defined to be a two-dimensional rectangular coordinate system, the origin is located at the upper left corner of the image, and the two axes are respectively parallel to the two vertical sides of the image. The origin of the image coordinate system is the intersection point of the optical axis of the camera and the phase plane, the center of the image is taken, and the two axes are respectively parallel to the two axes of the pixel coordinate system. The camera coordinate system is a three-dimensional rectangular coordinate system, the origin is located at the optical center of the lens, the x axis and the y axis are respectively parallel to two sides of the phase plane, and the z axis is the optical axis of the lens and is vertical to the image plane. The world coordinate system is also the spatial coordinate system described by the real position in space.
The following also needs to perform the combing of the coordinate transformation of each coordinate system.
First, determining the transformation relationship from image coordinate system to pixel coordinate system
Figure BDA0003152493450000071
And a transformation matrix
Figure BDA0003152493450000072
Wherein, (x, y) is the coordinates of the point in the image coordinate system.
Secondly, determining a transformation matrix from a camera coordinate system to an image coordinate system according to a pinhole imaging principle and combining a triangle similarity principle
Figure BDA0003152493450000073
Wherein (x)c,yc,zc) As coordinates of a point in the camera coordinate system, ZcIs a scale transformation factor.
Thirdly, determining a transformation matrix from the world coordinate system to the camera coordinate system
Figure BDA0003152493450000074
Through the three transformation matrixes, the transformation matrix from the world coordinate system to the pixel coordinate system can be deduced and determined
Figure BDA0003152493450000075
And expanding the above formula and simplifying a conversion matrix from the world coordinate system to the pixel coordinate system to obtain a relational expression. The method specifically comprises the following steps:
expanding a conversion matrix from a world coordinate system to a pixel coordinate system to obtain:
Figure BDA0003152493450000081
simplifying the transformation matrix from the expanded world coordinate system to the pixel coordinate system can obtain:
Figure BDA0003152493450000082
wherein f is the focal length of the camera; f. ofx=1/dx,dxIs the real physical scale of the unit pixel on the u axis in the image coordinate system; f. ofy=1/dy,dyIs the real physical scale of the unit pixel on the v-axis in the image coordinate system; r3×3Is a rotation matrix of the world coordinate system to the camera coordinate system,
Figure BDA0003152493450000083
(u0,v0) Coordinates of a central point of the image in an image coordinate system; the coordinates of the point in pixel coordinates are represented by (u, v); with (x)w,yw,zw) Coordinates representing points in a world coordinate system; by T3×1Is a translation matrix from the world coordinate system to the camera coordinate system,
Figure BDA0003152493450000084
in the embodiment of the invention, the positioning two-dimensional code label (namely, the label) has the following characteristics:
the label needs to have definite shape characteristics, and the shape of the label can be selected to be square or rectangular in the embodiment of the invention; the color can be distinguished from that of the bridge, and red, yellow, green and the like can be selected; for convenient identification, the size of the label is not too small, and for cost control, the size of the label is selected to be 0.1-0.3 m2. The number of the labels is determined according to the detection length, and is generally l/s (l is the detection length, s is the correction distance, and s is generally 10-20 m). The label adopts a two-dimensional code format, records coordinates of four vertexes of the label corresponding to the accurate installation position, and can be identified through two-dimensional code identification software.
In addition, before the unmanned aerial vehicle takes off, the coordinate value combinations corresponding to the plurality of positioning two-dimensional code tags need to be stored in the storage device of the unmanned aerial vehicle, so that the corresponding coordinate value information can be conveniently acquired after the tags are identified. And the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label.
In the embodiment of the invention, the label material needs to meet the characteristics of high temperature resistance, corrosion resistance, moisture resistance, weather resistance and the like, and is suitable for the special environment of the bottom of a bridge.
For the installation method of the label, the label can be placed at a position convenient for installation of the bridge, and the label can be placed at intervals of s.
In summary, the coordinate value combinations of the labels can be known. Then, after the tag is detected, a first conversion relation matrix between the two-dimensional code tag and the camera needs to be determined according to the coordinate value combination. The method comprises the following specific steps:
combining the coordinate values ((x)wi,ywi,zwi) I ═ 1,2,3,4) into the preset relation, we can get:
Figure BDA0003152493450000091
because the shape and the size of the label are known, namely the positions of all corner points of the label are known, four constraint conditions, namely eight equations can be obtained, and the rotation matrix R is solved by combining the eight equations3×3And translation matrix T3×1(ii) a Combined rotation matrix R3×3And translation matrix T3×1Obtaining a first transformation relation matrix
Figure BDA0003152493450000092
Wherein, T1Is a first transformation relation matrix.
According to the installation position of the camera on the unmanned aerial vehicle holder, the relative relation between the position of the camera and the position of the unmanned aerial vehicle can be obtained and recorded as T2. Therefore, the tag position versus drone position coordinate transformation relationship may be represented in a matrix as: t is1*T2. Since all the position information of the tags is known, the position can be resolved according to the position relation.
For a large-span bridge, the continuous observation of the label is realized with certain difficulty, so that the embodimentThe assumed label observation is discontinuous, the label of the bridge detection unmanned aerial vehicle at the moment k can be completely recorded by the image sensor, four corner points can be completely recorded, the two-dimensional code on the label can be identified, the unmanned aerial vehicle acquires the coordinate information of the four corner points of the label stored in the memory module in advance through the two-dimensional code identification module, the position information is resolved according to the method, and the position information resolved according to the label is recorded as pkVIO-based positioning calculation result is known and is marked as p'kBy pkSubstitute of p'kTherefore, the purpose of real-time correction of the VIO navigation position error by the label can be realized.
According to the method, the label-assisted error correction method is provided, namely, the label is placed below the bridge in advance, the unmanned aerial vehicle can pass the accurate position information of the label at the moment when the label can be observed, the position information is accurate and does not contain the accumulated error, so that the position error in positioning can be corrected, the positioning accuracy is improved, and long-time positioning is facilitated.
The method is low in cost, suitable for being applied to bridge detection scenes, and effective for correcting drift errors of the VIO method. In addition, the drift error of the VIO method can be corrected by the aid of the labels, so that the calculation amount of the original VIO method rear-end optimization is reduced, the operation cost is reduced, the hardware space is saved, and the load of the bridge detection unmanned aerial vehicle can be reduced.

Claims (7)

1. A tag-assisted-based bridge detection unmanned aerial vehicle visual navigation method is characterized by comprising the following steps:
arranging a plurality of positioning two-dimensional code labels on the bridge to be detected at intervals along the flight path of the unmanned aerial vehicle;
continuously observing the surface of a bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label;
determining a first conversion relation matrix between the positioning two-dimensional code label and the camera according to the coordinate value combination;
combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; wherein the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera;
and replacing the second position information of the unmanned aerial vehicle obtained by the VIO with the first position information, and continuing to perform unmanned aerial vehicle navigation.
2. The visual navigation method for the bridge inspection unmanned aerial vehicle based on the tag assistance of claim 1, wherein before the unmanned aerial vehicle takes off, coordinate value combinations corresponding to a plurality of positioning two-dimensional code tags are stored in a storage device of the unmanned aerial vehicle;
and the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label.
3. The tag-assisted-based visual navigation method for bridge inspection unmanned aerial vehicle of claim 1, wherein determining the first transformation relationship matrix between the positioning two-dimensional code tag and the camera according to the coordinate value combination comprises:
bringing the coordinate value combination into a preset relational expression to obtain eight equations;
the relation is as follows:
Figure FDA0003152493440000021
wherein f is the focal length of the camera; f. ofx=1/d,dxIs the real physical scale of the unit pixel on the u axis in the image coordinate system; f. ofy=1/dy,dyIs the real physical scale of the unit pixel on the v-axis in the image coordinate system; r3×3Is a rotation matrix of the world coordinate system to the camera coordinate system,
Figure FDA0003152493440000022
(u0,v0) Coordinates of a central point of the image in an image coordinate system; coordinates of a point in pixel coordinates are represented by (u, v); with (x)w,yw,zw) Coordinates representing points in a world coordinate system; by T3×1Is a translation matrix from the world coordinate system to the camera coordinate system,
Figure FDA0003152493440000023
solving the rotation matrix R by combining the eight equations3×3And translation matrix T3×1
Incorporating said rotation matrix R3×3And translation matrix T3×1Obtaining a first transformation relation matrix
Figure FDA0003152493440000024
Wherein, T1Is the first transformation relation matrix.
4. The label-assisted-based visual navigation method for bridge inspection unmanned aerial vehicle of claim 3, wherein the relation is obtained by:
determining the conversion relation from image coordinate system to pixel coordinate system
Figure FDA0003152493440000025
And a transformation matrix
Figure FDA0003152493440000026
Wherein, (x, y) is the coordinate of the midpoint in the image coordinate system;
determining a transformation matrix from a camera coordinate system to an image coordinate system
Figure FDA0003152493440000031
Wherein (x)c,yc,zc) As coordinates of a point in the camera coordinate system, ZcIs a scale transformation factor;
determining a transformation matrix from a world coordinate system to a camera coordinate system
Figure FDA0003152493440000032
Determining a transformation matrix from a world coordinate system to a pixel coordinate system
Figure FDA0003152493440000033
And expanding and simplifying a conversion matrix from the world coordinate system to the pixel coordinate system to obtain the relational expression.
5. The visual navigation method for the bridge inspection unmanned aerial vehicle based on the label assistance as claimed in claim 4, wherein the expanding and simplifying the transformation matrix from the world coordinate system to the pixel coordinate system comprises:
expanding the conversion matrix from the world coordinate system to the pixel coordinate system to obtain:
Figure FDA0003152493440000034
simplifying the expanded transformation matrix from the world coordinate system to the pixel coordinate system to obtain:
Figure FDA0003152493440000035
6. the visual navigation method for the bridge detection unmanned aerial vehicle based on the label assistance as claimed in claims 2-5, wherein the distance between two adjacent positioning two-dimensional code labels is 10-20 m.
7. The label-assisted visual navigation method for a bridge detection unmanned aerial vehicle according to claim 6, wherein the area of each rectangular positioning two-dimensional code label is 0.1-0.3 m².
CN202110767675.9A 2021-07-07 2021-07-07 Label-assisted bridge detection unmanned aerial vehicle visual navigation method Active CN113403942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767675.9A CN113403942B (en) 2021-07-07 2021-07-07 Label-assisted bridge detection unmanned aerial vehicle visual navigation method


Publications (2)

Publication Number Publication Date
CN113403942A true CN113403942A (en) 2021-09-17
CN113403942B CN113403942B (en) 2022-11-15

Family

ID=77685421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110767675.9A Active CN113403942B (en) 2021-07-07 2021-07-07 Label-assisted bridge detection unmanned aerial vehicle visual navigation method

Country Status (1)

Country Link
CN (1) CN113403942B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000337853A (en) * 1999-05-27 2000-12-08 Denso Corp Detecting device for record position of information code, and optical information reading device
KR20160022065A (en) * 2014-08-19 2016-02-29 한국과학기술원 System for Inspecting Inside of Bridge
CN106556341A (en) * 2016-10-08 2017-04-05 浙江国自机器人技术有限公司 A kind of shelf pose deviation detecting method and system of feature based information graphic
CN106570820A (en) * 2016-10-18 2017-04-19 浙江工业大学 Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN106645205A (en) * 2017-02-24 2017-05-10 武汉大学 Unmanned aerial vehicle bridge bottom surface crack detection method and system
CN106969766A (en) * 2017-03-21 2017-07-21 北京品创智能科技有限公司 A kind of indoor autonomous navigation method based on monocular vision and Quick Response Code road sign
CN109060281A (en) * 2018-09-18 2018-12-21 山东理工大学 Integrated Detection System for Bridge based on unmanned plane
CN110533718A (en) * 2019-08-06 2019-12-03 杭州电子科技大学 A kind of navigation locating method of the auxiliary INS of monocular vision artificial landmark
CN110705433A (en) * 2019-09-26 2020-01-17 杭州鲁尔物联科技有限公司 Bridge deformation monitoring method, device and equipment based on visual perception
CN112581795A (en) * 2020-12-16 2021-03-30 东南大学 Video-based real-time early warning method and system for ship bridge and ship-to-ship collision


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237262A (en) * 2021-12-24 2022-03-25 陕西欧卡电子智能科技有限公司 Automatic mooring method and system for unmanned ship on water
CN114237262B (en) * 2021-12-24 2024-01-19 陕西欧卡电子智能科技有限公司 Automatic berthing method and system for unmanned ship on water surface

Also Published As

Publication number Publication date
CN113403942B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN109341706B (en) Method for manufacturing multi-feature fusion map for unmanned vehicle
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
Balamurugan et al. Survey on UAV navigation in GPS denied environments
CN103822635B (en) The unmanned plane during flying spatial location real-time computing technique of view-based access control model information
CN106017463A (en) Aircraft positioning method based on positioning and sensing device
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN109460046B (en) Unmanned aerial vehicle natural landmark identification and autonomous landing method
CN111024072B (en) Satellite map aided navigation positioning method based on deep learning
CN113985429A (en) Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar
KR102239562B1 (en) Fusion system between airborne and terrestrial observation data
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
JP2023021098A (en) Map construction method, apparatus, and storage medium
CN113403942B (en) Label-assisted bridge detection unmanned aerial vehicle visual navigation method
Vezinet et al. State of the art of image-aided navigation techniques for aircraft approach and landing
CN113673386A (en) Method for marking traffic signal lamp in prior-to-check map
CN117470259A (en) Primary and secondary type space-ground cooperative multi-sensor fusion three-dimensional map building system
CN113155126A (en) Multi-machine cooperative target high-precision positioning system and method based on visual navigation
CN114923477A (en) Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
CN112489118B (en) Method for quickly calibrating external parameters of airborne sensor group of unmanned aerial vehicle
Ready et al. Inertially aided visual odometry for miniature air vehicles in GPS-denied environments
Ready et al. Improving accuracy of MAV pose estimation using visual odometry
Ishii et al. Autonomous UAV flight using the Total Station Navigation System in Non-GNSS Environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant