CN117191011A - Target scale recovery method based on inertial/visual information fusion - Google Patents
Target scale recovery method based on inertial/visual information fusion
- Publication number
- CN117191011A (application number CN202311034732.8A)
- Authority
- CN
- China
- Prior art keywords
- runway
- end point
- coordinates
- inertial
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application provides a target scale recovery method based on inertial/visual information fusion, comprising the following steps: filtering and fusing the inertial and visual pose measurement results to obtain the rotation matrix C_1^2 and the translation matrix T_1^2 of the second camera position relative to the first camera position; according to C_1^2 and T_1^2, and the homogeneous pixel coordinates of the left end point and of the right end point of the runway starting line in the two frames of runway images, respectively obtaining the coordinates of the left and right end points of the runway starting line in the camera coordinate system at the first camera position; and constructing a runway width estimation model from these coordinates. The application solves the runway width measurement problem and realizes scale recovery of the runway.
Description
Technical Field
The application belongs to the technical field of unmanned aerial vehicle inertial/visual landing navigation, and relates to a target scale recovery method based on inertial/visual information fusion, in particular to a method for acquiring the runway width (i.e., determining how many meters wide the runway is) based on inertial/visual information fusion.
Background
A key problem in unmanned aerial vehicle landing navigation is how to accurately and reliably acquire the relative position and attitude information between the vehicle and the target runway in real time. At present, unmanned aerial vehicles mainly rely on differential satellite navigation or radar guidance to measure the relative position between the vehicle and the target landing site and thus achieve autonomous landing. These approaches are weak against radio interference, so it is difficult to obtain the relative pose between the vehicle and the target landing site stably and accurately when radio signals are jammed. Inertial/visual navigation technology measures the relative position and attitude between the carrier and the target with a visual sensor and fuses them with inertial information, ensuring the accuracy, continuity and reliability of the navigation information, raising its update rate, and making high-speed landing of unmanned aerial vehicles possible under radio interference.
In current visual relative pose measurement methods, landing guidance rests on the runway width being known in advance; this is the key to recovering the relative position, and it requires the runway to be surveyed beforehand, i.e., the vehicle can only land on a cooperative runway. It is therefore necessary to solve the runway width measurement problem and realize runway "scale recovery" (determining how many meters wide the runway actually is), so that inertial/visual landing navigation can help an unmanned aerial vehicle land on an unfamiliar runway or make an emergency landing on a road.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the prior art or related art.
Therefore, the application provides a target scale recovery method based on inertial/visual information fusion.
The technical scheme of the application is as follows:
According to one aspect, there is provided a target scale recovery method based on inertial/visual information fusion, the method comprising:
acquiring two successive frames of runway images based on a visual sensor and performing visual relative pose calculation, wherein the two frames correspond to a first camera position and a second camera position respectively;
acquiring, with an inertial sensor rigidly attached to the visual sensor, the carrier position and attitude at the capture times of the two frames;
filtering and fusing the inertial and visual pose measurement results to obtain a rotation matrix C_1^2 and a translation matrix T_1^2 of the second camera position relative to the first camera position;
according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively acquiring the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T of the left and right end points of the runway starting line in the camera coordinate system at the first camera position;
and constructing a runway width estimation model from the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T.
Further, acquiring the coordinates of the left and right end points of the runway starting line in the camera coordinate system at the first camera position specifically comprises:
according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively obtaining the depths Z_1l and Z_1r;
obtaining [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T from the acquired Z_1l and Z_1r and the homogeneous pixel coordinates of the left end point and of the right end point of the runway starting line in the previous frame image.
Further, Z_1l and Z_1r are respectively obtained from the least-squares solution of

Z_1 · (x_2)^∧ C_1^2 x_1 = -(x_2)^∧ T_1^2,

wherein (·)^∧ denotes the skew-symmetric (cross-product) matrix of a vector;

when solving Z_1l: x_1 = K^(-1) [u_1l v_1l 1]^T and x_2 = K^(-1) [u_2l v_2l 1]^T;

when solving Z_1r: x_1 = K^(-1) [u_1r v_1r 1]^T and x_2 = K^(-1) [u_2r v_2r 1]^T;

[u_1l v_1l 1]^T and [u_2l v_2l 1]^T are the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of images; [u_1r v_1r 1]^T and [u_2r v_2r 1]^T are the homogeneous pixel coordinates of the right end point of the runway starting line in the two frames of images; K is the camera intrinsic matrix.
Further, the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T are obtained by

[X_1l Y_1l Z_1l]^T = Z_1l · K^(-1) [u_1l v_1l 1]^T, [X_1r Y_1r Z_1r]^T = Z_1r · K^(-1) [u_1r v_1r 1]^T,

wherein K is the camera intrinsic matrix; [u_1l v_1l 1]^T is the homogeneous pixel coordinate of the left end point of the runway starting line in the previous frame image; [u_1r v_1r 1]^T is the homogeneous pixel coordinate of the right end point of the runway starting line in the previous frame image.
Further, the runway width estimation model is constructed from the camera coordinates of the left and right end points of the runway starting line at the first camera position by

W = || [X_1l Y_1l Z_1l]^T - [X_1r Y_1r Z_1r]^T ||,

wherein W is the runway width, [X_1l Y_1l Z_1l]^T is the coordinates of the left end point of the runway starting line in the camera coordinate system at the first camera position, and [X_1r Y_1r Z_1r]^T is the coordinates of the right end point of the runway starting line in the camera coordinate system at the first camera position.
Further, when the aircraft yaw angle is equal to 0, W is obtained by:
W = X_1r - X_1l.
According to another aspect, there is provided a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
According to the technical scheme, the unfamiliar runway width estimation model is built based on the inertia and visual information, dependence of landing tasks on priori information such as runway width and the like can be eliminated, pose measurement of the unmanned aerial vehicle relative to the unfamiliar runway is achieved, the unmanned aerial vehicle is endowed with the capability of autonomously landing on the unfamiliar runway, the flexibility of the unmanned aerial vehicle in executing tasks is greatly improved, the runway width measurement problem is solved, and the runway scale recovery is achieved. The technical scheme can be further popularized and applied to various visual environment perceived scenes, such as unfamiliar obstacle recognition and positioning and the like.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a runway width estimation flow diagram;
FIG. 2 is a schematic diagram of a runway width solution;
FIG. 3 is a simulation result of an embodiment of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
As shown in fig. 1, in one embodiment of the present application, there is provided a target scale recovery method based on inertial/visual information fusion, the method comprising:
Step one, acquiring two successive frames of runway images based on the visual sensor and performing visual relative pose calculation, wherein the two frames correspond to a first camera position and a second camera position respectively;
Step two, acquiring, with an inertial sensor rigidly attached to the visual sensor, the carrier position and attitude at the capture times of the two frames;
Step three, filtering and fusing the inertial and visual pose measurement results to obtain the rotation matrix C_1^2 and the translation matrix T_1^2 of the second camera position relative to the first camera position;
Step four, according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively acquiring the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T of the left and right end points of the runway starting line in the camera coordinate system at the first camera position;
Step five, constructing the runway width estimation model from the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T.
In the embodiment of the application, step one acquires the two successive frames of runway images with the visual sensor and performs the visual relative pose calculation. For example, during the landing of the unmanned aerial vehicle, two frames of runway images are captured in succession; runway features are detected in both frames; matching is then performed on the extracted runway features, taking the stereo matching constraints (uniqueness constraint, similarity constraint, etc.) into account; and the relative pose is solved on that basis. The specific solution process is well known in the art and is not repeated here.
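The patent leaves the feature detector and matcher unspecified. As an illustrative sketch only (ORB and brute-force matching are stand-ins, not the patent's method), the uniqueness constraint can be approximated by cross-check matching and the similarity constraint by keeping only the best-scoring matches:

```python
# Illustrative stand-in for the unspecified runway feature detection and
# matching step: ORB features with brute-force Hamming matching. crossCheck
# enforces one-to-one matches (a uniqueness constraint); keeping only the
# lowest-distance matches acts as a crude similarity screen.
import cv2

def match_runway_features(img1, img2, max_matches=100):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]
```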
In addition, by filtering and fusing the relative pose between image frames calculated from vision with the relative pose calculated from inertia at the corresponding instants, the inertial navigation position error is compensated and a high-precision carrier trajectory is obtained, which yields a high-precision baseline (the rotation and translation matrices). The specific calculation is a conventional means in the field and is not described in detail.
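The fusion filter itself is conventional and not detailed in the patent. The sketch below shows only the final step: composing the relative pose C_1^2, T_1^2 from the fused absolute camera poses at the two capture instants, under an assumed world-to-camera pose convention and ignoring the IMU-to-camera lever arm:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Compose C_1^2 and T_1^2 from two absolute camera poses.

    Assumed convention: P_ck = R_k @ (P_w - t_k), i.e. R_k maps world
    coordinates into camera-k coordinates and t_k is the camera position
    in the world frame. Then P_c2 = C12 @ P_c1 + T12.
    """
    C12 = R2 @ R1.T
    T12 = R2 @ (t1 - t2)
    return C12, T12
```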
The key point of the embodiment of the application is therefore to construct the runway width model from the fused inertial/visual information combined with the runway feature points (the end points of the runway starting line). The unfamiliar-runway width estimation model obtained by the method of the embodiment frees the landing task from its dependence on prior information such as runway width, realizing pose measurement of the unmanned aerial vehicle relative to an unfamiliar runway.
By adopting this configuration, an unfamiliar-runway width estimation model is established from inertial and visual information; the landing task's dependence on prior information such as runway width is removed; pose measurement of the unmanned aerial vehicle relative to an unfamiliar runway is realized; the vehicle gains the ability to land autonomously on an unfamiliar runway, greatly improving its flexibility in executing tasks; the runway width measurement problem is solved; and runway scale recovery is realized. The scheme can further be generalized to other visual environment perception scenarios, such as recognition and localization of unfamiliar obstacles.
In the foregoing embodiment, in order to construct the unfamiliar-runway width model, acquiring the coordinates of the left and right end points of the runway starting line in the camera coordinate system at the first camera position specifically comprises:
according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively obtaining the depths Z_1l and Z_1r;
obtaining [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T from the acquired Z_1l and Z_1r and the homogeneous pixel coordinates of the left end point and of the right end point of the runway starting line in the previous frame image.
In the embodiment of the application, Z_1l and Z_1r are respectively obtained from the least-squares solution of

Z_1 · (x_2)^∧ C_1^2 x_1 = -(x_2)^∧ T_1^2,

wherein (·)^∧ denotes the skew-symmetric (cross-product) matrix of a vector;

when solving Z_1l: x_1 = K^(-1) [u_1l v_1l 1]^T and x_2 = K^(-1) [u_2l v_2l 1]^T;

when solving Z_1r: x_1 = K^(-1) [u_1r v_1r 1]^T and x_2 = K^(-1) [u_2r v_2r 1]^T;

[u_1l v_1l 1]^T and [u_2l v_2l 1]^T are the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of images; [u_1r v_1r 1]^T and [u_2r v_2r 1]^T are the homogeneous pixel coordinates of the right end point of the runway starting line in the two frames of images; K is the camera intrinsic matrix.
In the embodiment of the application, the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T are obtained by

[X_1l Y_1l Z_1l]^T = Z_1l · K^(-1) [u_1l v_1l 1]^T, [X_1r Y_1r Z_1r]^T = Z_1r · K^(-1) [u_1r v_1r 1]^T,

wherein K is the camera intrinsic matrix; [u_1l v_1l 1]^T is the homogeneous pixel coordinate of the left end point of the runway starting line in the previous frame image; [u_1r v_1r 1]^T is the homogeneous pixel coordinate of the right end point of the runway starting line in the previous frame image.
In the above embodiment, the runway width estimation model is specifically constructed from the camera coordinates of the left and right end points of the runway starting line at the first camera position by

W = || [X_1l Y_1l Z_1l]^T - [X_1r Y_1r Z_1r]^T ||,

wherein W is the runway width, [X_1l Y_1l Z_1l]^T is the coordinates of the left end point of the runway starting line in the camera coordinate system at the first camera position, and [X_1r Y_1r Z_1r]^T is the coordinates of the right end point of the runway starting line in the camera coordinate system at the first camera position.
Preferably, when the aircraft yaw angle is equal to 0, W is obtained by:
W = X_1r - X_1l.
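Continuing the sketch, the width model and its zero-yaw special case (runway_width is an illustrative name, not the patent's):

```python
import numpy as np

def runway_width(P1l, P1r, yaw=None):
    """Runway width from the start-line end points in the first camera frame."""
    if yaw is not None and yaw == 0.0:
        return float(P1r[0] - P1l[0])      # W = X_1r - X_1l when yaw is zero
    return float(np.linalg.norm(np.asarray(P1r) - np.asarray(P1l)))
```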
According to another embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
For a better understanding of the present application, the construction of the runway width model in the embodiment of the application is described below.
1. The embodiment of the application establishes the following coordinate systems: an airport coordinate system, a camera coordinate system and an image coordinate system.
Airport coordinate system (a-frame): the intersection of the runway landing-end starting line and the runway centerline is taken as the origin o_a; the x_a axis points forward along the runway centerline; the y_a axis is perpendicular to the runway plane, positive upward; the z_a axis coincides with the runway starting line, positive to the right; o_a x_a y_a z_a forms a right-handed coordinate system. The coordinates of a point in the airport coordinate system are denoted (x_a, y_a, z_a).
Camera coordinate system (c-frame): the image-side principal point of the optical system is taken as the origin o_c; looking into the optical system, the x_c axis is parallel to the longitudinal axis (short side) of the imaging plane coordinate system, positive to the left; the y_c axis is parallel to the transverse axis (long side) of the imaging plane coordinate system, positive downward; the z_c axis points toward the observer and forms a right-handed coordinate system with the x_c and y_c axes.
Image coordinate system (i-frame): the original image coordinate system rotated 90 degrees clockwise about the image center. It is a two-dimensional plane coordinate system in the plane of the camera's photosensitive surface, with the upper-left corner of the image as the origin, the c axis along the horizontal direction of the image (positive to the right) and the r axis along the vertical direction (positive downward); its units are pixels.
The embodiment of the application obtains the depth of a number of pixel points on the runway edges using the triangulation principle, and on that basis estimates the runway width parameter.
Let the coordinates of a point P in space in the two camera coordinate systems be [X_1, Y_1, Z_1] and [X_2, Y_2, Z_2], and let its homogeneous pixel coordinates in the two frames of images be p_uv1 = [u_1, v_1, 1] and p_uv2 = [u_2, v_2, 1]. With camera intrinsic matrix K, the rotation and translation matrices of the second camera position relative to the first camera position are C_1^2 and T_1^2.

According to the pinhole imaging principle,

Z_1 · p_uv1^T = K [X_1 Y_1 Z_1]^T, Z_2 · p_uv2^T = K (C_1^2 [X_1 Y_1 Z_1]^T + T_1^2)    (1)

The coordinates of P in the two normalized camera coordinate systems can be obtained by

x_1 = K^(-1) p_uv1^T, x_2 = K^(-1) p_uv2^T    (2)

Substituting formula (2) into formula (1) gives

Z_2 x_2 = Z_1 C_1^2 x_1 + T_1^2    (3)

Left-multiplying both sides of (3) by the skew-symmetric matrix (x_2)^∧ eliminates the left-hand side:

0 = Z_1 (x_2)^∧ C_1^2 x_1 + (x_2)^∧ T_1^2    (4)

Only Z_1 on the right side of (4) is unknown, so it can be found (in the least-squares sense). Once Z_1 is determined, Z_2 can be further determined from (3). The coordinates of the point P in the first camera coordinate system are then

[X_1 Y_1 Z_1]^T = Z_1 x_1    (5)
Therefore, on the basis of the above derivation, the positions of the left and right end points of the runway starting line in the first camera coordinate system can be obtained by the above method.
The designed scale recovery method was analyzed in simulation. The camera intrinsics and the camera position and attitude in the world coordinate system at the two capture instants were specified: the runway is 50 m wide and 2400 m long; the unmanned aerial vehicle is 400 m from the runway starting line at a flight height of 80 m with zero attitude angles; and the camera position for the second frame is shifted 20 m to the right relative to the first. The simulation result is shown in fig. 3: with no error injected, the runway width and length are computed accurately.
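As a self-contained check of the above geometry rather than the patent's actual simulation code, the setup can be reproduced synthetically. The intrinsic matrix below is an assumption (the patent does not publish its simulation intrinsics), and triangulate_point and runway_width are the sketches from earlier; with exact pixel coordinates and an exact relative pose, the 50 m width is recovered without error:

```python
import numpy as np

# Assumed intrinsics (illustrative values, not the patent's).
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])

# Camera 1: 400 m before the start line, 80 m up, zero attitude, looking
# along the runway (z forward, x right, y down). Start-line end points of
# a 50 m wide runway, expressed in the first camera frame:
P1l = np.array([-25.0, 80.0, 400.0])
P1r = np.array([25.0, 80.0, 400.0])

# Camera 2 is shifted 20 m to the right: pure translation, no rotation.
C12, T12 = np.eye(3), np.array([-20.0, 0.0, 0.0])

def project(P):                      # exact pinhole projection
    p = K @ P
    return p / p[2]

_, P1l_hat = triangulate_point(K, C12, T12, project(P1l), project(C12 @ P1l + T12))
_, P1r_hat = triangulate_point(K, C12, T12, project(P1r), project(C12 @ P1r + T12))
print(runway_width(P1l_hat, P1r_hat))   # -> 50.0 in the error-free case
```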
Features that are described and/or illustrated above with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
The method of the application can be implemented by hardware, or by hardware in combination with software. The present application relates to a computer-readable program which, when executed by a logic device, enables the logic device to implement the apparatus or constituent components described above, or to carry out the various methods or steps described above. The present application also relates to a storage medium, such as a hard disk, magnetic disk, optical disk, DVD or flash memory, for storing the above program.
The many features and advantages of the embodiments are apparent from the detailed specification, and thus, it is intended by the appended claims to cover all such features and advantages of the embodiments which fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the embodiments of the application to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope thereof.
Matters not described in detail in this application are well known to those skilled in the art.
Claims (7)
1. A target scale recovery method based on inertial/visual information fusion, the method comprising:
acquiring two successive frames of runway images based on a visual sensor and performing visual relative pose calculation, wherein the two frames correspond to a first camera position and a second camera position respectively;
acquiring, with an inertial sensor rigidly attached to the visual sensor, the carrier position and attitude at the capture times of the two frames;
filtering and fusing the inertial and visual pose measurement results to obtain a rotation matrix C_1^2 and a translation matrix T_1^2 of the second camera position relative to the first camera position;
according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively acquiring the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T of the left and right end points of the runway starting line in the camera coordinate system at the first camera position;
constructing a runway width estimation model from the coordinates [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T.
2. The target scale recovery method based on inertial/visual information fusion according to claim 1, wherein acquiring the coordinates of the left and right end points of the runway starting line in the camera coordinate system at the first camera position specifically comprises:
according to the rotation matrix C_1^2 and the translation matrix T_1^2, and the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of runway images and of the right end point in the two frames, respectively obtaining Z_1l and Z_1r;
obtaining [X_1l Y_1l Z_1l]^T and [X_1r Y_1r Z_1r]^T from the acquired Z_1l and Z_1r and the homogeneous pixel coordinates of the left end point and of the right end point of the runway starting line in the previous frame image.
3. The target scale recovery method based on inertial/visual information fusion according to claim 2, wherein Z_1l and Z_1r are respectively obtained from the least-squares solution of

Z_1 · (x_2)^∧ C_1^2 x_1 = -(x_2)^∧ T_1^2,

wherein (·)^∧ denotes the skew-symmetric matrix of a vector; when solving Z_1l, x_1 = K^(-1) [u_1l v_1l 1]^T and x_2 = K^(-1) [u_2l v_2l 1]^T; when solving Z_1r, x_1 = K^(-1) [u_1r v_1r 1]^T and x_2 = K^(-1) [u_2r v_2r 1]^T; [u_1l v_1l 1]^T and [u_2l v_2l 1]^T are the homogeneous pixel coordinates of the left end point of the runway starting line in the two frames of images; [u_1r v_1r 1]^T and [u_2r v_2r 1]^T are the homogeneous pixel coordinates of the right end point of the runway starting line in the two frames of images; K is the camera intrinsic matrix.
4. The target scale recovery method based on inertial/visual information fusion according to claim 3, wherein the coordinates are obtained by

[X_1l Y_1l Z_1l]^T = Z_1l · K^(-1) [u_1l v_1l 1]^T, [X_1r Y_1r Z_1r]^T = Z_1r · K^(-1) [u_1r v_1r 1]^T,

wherein K is the camera intrinsic matrix; [u_1l v_1l 1]^T is the homogeneous pixel coordinate of the left end point of the runway starting line in the previous frame image; [u_1r v_1r 1]^T is the homogeneous pixel coordinate of the right end point of the runway starting line in the previous frame image.
5. The target scale recovery method based on inertial/visual information fusion according to any one of claims 1-4, wherein the runway width estimation model is constructed from the camera coordinates of the left and right end points of the runway starting line at the first camera position by

W = || [X_1l Y_1l Z_1l]^T - [X_1r Y_1r Z_1r]^T ||,

wherein W is the runway width, [X_1l Y_1l Z_1l]^T is the coordinates of the left end point of the runway starting line in the camera coordinate system at the first camera position, and [X_1r Y_1r Z_1r]^T is the coordinates of the right end point of the runway starting line in the camera coordinate system at the first camera position.
6. The target scale recovery method based on inertial/visual information fusion according to claim 5, wherein when the aircraft yaw angle is equal to 0, W is obtained by:
W = X_1r - X_1l.
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311034732.8A CN117191011A (en) | 2023-08-17 | 2023-08-17 | Target scale recovery method based on inertial/visual information fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311034732.8A CN117191011A (en) | 2023-08-17 | 2023-08-17 | Target scale recovery method based on inertial/visual information fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117191011A true CN117191011A (en) | 2023-12-08 |
Family
ID=89000746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311034732.8A Pending CN117191011A (en) | 2023-08-17 | 2023-08-17 | Target scale recovery method based on inertial/visual information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117191011A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06144394A (en) * | 1992-11-04 | 1994-05-24 | Mitsubishi Heavy Ind Ltd | Displaced amount calculating device for airframe |
US20090214080A1 (en) * | 2008-02-27 | 2009-08-27 | Honeywell International Inc. | Methods and apparatus for runway segmentation using sensor analysis |
CN107255476A (en) * | 2017-07-06 | 2017-10-17 | 青岛海通胜行智能科技有限公司 | A kind of indoor orientation method and device based on inertial data and visual signature |
CN108886572A (en) * | 2016-11-29 | 2018-11-23 | 深圳市大疆创新科技有限公司 | Adjust the method and system of image focal point |
CN109448045A (en) * | 2018-10-23 | 2019-03-08 | 南京华捷艾米软件科技有限公司 | Plane polygon object measuring method and machine readable storage medium based on SLAM |
CN114331986A (en) * | 2021-12-21 | 2022-04-12 | 中国长江三峡集团有限公司 | Dam crack identification and measurement method based on unmanned aerial vehicle vision |
CN115578417A (en) * | 2022-10-16 | 2023-01-06 | 北京眸星科技有限公司 | Monocular vision inertial odometer method based on feature point depth |
CN116051537A (en) * | 2023-02-09 | 2023-05-02 | 中国农业科学院农业基因组研究所 | Crop plant height measurement method based on monocular depth estimation |
CN116188550A (en) * | 2021-11-26 | 2023-05-30 | 北京航空航天大学 | Self-supervision depth vision odometer based on geometric constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||