CN117058209B - Method for calculating depth information of visual image of aerocar based on three-dimensional map

Method for calculating depth information of visual image of aerocar based on three-dimensional map

Info

Publication number
CN117058209B
CN117058209B CN202311308155.7A CN202311308155A CN117058209B CN 117058209 B CN117058209 B CN 117058209B CN 202311308155 A CN202311308155 A CN 202311308155A CN 117058209 B CN117058209 B CN 117058209B
Authority
CN
China
Prior art keywords
camera
imaging
depth information
aerocar
dimensional map
Prior art date
Legal status
Active
Application number
CN202311308155.7A
Other languages
Chinese (zh)
Other versions
CN117058209A (en)
Inventor
颜军
董文岳
杨革
孙勋
梁丽娜
胡洁
Current Assignee
Guangdong Haiou Flying Automobile Group Co ltd
Shandong Orion Electronics Co ltd
Original Assignee
Guangdong Haiou Flying Automobile Group Co ltd
Shandong Orion Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Haiou Flying Automobile Group Co ltd, Shandong Orion Electronics Co ltd
Priority to CN202311308155.7A
Publication of CN117058209A
Application granted
Publication of CN117058209B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Instructional Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional map-based method for calculating the depth information of a flying car's visual images, belonging to the technical field of visual image data processing for flying cars and unmanned aerial vehicles. The method comprises the following steps: step one, determining the position and attitude information of the flying car; step two, determining the position and attitude information of the camera; step three, determining the camera's visual image imaging plane; step four, importing a digital three-dimensional map; step five, calculating the depth information of each pixel point; and step six, generating the visual image depth information. The method has the following advantage: visual image depth information can be obtained without adding a ranging lidar, which reduces the hardware complexity of the flying car, increases its payload capacity, lowers its manufacturing cost, and facilitates its application and popularization.

Description

Method for calculating depth information of visual image of aerocar based on three-dimensional map
Technical Field
The invention belongs to the technical field of visual image data processing for flying cars and unmanned aerial vehicles, and in particular relates to a method for calculating the depth information of a flying car's visual images based on a three-dimensional map, which generates the depth information of the flying car's visual images from digital three-dimensional map data.
Background
An intelligent flight driving system mainly comprises a perception module, a decision module and a control module. The perception module is the foundation of intelligent flight driving for a flying car: it senses the environment around the flying car through various sensors, acquires the visual scene and obstacle information around the vehicle, and provides them to the decision module for path planning. The visual camera is a common environment-sensing sensor for flying cars; scene shooting and scene recognition based on a visual camera are mature, stable and reliable technologies. However, the scene captured by a camera is a two-dimensional image that lacks image depth information (i.e., distance information) and cannot be directly applied to three-dimensional reconstruction of the flying environment. To add depth information, a ranging sensor is usually required, for example a lidar that generates a point cloud. Adding lidar equipment makes the flying car more complicated and bulky, increases the weight of the supporting structure, and reduces the payload capacity; lidar is also expensive, which raises the manufacturing cost of the flying car and hinders its popularization and application.
Disclosure of Invention
Aiming at these defects, the invention provides a method for calculating the depth information of a flying car's visual images based on a three-dimensional map. The method determines the position and attitude of the camera from the position and attitude of the flying car, determines the camera's visual image imaging plane and the position corresponding to each pixel in the camera coordinate system from the camera parameters, imports a digital three-dimensional map, and determines and calculates the depth information of each pixel, thereby obtaining the depth information of the visual image. With this method, visual image depth information can be obtained without adding a ranging lidar, which reduces the hardware complexity of the flying car, increases its payload capacity, lowers its manufacturing cost, and facilitates its application and popularization.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for calculating visual image depth information of a flying car based on a three-dimensional map comprises the following steps:
step one, determining position and posture information of a flying automobile;
step two, determining position and posture information of the camera;
step three, determining a camera visual image imaging plane;
step four, introducing a digital three-dimensional map;
step five, calculating depth information of each pixel point;
and step six, generating visual image depth information.
Further, the specific process of the first step is as follows:
the position and attitude information of the flying car can be determined from the vehicle-mounted inertial navigation system and the vehicle-mounted satellite navigation system. The information is expressed in an inertial frame, which can be chosen as the East-North-Up coordinate system. The position is given in longitude-latitude-altitude form (lon0, lat0, alt0), where lon0 is the longitude of the flying car, lat0 is its latitude, and alt0 is its flight altitude. The attitude is given in Euler angle form (yaw, pitch, roll), where yaw is the yaw angle, pitch is the pitch angle, and roll is the roll angle; the Euler angle coordinate transformation uses the Z-Y-X rotation sequence.
Further, the specific process of the second step is as follows:
the camera is mounted at a given position on the flying car, with coordinates (x, y, z) in the body coordinate system. To determine the camera position, these coordinates are converted into (X, Y, Z) in the East-North-Up inertial frame, and the longitude-latitude-altitude position (lon, lat, alt) of the camera is then obtained by a distance-to-longitude/latitude conversion.

The camera position in the East-North-Up inertial frame is expressed as

$$[X, Y, Z]^{T} = C_b^n \, [x, y, z]^{T},$$

where $C_b^n$ is the transformation matrix from the body coordinate system to the inertial coordinate system, $C_n^b = (C_b^n)^{T}$ is the transformation matrix from the inertial coordinate system to the body coordinate system, and $C_n^b$ is computed from the attitude Euler angles of the flying car as

$$C_n^b = R_x(\mathrm{roll}) \, R_y(\mathrm{pitch}) \, R_z(\mathrm{yaw}),$$

with $R_z$, $R_y$, $R_x$ the elementary frame rotations about the Z, Y and X axes of the Z-Y-X sequence.

The longitude, latitude and altitude of the camera are calculated as

$$\mathrm{lon} = \mathrm{lon}_0 + \frac{X}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat} = \mathrm{lat}_0 + \frac{Y}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt} = \mathrm{alt}_0 + Z,$$

where $R_e$ is the Earth radius, for which the mean value of 6371000 m can be taken.

According to the installation position and installation angle of the camera on the flying car, the imaging attitude of the camera is determined; that is, the coordinate transformation matrix $C_b^c$ from the flying car body coordinate system to the camera coordinate system, which is determined by the installation position and installation angle of the camera.
Further, the specific process of the third step is as follows:
according to the camera parameter information, the camera visual imaging plane is determined. The imaging plane is a two-dimensional plane composed of rows and columns of pixels. If the imaging plane is m pixels wide and n pixels high, the pixel P(i, j) in row i and column j of the imaging plane corresponds, at imaging distance $L$, to the position coordinates $(x_c, y_c, z_c)$ in the camera coordinate system, calculated as

$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L,$$

where $L$ is the imaging distance, $u_0$ is the width pixel position of the imaging plane center point, $v_0$ is the height pixel position of the imaging plane center point, $d_x$ is the number of pixels per 1 mm in the width direction of the imaging plane, $d_y$ is the number of pixels per 1 mm in the height direction, and $f$ is the focal length (in mm).
Further, the specific process of the fourth step is as follows:
according to the position information of the flying car, a digital three-dimensional map of the vicinity is imported. The digital map consists of a series of longitude-latitude sequences describing the positions and corresponding heights of nearby ground features. The nearby digital three-dimensional map is cache-loaded: as the flying car moves, the map of its vicinity is continuously cached, enabling fast loading and calculation of the nearby digital three-dimensional map.
Further, the specific process of the fifth step is as follows:
for each pixel point, a farthest imaging distance $L_{max}$ and an update step $\Delta L$ are set. For the imaging distance $L$, the corresponding camera-frame coordinates are computed and converted to coordinates in the inertial frame, and the elevation at that position is obtained from the digital three-dimensional map. If the altitude of the position is less than the corresponding elevation, or less than 0, the position is the ground-feature position of the pixel: an altitude below the corresponding elevation indicates that the imaging line of sight is blocked by an obstacle, and an altitude below 0 indicates that it is blocked by the ground. The depth information is then calculated from the ground-feature position of the pixel.
Further, in the fifth step, the depth information calculation flow for the ith row and jth column pixels P (i, j) in the imaging plane is as follows:
step 1, set the imaging distance $L = 0$ and the imaging state $S = 1$, and enter the loop;
step 2, if the imaging distance is less than the farthest imaging distance ($L < L_{max}$) and the imaging state is $S = 1$, update the imaging distance by one update step, $L \leftarrow L + \Delta L$; otherwise go to step 9;
step 3, calculate the position of pixel P(i, j) at imaging distance $L$ in the camera coordinate system:
$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L;$$
step 4, convert the position coordinates of pixel P(i, j) at imaging distance $L$ into the inertial coordinate system:
$$[X_L, Y_L, Z_L]^{T} = C_b^n \, C_c^b \, [x_c, y_c, z_c]^{T} + [X, Y, Z]^{T},$$
where $C_c^b = (C_b^c)^{T}$ and $[X, Y, Z]^{T}$ is the camera position in the inertial frame;
step 5, convert the position coordinates of pixel P(i, j) at imaging distance $L$ into longitude-latitude-altitude form:
$$\mathrm{lon}_L = \mathrm{lon}_0 + \frac{X_L}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat}_L = \mathrm{lat}_0 + \frac{Y_L}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt}_L = \mathrm{alt}_0 + Z_L;$$
step 6, if the altitude $\mathrm{alt}_L$ of pixel P(i, j) at imaging distance $L$ is less than 0, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 7;
step 7, look up the elevation h corresponding to the longitude and latitude of step 5 in the digital three-dimensional map;
step 8, if the altitude $\mathrm{alt}_L$ is less than the corresponding elevation h, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 2 and continue the loop;
step 9, calculate the depth information of pixel P(i, j):
$$\mathrm{depth}(i, j) = L,$$
i.e., the imaging distance at which the loop exited.
Further, the specific process of the sixth step is as follows:
the imaging plane has m × n pixel points; the depth information of each pixel point is computed in a loop, generating the corresponding m × n depth values, which constitute the visual image depth information.
Compared with the prior art, the invention has the following technical effects:
based on the three-dimensional map, the method uses the position and attitude information of the flying car together with the camera parameters to determine the camera pose, the visual image imaging plane, and the position corresponding to each pixel; it then imports the digital three-dimensional map and automatically calculates the depth information of each pixel point of the visual image from the corresponding elevation information, thereby obtaining the visual image depth information of the flying car. Visual image depth information is obtained without adding a ranging lidar, which reduces the hardware complexity of the flying car, increases its payload capacity, and lowers its manufacturing cost. The method can be applied in many scenarios, such as accurate perception of the flight environment, three-dimensional reconstruction, online path planning, and online emergency obstacle avoidance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
FIG. 1 is a flow chart of the operation of the computing method of the present invention;
fig. 2 is a flowchart of calculating depth information of a single pixel point according to the present invention.
Detailed Description
In one embodiment, as shown in Fig. 1 and Fig. 2, a method for calculating the depth information of a flying car's visual images based on a three-dimensional map includes the following steps:
step one, determining position and posture information of the flying automobile.
The position and attitude information of the flying car can be determined from the vehicle-mounted inertial navigation system and the vehicle-mounted satellite navigation system. The information is expressed in an inertial frame, which can be chosen as the East-North-Up coordinate system. The position is given in longitude-latitude-altitude form (lon0, lat0, alt0), where lon0 is the longitude of the flying car, lat0 is its latitude, and alt0 is its flight altitude. The attitude is given in Euler angle form (yaw, pitch, roll), where yaw is the yaw angle, pitch is the pitch angle, and roll is the roll angle; the Euler angle coordinate transformation uses the Z-Y-X rotation sequence.
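To make the convention concrete, here is a minimal sketch of the Z-Y-X Euler rotation described above, assuming angles in radians and the East-North-Up inertial frame; the function name and the use of NumPy are illustrative, not part of the patent.

```python
import numpy as np

def euler_zyx_to_C_n_b(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Attitude matrix C_n^b (inertial -> body), composed by rotating
    about Z (yaw), then Y (pitch), then X (roll)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[ cz,  sz, 0.0],
                   [-sz,  cz, 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[ cy, 0.0, -sy],
                   [0.0, 1.0, 0.0],
                   [ sy, 0.0,  cy]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  cx,  sx],
                   [0.0, -sx,  cx]])
    return Rx @ Ry @ Rz  # transpose gives C_b^n (body -> inertial)
```

The transpose of the returned matrix is the body-to-inertial matrix $C_b^n$ used in step two.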
And step two, determining the position and posture information of the camera.
The camera is mounted at a given position on the flying car, with coordinates (x, y, z) in the body coordinate system. To determine the camera position, these coordinates are converted into (X, Y, Z) in the East-North-Up inertial frame, and the longitude-latitude-altitude position (lon, lat, alt) of the camera is then obtained by a distance-to-longitude/latitude conversion.

The camera position in the East-North-Up inertial frame is expressed as

$$[X, Y, Z]^{T} = C_b^n \, [x, y, z]^{T},$$

where $C_b^n$ is the transformation matrix from the body coordinate system to the inertial coordinate system, $C_n^b = (C_b^n)^{T}$ is the transformation matrix from the inertial coordinate system to the body coordinate system, and $C_n^b$ is computed from the attitude Euler angles of the flying car as

$$C_n^b = R_x(\mathrm{roll}) \, R_y(\mathrm{pitch}) \, R_z(\mathrm{yaw}).$$

The longitude, latitude and altitude of the camera are calculated as

$$\mathrm{lon} = \mathrm{lon}_0 + \frac{X}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat} = \mathrm{lat}_0 + \frac{Y}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt} = \mathrm{alt}_0 + Z,$$

where $R_e$ is the Earth radius, for which the mean value of 6371000 m can be taken.

According to the installation position and installation angle of the camera on the flying car, the imaging attitude of the camera is determined; that is, the coordinate transformation matrix $C_b^c$ from the flying car body coordinate system to the camera coordinate system, which is determined by the installation position and installation angle of the camera.
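A sketch of the position computation above under the same assumptions; the small-offset distance-to-longitude/latitude conversion and the names EARTH_RADIUS_M, camera_position_enu and enu_to_lla are illustrative, chosen to match the Earth-radius value quoted in the text.

```python
import numpy as np

EARTH_RADIUS_M = 6371000.0  # mean Earth radius used in the text

def camera_position_enu(xyz_body, C_n_b):
    """[X, Y, Z]^T = C_b^n [x, y, z]^T, with C_b^n = (C_n^b)^T."""
    return C_n_b.T @ np.asarray(xyz_body, dtype=float)

def enu_to_lla(X, Y, Z, lon0, lat0, alt0):
    """Convert an East-North-Up offset (m) from the reference point
    (lon0, lat0 in degrees, alt0 in metres) to longitude/latitude/altitude."""
    lon = lon0 + np.degrees(X / (EARTH_RADIUS_M * np.cos(np.radians(lat0))))
    lat = lat0 + np.degrees(Y / EARTH_RADIUS_M)
    alt = alt0 + Z
    return lon, lat, alt
```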
And thirdly, determining a camera visual image imaging plane.
According to the camera parameter information, the camera visual imaging plane is determined. The imaging plane is a two-dimensional plane composed of rows and columns of pixels. If the imaging plane is m pixels wide and n pixels high, the pixel P(i, j) in row i and column j of the imaging plane corresponds, at imaging distance $L$, to the position coordinates $(x_c, y_c, z_c)$ in the camera coordinate system, calculated as

$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L,$$

where $L$ is the imaging distance, $u_0$ is the width pixel position of the imaging plane center point, $v_0$ is the height pixel position of the imaging plane center point, $d_x$ is the number of pixels per 1 mm in the width direction of the imaging plane, $d_y$ is the number of pixels per 1 mm in the height direction, and $f$ is the focal length (in mm).
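The back-projection formula of step three, written as a small helper; a sketch assuming $d_x$, $d_y$ in pixels per millimetre and the focal length in millimetres, as defined above.

```python
import numpy as np

def pixel_to_camera_coords(i, j, L, u0, v0, dx, dy, f_mm):
    """Camera-frame position (x_c, y_c, z_c) of pixel P(i, j) projected
    out to imaging distance L (pinhole similar triangles)."""
    xc = (j - u0) * L / (dx * f_mm)  # (j - u0)/dx is the sensor offset in mm
    yc = (i - v0) * L / (dy * f_mm)
    zc = L                           # depth along the optical axis
    return np.array([xc, yc, zc])
```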
And step four, introducing a digital three-dimensional map.
According to the position information of the flying car, a digital three-dimensional map of the vicinity is imported. The digital map consists of a series of longitude-latitude sequences describing the positions and corresponding heights of nearby ground features. A large-scale, high-precision digital three-dimensional map is voluminous and would slow down the computation; to speed it up, the nearby high-precision map can be cache-loaded: as the flying car moves, the map of its vicinity is continuously cached and loaded, enabling fast loading and calculation of the nearby digital three-dimensional map.
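The patent only specifies that nearby map data is continuously cached as the vehicle moves; the following tile-cache wrapper is a hypothetical sketch of that strategy. The tile size, the keying scheme and the loader signature are all assumptions, not identifiers from the patent.

```python
class ElevationCache:
    """Caches square map tiles around the flying car so that elevation
    lookups stay fast as the vehicle moves."""

    def __init__(self, load_tile, tile_deg=0.01):
        self.load_tile = load_tile  # tile_key -> callable (lon, lat) -> elevation [m]
        self.tile_deg = tile_deg    # tile edge length in degrees
        self._tiles = {}            # tiles already resident in memory

    def elevation(self, lon, lat):
        """Elevation of the ground feature at (lon, lat); the surrounding
        tile is loaded on first access, later lookups hit the cache."""
        key = (int(lon // self.tile_deg), int(lat // self.tile_deg))
        if key not in self._tiles:
            self._tiles[key] = self.load_tile(key)
        return self._tiles[key](lon, lat)
```

An instance's `elevation` method can then serve as the `(lon, lat) -> h` lookup used by the per-pixel depth loop below.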
And fifthly, calculating depth information of each pixel point.
For each pixel point, a farthest imaging distance $L_{max}$ and an update step $\Delta L$ are set. For the imaging distance $L$, the corresponding camera-frame coordinates are computed and converted to coordinates in the inertial frame, and the elevation at that position is obtained from the digital three-dimensional map. If the altitude of the position is less than the corresponding elevation, or less than 0, the position is the ground-feature position of the pixel: an altitude below the corresponding elevation indicates that the imaging line of sight is blocked by an obstacle, and an altitude below 0 indicates that it is blocked by the ground. The depth information is then calculated from the ground-feature position of the pixel.
For the ith row and jth column pixels P (i, j) in the imaging plane, the depth information calculation flow thereof is as follows:
Step 1: set the imaging distance $L = 0$ and the imaging state $S = 1$, and enter the loop.
Step 2: if the imaging distance is less than the farthest imaging distance ($L < L_{max}$) and the imaging state is $S = 1$, update the imaging distance by one update step, $L \leftarrow L + \Delta L$; otherwise go to step 9.
Step 3: calculate the position of pixel P(i, j) at imaging distance $L$ in the camera coordinate system:
$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L.$$
Step 4: convert the position coordinates of pixel P(i, j) at imaging distance $L$ into the inertial coordinate system:
$$[X_L, Y_L, Z_L]^{T} = C_b^n \, C_c^b \, [x_c, y_c, z_c]^{T} + [X, Y, Z]^{T},$$
where $C_c^b = (C_b^c)^{T}$ and $[X, Y, Z]^{T}$ is the camera position in the inertial frame.
Step 5: convert the position coordinates of pixel P(i, j) at imaging distance $L$ into longitude-latitude-altitude form:
$$\mathrm{lon}_L = \mathrm{lon}_0 + \frac{X_L}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat}_L = \mathrm{lat}_0 + \frac{Y_L}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt}_L = \mathrm{alt}_0 + Z_L.$$
Step 6: if the altitude $\mathrm{alt}_L$ of pixel P(i, j) at imaging distance $L$ is less than 0, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 7.
Step 7: look up the elevation h corresponding to the longitude and latitude of step 5 in the digital three-dimensional map.
Step 8: if the altitude $\mathrm{alt}_L$ is less than the corresponding elevation h, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 2 and continue the loop.
Step 9: calculate the depth information of pixel P(i, j):
$$\mathrm{depth}(i, j) = L,$$
i.e., the imaging distance at which the loop exited.
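Putting steps 1-9 together, a sketch of the per-pixel ray march, reusing pixel_to_camera_coords and enu_to_lla from the earlier sketches; the CameraModel container and the default values of $L_{max}$ and $\Delta L$ are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraModel:          # illustrative container, not from the patent
    C_n_b: np.ndarray       # inertial -> body attitude matrix (step one)
    C_b_c: np.ndarray       # body -> camera mounting matrix (step two)
    pos_enu: np.ndarray     # camera position [X, Y, Z] in the ENU frame
    lon0: float             # vehicle reference longitude (deg)
    lat0: float             # vehicle reference latitude (deg)
    alt0: float             # vehicle reference altitude (m)
    u0: float               # principal point, width pixel position
    v0: float               # principal point, height pixel position
    dx: float               # pixels per mm, width direction
    dy: float               # pixels per mm, height direction
    f_mm: float             # focal length (mm)

def pixel_depth(i, j, cam, elevation, L_max=2000.0, dL=1.0):
    """Steps 1-9: march the imaging distance L outward until the line of
    sight is blocked by terrain (alt < h) or the ground plane (alt < 0);
    the distance at loop exit is the depth of pixel P(i, j)."""
    L, state = 0.0, 1                                            # step 1
    while L < L_max and state == 1:                              # step 2
        L += dL
        p_cam = pixel_to_camera_coords(i, j, L, cam.u0, cam.v0,
                                       cam.dx, cam.dy, cam.f_mm)  # step 3
        # step 4: camera frame -> body frame -> ENU inertial frame
        p_enu = cam.C_n_b.T @ (cam.C_b_c.T @ p_cam) + cam.pos_enu
        lon, lat, alt = enu_to_lla(p_enu[0], p_enu[1], p_enu[2],
                                   cam.lon0, cam.lat0, cam.alt0)  # step 5
        if alt < 0 or alt < elevation(lon, lat):                  # steps 6-8
            state = 0
    return L                                                      # step 9
```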
Step six, generating the visual image depth information: the imaging plane has m × n pixel points; the depth information of each pixel point is computed in a loop, generating the corresponding m × n depth values, which constitute the visual image depth information.
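Step six as a plain loop over the imaging plane, sketched on top of pixel_depth above (unvectorized for clarity):

```python
import numpy as np

def depth_image(cam, elevation, m, n, L_max=2000.0, dL=1.0):
    """Depth value for every pixel of an imaging plane m pixels wide
    and n pixels high; rows index height, columns index width."""
    depth = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            depth[i, j] = pixel_depth(i, j, cam, elevation, L_max, dL)
    return depth
```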
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (6)

1. A method for calculating the depth information of a visual image of a flying car based on a three-dimensional map, characterized by comprising the following steps:
step one, determining position and posture information of a flying automobile;
step two, determining position and posture information of the camera;
step three, determining a camera visual image imaging plane;
step four, introducing a digital three-dimensional map;
step five, calculating depth information of each pixel point;
the specific process of step five is as follows:
for each pixel point, a farthest imaging distance $L_{max}$ and an update step $\Delta L$ are set; for the imaging distance $L$, the corresponding camera-frame coordinates are computed and converted to coordinates in the inertial frame, and the elevation at that position is obtained from the digital three-dimensional map; if the altitude of the position is less than the corresponding elevation, or less than 0, the position is the ground-feature position of the pixel, an altitude below the corresponding elevation indicating that the imaging line of sight is blocked by an obstacle and an altitude below 0 indicating that it is blocked by the ground; the depth information is then calculated from the ground-feature position of the pixel;
in the fifth step, the depth information calculation flow for the ith row and jth column pixels P (i, j) in the imaging plane is as follows:
step 1, set the imaging distance $L = 0$ and the imaging state $S = 1$, and enter the loop;
step 2, if the imaging distance is less than the farthest imaging distance ($L < L_{max}$) and the imaging state is $S = 1$, update the imaging distance by one update step, $L \leftarrow L + \Delta L$; otherwise go to step 9;
step 3, calculate the position of pixel P(i, j) at imaging distance $L$ in the camera coordinate system:
$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L;$$
step 4, convert the position coordinates of pixel P(i, j) at imaging distance $L$ into the inertial coordinate system:
$$[X_L, Y_L, Z_L]^{T} = C_b^n \, C_c^b \, [x_c, y_c, z_c]^{T} + [X, Y, Z]^{T},$$
where $C_c^b = (C_b^c)^{T}$ and $[X, Y, Z]^{T}$ is the camera position in the inertial frame;
step 5, convert the position coordinates of pixel P(i, j) at imaging distance $L$ into longitude-latitude-altitude form:
$$\mathrm{lon}_L = \mathrm{lon}_0 + \frac{X_L}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat}_L = \mathrm{lat}_0 + \frac{Y_L}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt}_L = \mathrm{alt}_0 + Z_L;$$
step 6, if the altitude $\mathrm{alt}_L$ of pixel P(i, j) at imaging distance $L$ is less than 0, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 7;
step 7, look up the elevation h corresponding to the longitude and latitude of step 5 in the digital three-dimensional map;
step 8, if the altitude $\mathrm{alt}_L$ is less than the corresponding elevation h, set the imaging state $S = 0$, jump out of the loop and go to step 9; otherwise go to step 2 and continue the loop;
step 9, calculate the depth information of pixel P(i, j):
$$\mathrm{depth}(i, j) = L,$$
i.e., the imaging distance at which the loop exited;
and step six, generating visual image depth information.
2. The method for calculating the visual image depth information of the flying car based on the three-dimensional map according to claim 1, characterized in that the specific process of step one is as follows:
the position and attitude information of the flying car can be determined from the vehicle-mounted inertial navigation system and the vehicle-mounted satellite navigation system; the information is expressed in an inertial frame, which can be chosen as the East-North-Up coordinate system; the position is given in longitude-latitude-altitude form lon0, lat0, alt0, where lon0 is the longitude of the flying car, lat0 is its latitude, and alt0 is its flight altitude; the attitude is given in Euler angle form yaw, pitch, roll, where yaw is the yaw angle, pitch is the pitch angle, and roll is the roll angle; the Euler angle coordinate transformation uses the Z-Y-X rotation sequence.
3. The method for calculating the visual image depth information of the flying car based on the three-dimensional map according to claim 1, characterized in that the specific process of step two is as follows:
the camera is mounted at a given position on the flying car, with coordinates (x, y, z) in the body coordinate system; the camera position is determined by converting these coordinates into (X, Y, Z) in the East-North-Up inertial frame, and the longitude-latitude-altitude position (lon, lat, alt) of the camera is obtained by a distance-to-longitude/latitude conversion;
the camera position in the East-North-Up inertial frame is expressed as
$$[X, Y, Z]^{T} = C_b^n \, [x, y, z]^{T},$$
where $C_b^n$ is the transformation matrix from the body coordinate system to the inertial coordinate system, $C_n^b = (C_b^n)^{T}$ is the transformation matrix from the inertial coordinate system to the body coordinate system, and $C_n^b$ is computed from the attitude Euler angles of the flying car as
$$C_n^b = R_x(\mathrm{roll}) \, R_y(\mathrm{pitch}) \, R_z(\mathrm{yaw});$$
the longitude, latitude and altitude of the camera are calculated as
$$\mathrm{lon} = \mathrm{lon}_0 + \frac{X}{R_e \cos(\mathrm{lat}_0)} \cdot \frac{180}{\pi}, \qquad \mathrm{lat} = \mathrm{lat}_0 + \frac{Y}{R_e} \cdot \frac{180}{\pi}, \qquad \mathrm{alt} = \mathrm{alt}_0 + Z,$$
where $R_e$ is the Earth radius, for which the mean value of 6371000 m can be taken;
according to the installation position and installation angle of the camera on the flying car, the imaging attitude of the camera is determined, i.e., the coordinate transformation matrix $C_b^c$ from the flying car body coordinate system to the camera coordinate system, which is determined by the installation position and installation angle of the camera.
4. The method for calculating the visual image depth information of the flying car based on the three-dimensional map according to claim 1, characterized in that the specific process of step three is as follows:
according to the camera parameter information, the camera visual imaging plane is determined; the imaging plane is a two-dimensional plane composed of rows and columns of pixels; if the imaging plane is m pixels wide and n pixels high, the pixel P(i, j) in row i and column j of the imaging plane corresponds, at imaging distance $L$, to the position coordinates $(x_c, y_c, z_c)$ in the camera coordinate system, calculated as
$$x_c = \frac{(j - u_0)\, L}{d_x f}, \qquad y_c = \frac{(i - v_0)\, L}{d_y f}, \qquad z_c = L,$$
where $L$ is the imaging distance, $u_0$ is the width pixel position of the imaging plane center point, $v_0$ is the height pixel position of the imaging plane center point, $d_x$ is the number of pixels per 1 mm in the width direction of the imaging plane, $d_y$ is the number of pixels per 1 mm in the height direction, and $f$ is the focal length in mm.
5. The method for calculating the visual image depth information of the flying car based on the three-dimensional map according to claim 1, characterized in that the specific process of step four is as follows:
according to the position information of the flying car, a digital three-dimensional map of the vicinity is imported; the digital map consists of a series of longitude-latitude-height sequences describing the positions and corresponding heights of nearby ground features; the nearby digital three-dimensional map is cache-loaded, and as the flying car moves, the map of its vicinity is continuously cached and loaded.
6. The method for calculating the visual image depth information of the flying car based on the three-dimensional map according to claim 1, characterized in that the specific process of step six is as follows:
the imaging plane has m × n pixel points; the depth information of each pixel point is computed in a loop, generating the corresponding m × n depth values, which constitute the visual image depth information.
CN202311308155.7A 2023-10-11 2023-10-11 Method for calculating depth information of visual image of aerocar based on three-dimensional map Active CN117058209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311308155.7A CN117058209B (en) 2023-10-11 2023-10-11 Method for calculating depth information of visual image of aerocar based on three-dimensional map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311308155.7A CN117058209B (en) 2023-10-11 2023-10-11 Method for calculating depth information of visual image of aerocar based on three-dimensional map

Publications (2)

Publication Number Publication Date
CN117058209A (en) 2023-11-14
CN117058209B (en) 2024-01-23

Family

ID=88666705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311308155.7A Active CN117058209B (en) 2023-10-11 2023-10-11 Method for calculating depth information of visual image of aerocar based on three-dimensional map

Country Status (1)

Country Link
CN (1) CN117058209B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117706595B (en) * 2024-02-01 2024-05-17 山东欧龙电子科技有限公司 Combined butt joint guiding method for split type aerocar

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN108406731A (en) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 A kind of positioning device, method and robot based on deep vision
WO2022083038A1 (en) * 2020-10-23 2022-04-28 浙江商汤科技开发有限公司 Visual positioning method and related apparatus, device and computer-readable storage medium
WO2023030062A1 (en) * 2021-09-01 2023-03-09 中移(成都)信息通信科技有限公司 Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
WO2023104207A1 (en) * 2021-12-10 2023-06-15 深圳先进技术研究院 Collaborative three-dimensional mapping method and system
CN114387341A (en) * 2021-12-16 2022-04-22 四川腾盾科技有限公司 Method for calculating six-degree-of-freedom pose of aircraft through single aerial observation image
CN116429098A (en) * 2023-03-21 2023-07-14 上海机电工程研究所 Visual navigation positioning method and system for low-speed unmanned aerial vehicle
CN116753937A (en) * 2023-05-29 2023-09-15 杭州领飞科技有限公司 Unmanned plane laser radar and vision SLAM-based real-time map building fusion method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Depth image construction based on a 360-degree laser scanner; Fu Feiran; Xu Jing; Li Fushan; Fang Ming; Zhao Xiaojun; Journal of Changchun University of Science and Technology (Natural Science Edition) (06); full text *
Indoor three-dimensional color point cloud map construction based on an RGB-D camera; Zhao Kuangjun; Journal of Harbin University of Commerce (Natural Science Edition) (01); full text *
Depth image acquisition method fusing vision and laser point clouds; Wang Dongmin; Peng Yongsheng; Li Yongle; Journal of Military Transportation University (10); full text *

Also Published As

Publication number Publication date
CN117058209A (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN109634304B (en) Unmanned aerial vehicle flight path planning method and device and storage medium
CN110969655B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN107014380B (en) Combined navigation method based on visual navigation and inertial navigation of aircraft
KR102295809B1 (en) Apparatus for acquisition distance for all directions of vehicle
CN113970922B (en) Point cloud data processing method, intelligent driving control method and device
CN117058209B (en) Method for calculating depth information of visual image of aerocar based on three-dimensional map
CN109631911B (en) Satellite attitude rotation information determination method based on deep learning target recognition algorithm
EP3842317B1 (en) Method of and electronic device for computing data for controlling operation of self driving car (sdc)
US20190114490A1 (en) Information processing device, learned model, information processing method, and computer program product
US11069080B1 (en) Collaborative airborne object tracking systems and methods
CN110887486B (en) Unmanned aerial vehicle visual navigation positioning method based on laser line assistance
CN111402328B (en) Pose calculation method and device based on laser odometer
US20230150518A1 (en) Calibration of sensors in autonomous vehicle applications
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN111510704A (en) Method for correcting camera dislocation and device using same
CN117253029A (en) Image matching positioning method based on deep learning and computer equipment
CN111401190A (en) Vehicle detection method, device, computer equipment and storage medium
CN112985398A (en) Target positioning method and system
CN116740681A (en) Target detection method, device, vehicle and storage medium
WO2020223868A1 (en) Terrain information processing method and apparatus, and unmanned vehicle
Qi et al. Detection and tracking of a moving target for UAV based on machine vision
US20210011490A1 (en) Flight control method, device, and machine-readable storage medium
CN109029451A (en) Small drone autonomic positioning method based on networked beacons correction
CN114136314A (en) Auxiliary attitude calculation method for aerospace vehicle
CN113312403A (en) Map acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant