CN114677483A - Three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle

Info

Publication number
CN114677483A
CN114677483A (application CN202210187826.8A)
Authority
CN
China
Prior art keywords
image
flight
dimensional map
target area
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210187826.8A
Other languages
Chinese (zh)
Inventor
支晓栋
潘晓丽
裴小东
宋强
王丹华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cetc Yizhihang Ningxia Technology Co ltd
Original Assignee
Cetc Yizhihang Ningxia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cetc Yizhihang Ningxia Technology Co ltd filed Critical Cetc Yizhihang Ningxia Technology Co ltd
Priority to CN202210187826.8A
Publication of CN114677483A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04B - TRANSMISSION
    • H04B7/00 - Radio transmission systems, i.e. using radiation field
    • H04B7/14 - Relay systems
    • H04B7/15 - Active relay systems
    • H04B7/185 - Space-based or airborne stations; Stations for satellite systems
    • H04B7/18502 - Airborne stations
    • H04B7/18506 - Communications with or from aircraft, i.e. aeronautical mobile service
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/66 - Remote control of cameras or camera parts, e.g. by remote control devices

Abstract

The application provides a three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle. Video data shot by the unmanned aerial vehicle while flying each of multiple flight routes is received in real time; for each flight route, the video data is stitched in real time to obtain a strip image corresponding to that flight route; an ortho-situation image is obtained according to the strip images corresponding to the multiple flight routes; and the current three-dimensional map of the target area is obtained according to the ortho-situation image and the historical three-dimensional map of the target area. Because video shooting and video image stitching proceed in parallel, the ortho-situation image is obtained faster, the three-dimensional map of the target city can be obtained quickly from the ortho-situation image and the historical city three-dimensional map, time delay is avoided, and timeliness is improved.

Description

Three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle
Technical Field
The application relates to the field of computer technology, and in particular to a three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle.
Background
With economic development, urban environments have become more complex and change ever more frequently, and high-rise buildings and complex underground corridors greatly weaken the ability to survey a city. A three-dimensional model of the city can therefore be constructed to facilitate city management and monitoring. A city three-dimensional model effectively improves situational awareness of the urban environment, makes environmental information everywhere in the city available in real time, and provides emergency personnel dispatched into a cluttered urban environment with accurate geospatial data.
Three-dimensional modeling of a city requires images shot by an unmanned aerial vehicle. At present, situational perception and acquisition by unmanned aerial vehicles is mainly based on conventional 2D and 3D mapping: the unmanned aerial vehicle carries a high-resolution orthographic camera or a five-lens oblique photography camera and operates along a planned route; after landing, the acquired image information is fed into a dedicated computer workstation with professional software for processing to obtain the product data, and the original GIS map is then locally updated to obtain the three-dimensional map of the city.
However, when this method is used for urban three-dimensional modeling, depending on the precision requirements, computing the updated GIS map can take from several hours to tens of hours, so timeliness is poor.
Disclosure of Invention
The application provides a three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle, which can quickly obtain a three-dimensional map of a target city from video data shot by the unmanned aerial vehicle and a historical city three-dimensional map, thereby avoiding time delay and improving timeliness.
In a first aspect, the application provides a three-dimensional map modeling method based on video shot by an unmanned aerial vehicle, comprising:
receiving, in real time, video data shot by the unmanned aerial vehicle while flying each flight route of multiple flight routes, wherein the multiple flight routes are routes flown by the unmanned aerial vehicle over a target area;
for each flight route, stitching in real time according to the video data to obtain a strip image corresponding to that flight route, wherein the strip image is an image within a preset distance range of the flight route;
obtaining an ortho-situation image according to the strip images corresponding to the multiple flight routes;
and obtaining a current three-dimensional map of the target area according to the ortho-situation image and a historical three-dimensional map of the target area.
Optionally, the method further includes:
while receiving in real time the video data shot by the unmanned aerial vehicle on each flight route of the multiple flight routes, receiving in real time the flight attitude data sent by the unmanned aerial vehicle when shooting the video data;
the stitching in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route includes:
extracting frames from the video data to obtain multiple images to be stitched;
for each image to be stitched, acquiring from the flight attitude data the flight attitude data of the unmanned aerial vehicle when shooting that image;
and stitching the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route.
Optionally, the stitching the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route includes:
stitching the multiple images to be stitched in real time using at least one of an area mosaic, a single-image mosaic, a hover mosaic, and a route mosaic, according to each image to be stitched and its corresponding flight attitude data, to obtain the strip image corresponding to each flight route.
Optionally, the obtaining a current three-dimensional map of the target area according to the ortho-situation image and the historical three-dimensional map of the target area includes:
establishing a projection relationship between spatial points in a triangulated irregular network (TIN) model and the ortho-situation image through a computer vision positioning method used in oblique photogrammetry;
establishing, according to the projection relationship between the spatial points in the TIN model and the ortho-situation image, a projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image;
mapping, according to the projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image, the texture information of the ortho-situation image onto a digital elevation model (DEM) to obtain a three-dimensional map of the target area corresponding to the ortho-situation image;
and superimposing the three-dimensional map of the target area corresponding to the ortho-situation image onto the historical three-dimensional map of the target area, and updating the historical three-dimensional map to obtain the three-dimensional map of the target area.
Optionally, before the mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image, the method further includes:
performing geometric correction on the texture information of the ortho-situation image to obtain corrected texture information of the ortho-situation image;
the mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image includes:
mapping the corrected texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image.
Optionally, the stitching in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route further includes:
saving the strip image corresponding to each flight route in the time order in which the strip images are obtained.
In a second aspect, the application provides a three-dimensional map modeling device based on video shot by an unmanned aerial vehicle, comprising:
a receiving module, configured to receive in real time video data shot by the unmanned aerial vehicle while flying each flight route of multiple flight routes, wherein the multiple flight routes are routes flown by the unmanned aerial vehicle over a target area;
a first stitching module, configured to stitch in real time according to the video data for each flight route to obtain a strip image corresponding to that flight route, wherein the strip image is an image within a preset distance range of the flight route;
a second stitching module, configured to obtain an ortho-situation image according to the strip images corresponding to the multiple flight routes;
and an updating module, configured to obtain a current three-dimensional map of the target area according to the ortho-situation image and a historical three-dimensional map of the target area.
Optionally, the receiving module is further configured to receive in real time, while receiving the video data shot by the unmanned aerial vehicle on each flight route, the flight attitude data sent by the unmanned aerial vehicle when shooting the video data;
correspondingly, when stitching in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route, the first stitching module is specifically configured to:
extract frames from the video data to obtain multiple images to be stitched;
for each image to be stitched, acquire from the flight attitude data the flight attitude data of the unmanned aerial vehicle when shooting that image;
and stitch the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route.
Optionally, when stitching the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route, the first stitching module is specifically configured to:
stitch the multiple images to be stitched in real time using at least one of an area mosaic, a single-image mosaic, a hover mosaic, and a route mosaic, according to each image to be stitched and its corresponding flight attitude data, to obtain the strip image corresponding to each flight route.
Optionally, when obtaining the current three-dimensional map of the target area according to the ortho-situation image and the historical three-dimensional map of the target area, the updating module is specifically configured to:
establish a projection relationship between spatial points in a triangulated irregular network (TIN) model and the ortho-situation image through a computer vision positioning method used in oblique photogrammetry;
establish, according to the projection relationship between the spatial points in the TIN model and the ortho-situation image, a projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image;
map, according to the projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image, the texture information of the ortho-situation image onto a digital elevation model (DEM) to obtain a three-dimensional map of the target area corresponding to the ortho-situation image;
and superimpose the three-dimensional map of the target area corresponding to the ortho-situation image onto the historical three-dimensional map of the target area, and update the historical three-dimensional map to obtain the three-dimensional map of the target area.
Optionally, before mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image, the updating module is further configured to:
perform geometric correction on the texture information of the ortho-situation image to obtain corrected texture information of the ortho-situation image;
the mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image includes:
mapping the corrected texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image.
Optionally, the device further comprises: a saving module;
the saving module is configured to, after the first stitching module stitches in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route, save the strip image corresponding to each flight route in the time order in which the strip images are obtained.
In a third aspect, the present application provides an electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the method of any of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer-executable instruction or a program is stored, and when the computer-executable instruction or the program is executed by a processor, the method according to any one of the first aspect is implemented.
In a fifth aspect, the present application provides a computer program, wherein the computer program is characterized in that when being executed by a processor, the computer program implements the method according to any one of the first aspect.
According to the three-dimensional map modeling method and device based on video shot by the unmanned aerial vehicle, video data shot by the unmanned aerial vehicle while flying each of multiple flight routes is received in real time; for each flight route, the video data is stitched in real time to obtain a strip image corresponding to that flight route; an ortho-situation image is obtained according to the strip images corresponding to the multiple flight routes; and the current three-dimensional map of the target area is obtained according to the ortho-situation image and the historical three-dimensional map of the target area. In this way, video shooting and video image stitching proceed in parallel, which speeds up obtaining the ortho-situation image; the three-dimensional map of the target city can then be obtained quickly from the ortho-situation image and the historical city three-dimensional map, time delay is avoided, and timeliness is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a three-dimensional map modeling method based on video shot by an unmanned aerial vehicle according to an embodiment of the present application;
fig. 2 is a schematic view of a flight path of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 3 is a diagram of a video data transmission link of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a three-dimensional map modeling device based on video shot by an unmanned aerial vehicle according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the protection scope of the present application.
With economic development, cities grow ever larger and urban environments become more complex and changeable, so traditional city monitoring and management methods can no longer meet current needs. In particular, when an emergency such as a fire occurs in a city, the on-site situation and the urban traffic conditions must be known. A three-dimensional model of the city can therefore be constructed to obtain environmental information of every part of the city in time and to facilitate city management and monitoring. A city three-dimensional model effectively improves situational awareness of the urban environment, makes environmental information everywhere in the city available in real time, and provides emergency personnel dispatched into a cluttered urban environment with accurate geospatial data.
An unmanned aerial vehicle has the advantage of aerial operation: the onboard camera provides dynamic, three-dimensional, varied viewing angles; the vehicle can be deployed rapidly after arriving on site; and it can observe from multiple angles over a large area. City images are therefore shot by an unmanned aerial vehicle for three-dimensional modeling of the city. In the prior art, situational perception and acquisition by unmanned aerial vehicles is mainly based on conventional 2D and 3D mapping: the unmanned aerial vehicle carries a high-resolution orthographic camera or a five-lens oblique photography camera and operates along a planned route; after landing, the acquired image information is fed into a dedicated computer workstation with professional software for processing to obtain the product data, and the original GIS map is then locally updated to obtain the three-dimensional map of the city.
However, when this method is used for urban three-dimensional modeling, depending on the precision requirements, computing the updated GIS map can take from several hours to tens of hours. The resulting GIS map therefore shows the city as it was several hours to tens of hours earlier, and its timeliness and reference value are poor.
To solve these technical problems in the prior art, the application provides a three-dimensional map modeling method and device based on video shot by an unmanned aerial vehicle. Video is shot by the unmanned aerial vehicle and transmitted to an image processing device in real time, and the image processing device stitches the video images in real time as it receives them, so an ortho-situation image is available as soon as the unmanned aerial vehicle finishes shooting. Video shooting and video image stitching thus proceed in parallel, which speeds up obtaining the ortho-situation image; the three-dimensional map of the target city can then be obtained quickly from the ortho-situation image and the historical city three-dimensional map, time delay is avoided, and timeliness is improved.
Fig. 1 shows a three-dimensional map modeling method based on video shot by an unmanned aerial vehicle according to an embodiment of the present application. As shown in fig. 1, the method is executed by a device with an image processing function, for example a graphics processing unit (GPU), and includes:
S101, receiving in real time video data shot by the unmanned aerial vehicle while flying each flight route of multiple flight routes.
The multiple flight routes are routes flown by the unmanned aerial vehicle over the target area.
In this embodiment, as shown in fig. 2, multiple flight routes are planned for the target area according to its environmental characteristics and the modeling requirements, and the unmanned aerial vehicle shoots video data along these flight routes.
While flying a planned flight route, the unmanned aerial vehicle shoots video of the target area and synchronously transmits the shot video data to the image processing device in real time.
Optionally, as shown in fig. 3, the unmanned aerial vehicle 31 sends the shot video data to the ground station 32 through a relay image transmission link established at the airborne end, and the ground station 32 sends the video data to the server through the wireless network provided by the base station 33, so that high-altitude image transmission relay can be performed in a safe area and link blockage in the city is avoided.
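For illustration, the relay forwarding performed at the ground station can be pictured with the following minimal sketch, assuming the video downlink arrives as UDP datagrams; the port number and server address are illustrative placeholders, not values from the application:

```python
import socket

DRONE_PORT = 5600                 # assumed downlink port at the ground station
SERVER = ("203.0.113.10", 5600)   # placeholder image-processing server address


def relay() -> None:
    """Forward video transport packets from the drone to the server unmodified."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", DRONE_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        packet, _ = rx.recvfrom(65535)  # one datagram from the drone downlink
        tx.sendto(packet, SERVER)       # relay over the base-station network
```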
Optionally, to further speed up the urban three-dimensional modeling, multiple unmanned aerial vehicles may cooperate. For example, as shown in fig. 2, one unmanned aerial vehicle is assigned to each flight route; each vehicle shoots video data of the surroundings of its flight route and synchronously sends the video data to the image processing device in real time.
S102, for each flight route, stitching in real time according to the video data to obtain a strip image corresponding to that flight route.
The strip image is an image within a preset distance range of the flight route.
In this embodiment, the image processing device stitches the video data transmitted by the unmanned aerial vehicle in real time, according to the video data and the flight attitude data of the unmanned aerial vehicle when shooting the video data, to obtain the strip image corresponding to each flight route.
While the unmanned aerial vehicle flies a flight route, the shot video data covers the area within the preset distance range of that flight route, so the image stitched from the video data is an image within the preset distance range of the flight route.
It should be noted that the strip images obtained from video data shot by unmanned aerial vehicles on adjacent flight routes may overlap.
Optionally, the method further includes:
S105, while receiving in real time the video data shot by the unmanned aerial vehicle on each flight route of the multiple flight routes, receiving in real time the flight attitude data sent by the unmanned aerial vehicle when shooting the video data.
Specifically, the flight attitude of the unmanned aerial vehicle changes while it flies a flight route, and video shot at the same position differs under different flight attitudes. Therefore, while transmitting the video to the image processing device in real time, the unmanned aerial vehicle synchronously transmits its flight attitude data at the moment the video data is shot.
Accordingly, one possible implementation of S102 includes S1021-S1023:
S1021, extracting frames from the video data to obtain multiple images to be stitched.
Each image to be stitched carries the flight attitude data of the unmanned aerial vehicle corresponding to that image.
Specifically, the image processing device stitches images rather than raw video, so it extracts frames from the received video data in time order to obtain multiple images to be stitched, each of which carries the corresponding flight attitude data of the unmanned aerial vehicle.
S1022, for each image to be stitched, acquiring from the flight attitude data the flight attitude data of the unmanned aerial vehicle when shooting that image.
Specifically, when an image to be stitched is obtained, its time point within the video data shot by the unmanned aerial vehicle is known, and the flight attitude data of the unmanned aerial vehicle at the moment the image was shot is obtained from the flight attitude data according to that time point.
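For concreteness, S1021 and S1022 can be sketched as follows. This is a minimal illustration assuming the video is readable by OpenCV and the attitude records carry timestamps on the same clock as the video; the names `Attitude`, `extract_frames` and `match_attitude` are hypothetical, not part of the application:

```python
import bisect
from dataclasses import dataclass

import cv2  # OpenCV, used here only to decode the video


@dataclass
class Attitude:
    t: float      # capture timestamp (seconds)
    lat: float    # latitude at the shooting moment
    lon: float    # longitude at the shooting moment
    alt: float    # flight height in metres
    yaw: float
    pitch: float
    roll: float


def extract_frames(video_path: str, step_s: float = 1.0):
    """S1021: extract frames to be stitched at a fixed time interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(fps * step_s)))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield idx / fps, frame  # (time offset within the video, image)
        idx += 1
    cap.release()


def match_attitude(t_frame: float, track: list) -> Attitude:
    """S1022: find the attitude record closest in time to a frame."""
    times = [a.t for a in track]            # track is sorted by time
    i = bisect.bisect_left(times, t_frame)
    candidates = track[max(0, i - 1):i + 1]
    return min(candidates, key=lambda a: abs(a.t - t_frame))
```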
S1023, stitching the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route.
Specifically, the shooting angle, shooting height and other information of each image to be stitched can be obtained from the flight attitude data of the unmanned aerial vehicle corresponding to that image, and the multiple images to be stitched are mosaicked in real time according to this information.
Optionally, when mosaicking the multiple images to be stitched in real time according to each image and its corresponding flight attitude data, at least one of an area mosaic, a single-image mosaic, a hover mosaic, and a route mosaic may be used to obtain the strip image corresponding to each flight route.
The area mosaic works as follows: after takeoff, the unmanned aerial vehicle flies a mosaicking pattern over a designated area at a preset speed along a route grid, and the image processing device automatically enters the mosaicking state according to the flight attitude data. The video data obtained during the mosaicking flight has a certain image overlap. The unmanned aerial vehicle transmits the shot video data back to the ground station in real time, and the image processing device stitches the images to be stitched obtained by frame extraction. Specifically, frames covering the designated area are extracted from the video data to obtain the images to be stitched, and each image is fused with the geographic position corresponding to its capture moment. Features are then extracted from the multiple images to be stitched; because the video data has a certain image overlap, the same or similar features exist among the features extracted from different images. By rapidly matching the extracted features and performing orientation calculation, the aerial-triangulation point cloud is used for rapid image stitching, yielding the mosaic result. Meanwhile, the mosaic result is saved in a directory created according to the mosaic start time, which is convenient for offline playback analysis later.
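The feature extraction, fast matching and stitching step of the area mosaic can be pictured with the sketch below. It substitutes ORB features and a RANSAC homography for the aerial-triangulation point-cloud stitching named above, so it is an illustrative simplification rather than the claimed method; it assumes overlapping color images:

```python
import cv2
import numpy as np


def stitch_pair(base: np.ndarray, new: np.ndarray) -> np.ndarray:
    """Warp `new` into the frame of `base` using matched ORB features.

    A simplified stand-in for the area mosaic's stitching step: a real
    strip mosaic would seed the match with the flight attitude data and
    refine the result with aerial triangulation.
    """
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)

    # Overlapping imagery yields the same or similar features in both frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = base.shape[:2]
    warped = cv2.warpPerspective(new, H, (w, h))
    mask = warped.sum(axis=2) > 0   # write only where the new image lands
    out = base.copy()
    out[mask] = warped[mask]
    return out
```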
The single-image mosaic can be performed at any time during the flight of the unmanned aerial vehicle. The unmanned aerial vehicle records in real time the longitude and latitude at the shooting moment, its attitude data and so on, and transmits one frame of image data from the shot video back to the image processing device in real time. According to the longitude and latitude at the shooting moment, the attitude data of the unmanned aerial vehicle, and the three-dimensional data in the existing GIS map, the image processing device converts the received image data to the corresponding position and angle of the GIS map and mosaics it onto the map. Obtaining the three-dimensional map of the target city through the single-image mosaic serves the inspection of key positions along long linear areas of the city and is suitable for routine patrol work.
In the hover mosaic, the unmanned aerial vehicle hovers at a position in the air, the gimbal is adjusted to shoot a specific target in the city at a suitable angle, and the video data is transmitted back to the image processing device in real time. The image processing device extracts frames from the video data to obtain images to be stitched, calculates their mapping coordinates according to the longitude and latitude at the shooting moment, the attitude data of the unmanned aerial vehicle and the gimbal angle, and mosaics the images onto the GIS map. The hover mosaic is suitable for observing a fixed position along a time axis and provides a basis for adjusting the next plan. The generated mosaic results can be arranged and stitched in time order, and the urban environment can be checked visually once they are opened.
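The mapping-coordinate calculation of the hover mosaic can be illustrated as below: the camera's aim point on the ground is derived from the flight height and the combined vehicle and gimbal angles, then shifted into latitude and longitude. This sketch assumes flat terrain at the takeoff altitude, whereas the application's device would use the GIS three-dimensional data; function names are illustrative:

```python
import math


def ground_offset(height_m: float, pitch_deg: float, yaw_deg: float):
    """Project the camera's optical axis onto flat ground.

    `pitch_deg` is the downward tilt of the line of sight below the
    horizon; `yaw_deg` is its heading. Flat terrain is an assumption.
    """
    pitch = math.radians(pitch_deg)
    if pitch <= 0:
        raise ValueError("camera must point below the horizon")
    ground_range = height_m / math.tan(pitch)  # horizontal reach of the aim point
    yaw = math.radians(yaw_deg)
    dn = ground_range * math.cos(yaw)          # metres north of the drone
    de = ground_range * math.sin(yaw)          # metres east of the drone
    return dn, de


def offset_to_latlon(lat: float, lon: float, dn: float, de: float):
    """Shift a WGS-84 position by small local north/east offsets in metres."""
    dlat = dn / 111_320.0
    dlon = de / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```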
The route mosaic is a mosaic performed by the image processing device along the flight route of the unmanned aerial vehicle. The unmanned aerial vehicle flies a specific route and transmits video data to the image processing device on the ground during the flight. The image processing device processes the received video data with a sequential front/back frame orientation method: each new frame only needs to be computed against the video data and relative attitude data of the previous frame, and so on, so an image map of the flight process is obtained and projected onto the three-dimensional map of the target city. The route mosaic is also suitable for pipeline inspection by unmanned aerial vehicles. Likewise, the mosaic result is saved in a directory created according to the mosaic start time, which is convenient for offline playback analysis later.
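One way to realize the sequential front/back frame orientation of the route mosaic is relative pose estimation between consecutive frames, sketched below with an essential-matrix solve; the application does not specify this particular algorithm, and the camera intrinsic matrix `K` is assumed known from calibration:

```python
import cv2
import numpy as np


def relative_pose(prev_img: np.ndarray, next_img: np.ndarray, K: np.ndarray):
    """Orient a frame against only its predecessor, as the route mosaic does."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(prev_img, None)
    k2, d2 = orb.detectAndCompute(next_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Essential matrix from matched points, then decompose into R and t.
    E, inliers = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t  # rotation and unit-scale translation of the new frame
```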
S103, obtaining an ortho-situation image according to the strip images corresponding to the multiple flight routes.
In this embodiment, while obtaining the strip image of each flight route, the image processing device stitches the strip images of adjacent flight routes in real time based on the strip images obtained so far to obtain the ortho-situation image; that is, obtaining the strip image of each flight route and obtaining the ortho-situation image from the strip images corresponding to the multiple flight routes can proceed in parallel.
S104, obtaining a three-dimensional map of the target area according to the ortho-situation image and the historical three-dimensional map of the target area.
In this embodiment, the historical three-dimensional map of the target area may, for example, be downloaded from a website, or have been obtained by the three-dimensional map modeling method shown in this application before the current time point.
The historical three-dimensional map of the target area is updated according to the ortho-situation image to obtain the three-dimensional map of the target area.
One possible implementation of S104 includes S1041-S1044:
S1041, establishing a projection relationship between spatial points in the triangulated irregular network (TIN) model and the ortho-situation image through a computer vision positioning method used in oblique photogrammetry.
Specifically, the spatial points in the TIN model and the ortho-situation image share the same set of exterior orientation elements (position plus attitude), which guarantees projection consistency between the spatial points and the image.
S1042, establishing, according to the projection relationship between the spatial points in the TIN model and the ortho-situation image, a projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image.
Specifically, the spatial coordinates of each point in the ortho-situation image are obtained from the flight attitude data of the unmanned aerial vehicle, and the spatial points in the TIN model are matched with the points in the ortho-situation image, yielding the projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image.
S1043, mapping, according to the projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image, the texture information of the ortho-situation image onto a digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image.
Specifically, the image-space coordinates of the spatial points of the TIN in the image are back-calculated from the collinearity equations. Because a projection relationship exists between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image, the texture coordinates corresponding to the ortho-situation image are obtained from these image-space coordinates. By traversing the texture coordinates corresponding to the ortho-situation image, the ortho-situation image is fused with the three-dimensional map of the target area, yielding the three-dimensional map of the target area corresponding to the ortho-situation image.
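For reference, the collinearity equations used in this back-calculation take the standard photogrammetric form (the notation here is the textbook convention, not defined in the application itself):

$$x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}, \qquad y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}$$

where $(X, Y, Z)$ is a TIN vertex, $(X_S, Y_S, Z_S)$ is the camera position, $a_i, b_i, c_i$ are the entries of the rotation matrix built from the attitude angles, $f$ is the focal length, and $(x_0, y_0)$ is the principal point. Evaluating these equations for each TIN vertex yields its image-space coordinates, and hence its texture coordinates in the ortho-situation image.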
Before S1043, geometric correction needs to be performed on the texture information of the ortho-situation image, so that the finally obtained three-dimensional map of the target area corresponding to the ortho-situation image is closer to the environmental information of the actual target area.
S1044, superimposing the three-dimensional map of the target area corresponding to the ortho-situation image onto the historical three-dimensional map of the target area, and updating the historical three-dimensional map to obtain the three-dimensional map of the target area.
Specifically, according to the spatial coordinate relationship between the points of the three-dimensional map of the target area corresponding to the ortho-situation image and the points of the historical three-dimensional map of the target area, each point of the new map is mapped onto the matching point of the historical map and the environmental information of that point is updated, yielding the three-dimensional map of the target area.
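The local update of S1044 can be pictured as overwriting matched map cells with the fresher observation. The following is a schematic sketch only; the cell keying and the 0.5 m grid resolution are assumptions for illustration, not values from the application:

```python
GRID = 0.5  # assumed metres per map cell


def cell(x: float, y: float) -> tuple:
    """Quantise a ground coordinate to a map cell key."""
    return round(x / GRID), round(y / GRID)


def update_map(history: dict, patch_points) -> dict:
    """Overlay freshly reconstructed points onto the historical 3-D map.

    `history` maps cell -> (elevation, texture); `patch_points` yields
    tuples (x, y, elevation, texture). Matched cells take the newer
    observation, which is the local-update behaviour described above.
    """
    updated = dict(history)
    for x, y, z, tex in patch_points:
        updated[cell(x, y)] = (z, tex)
    return updated
```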
In this embodiment, video data shot by the unmanned aerial vehicle while flying each of multiple flight routes is received in real time; for each flight route, the video data is stitched in real time to obtain a strip image corresponding to that flight route; an ortho-situation image is obtained according to the strip images corresponding to the multiple flight routes; and the current three-dimensional map of the target area is obtained according to the ortho-situation image and the historical three-dimensional map of the target area. Video shooting and video image stitching thus proceed in parallel, which speeds up obtaining the ortho-situation image; the three-dimensional map of the target city can be obtained quickly from the ortho-situation image and the historical city three-dimensional map, time delay is avoided, and timeliness is improved.
Optionally, after S104, the present application further includes:
S105, sending the three-dimensional map of the target area to the terminal device.
Specifically, when an emergency occurs in an area, frontline staff need to know detailed on-site information in time, for example the fire intensity at each position of a fire scene and whether people are trapped at each position. Therefore, after the three-dimensional map of the target area is obtained, it is sent to the frontline staff in time so that they can make a work plan according to the detailed on-site information.
Optionally, in this application, the unmanned aerial vehicle can also carry a gimbal pod to track and locate more distant targets. The airborne end locks onto the target through a visual tracking algorithm, while the pod's laser sensor measures the distance to the target. The unmanned aerial vehicle transmits the distance to the target and the POS (position and orientation) data of the unmanned aerial vehicle and the pod in real time, so the longitude and latitude of the target can be calculated from the distance to the target and the POS information, and the target is superimposed onto the ortho-situation image. This facilitates tracking of the target.
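The target geolocation described here can be sketched as decomposing the laser slant range along the line of sight given by the combined vehicle-and-pod POS. The flat-earth small-offset conversion is the same as in the hover-mosaic sketch above, and all names are illustrative:

```python
import math


def locate_target(lat: float, lon: float, alt: float,
                  yaw_deg: float, pitch_deg: float, slant_range_m: float):
    """Geolocate a tracked target from the drone/pod POS and laser range."""
    pitch = math.radians(pitch_deg)   # downward tilt of the line of sight
    yaw = math.radians(yaw_deg)       # heading of the line of sight
    horiz = slant_range_m * math.cos(pitch)
    dn, de = horiz * math.cos(yaw), horiz * math.sin(yaw)
    # Small-offset conversion from local metres to degrees (WGS-84 approx.).
    t_lat = lat + dn / 111_320.0
    t_lon = lon + de / (111_320.0 * math.cos(math.radians(lat)))
    t_alt = alt - slant_range_m * math.sin(pitch)
    return t_lat, t_lon, t_alt
```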
Fig. 4 is a schematic structural diagram of a three-dimensional map modeling device based on video shot by an unmanned aerial vehicle according to an embodiment of the present application. As shown in fig. 4, the device includes: a receiving module 41, a first stitching module 42, a second stitching module 43 and an updating module 44. Optionally, the device may further include: a saving module 45.
The receiving module is configured to receive in real time video data shot by the unmanned aerial vehicle while flying each flight route of multiple flight routes, wherein the multiple flight routes are routes flown by the unmanned aerial vehicle over a target area;
the first stitching module is configured to stitch in real time according to the video data for each flight route to obtain a strip image corresponding to that flight route, wherein the strip image is an image within a preset distance range of the flight route;
the second stitching module is configured to obtain an ortho-situation image according to the strip images corresponding to the multiple flight routes;
and the updating module is configured to obtain a current three-dimensional map of the target area according to the ortho-situation image and a historical three-dimensional map of the target area.
Optionally, the receiving module is further configured to receive in real time, while receiving the video data shot by the unmanned aerial vehicle on each flight route, the flight attitude data sent by the unmanned aerial vehicle when shooting the video data;
correspondingly, when stitching in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route, the first stitching module is specifically configured to:
extract frames from the video data to obtain multiple images to be stitched;
for each image to be stitched, acquire from the flight attitude data the flight attitude data of the unmanned aerial vehicle when shooting that image;
and stitch the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route.
Optionally, when stitching the multiple images to be stitched in real time according to each image to be stitched and its corresponding flight attitude data to obtain the strip image corresponding to each flight route, the first stitching module is specifically configured to:
stitch the multiple images to be stitched in real time using at least one of an area mosaic, a single-image mosaic, a hover mosaic, and a route mosaic, according to each image to be stitched and its corresponding flight attitude data, to obtain the strip image corresponding to each flight route.
Optionally, when obtaining the current three-dimensional map of the target area according to the ortho-situation image and the historical three-dimensional map of the target area, the updating module is specifically configured to:
establish a projection relationship between spatial points in a triangulated irregular network (TIN) model and the ortho-situation image through a computer vision positioning method used in oblique photogrammetry;
establish, according to the projection relationship between the spatial points in the TIN model and the ortho-situation image, a projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image;
map, according to the projection relationship between each gray-scale triangle mesh in the TIN model and the image texture of the ortho-situation image, the texture information of the ortho-situation image onto a digital elevation model (DEM) to obtain a three-dimensional map of the target area corresponding to the ortho-situation image;
and superimpose the three-dimensional map of the target area corresponding to the ortho-situation image onto the historical three-dimensional map of the target area, and update the historical three-dimensional map to obtain the three-dimensional map of the target area.
Optionally, before mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image, the updating module is further configured to:
perform geometric correction on the texture information of the ortho-situation image to obtain corrected texture information of the ortho-situation image;
the mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image includes:
mapping the corrected texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image.
Optionally, the device further comprises: a saving module;
the saving module is configured to, after the first stitching module stitches in real time according to the video data for each flight route to obtain the strip image corresponding to each flight route, save the strip image corresponding to each flight route in the time order in which the strip images are obtained.
For the three-dimensional map modeling device based on video shot by an unmanned aerial vehicle provided in this embodiment of the present application, the specific implementation process can refer to the method embodiments above; the implementation principle and technical effects are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes: a processor 501 and a memory 502.
The memory 502 stores computer-executable instructions.
The processor 501 executes the computer-executable instructions stored in the memory 502, causing the processor 501 to perform the method according to any one of the embodiments described above.
For the electronic device provided in this embodiment of the present application, the specific implementation process can refer to the method embodiments above; the implementation principle and technical effects are similar and are not repeated here.
In the embodiment shown in fig. 5, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and so on. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory may comprise high speed RAM memory, and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The embodiments of the present application further provide a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the three-dimensional map modeling method based on video shot by an unmanned aerial vehicle according to the foregoing method embodiments is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. A readable storage medium can be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the device.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments above. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A three-dimensional map modeling method based on videos shot by an unmanned aerial vehicle is characterized by comprising the following steps:
receiving video data shot by the unmanned aerial vehicle in the flight process of each flight route in a plurality of sections of flight routes in real time, wherein the plurality of sections of flight routes are routes flying by the unmanned aerial vehicle in a target area;
splicing the sections of flight routes in real time according to the video data to obtain strip images corresponding to the sections of flight routes, wherein the strip images are images within a preset distance range of the flight routes;
acquiring an orthostatic situation image according to the strip image corresponding to each flight route in the multiple sections of flight routes;
and obtaining the current three-dimensional map of the target area according to the orthostatic situation image and the historical three-dimensional map of the target area.
2. The method of claim 1, further comprising:
the method comprises the steps that in the process of receiving video data shot by an unmanned aerial vehicle in the flying process of each flight route in a plurality of sections of flight routes in real time, flight attitude data sent by the unmanned aerial vehicle when the video data are shot are received in real time;
The real-time splicing is carried out on each flight route according to the video data to obtain a strip image corresponding to each flight route, and the method comprises the following steps:
performing frame extraction on the video data to obtain a plurality of images to be spliced;
acquiring flight attitude data of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the images to be spliced from the flight attitude data according to each image to be spliced;
and splicing the multiple spliced images in real time according to each image to be spliced and the flight attitude data corresponding to the image to be spliced to obtain a strip image corresponding to each flight path.
3. The method according to claim 2, wherein the splicing the multiple band spliced images in real time according to each image to be spliced and flight attitude data corresponding to the image to be spliced to obtain a band image corresponding to each flight path comprises:
and splicing the multiple spliced images in real time by adopting at least one method of an area jigsaw, a single jigsaw, a hovering jigsaw and a route jigsaw according to each spliced image and the flight attitude data corresponding to the spliced image to obtain a strip image corresponding to each flight route.
4. The method according to any one of claims 1-3, wherein the obtaining a current three-dimensional map of the target area from the ortho-situation image and a historical three-dimensional map of the target area comprises:
establishing a projection relationship between spatial points in a triangulated irregular network (TIN) model and the ortho-situation image by a computer vision positioning method from oblique photogrammetry;
establishing, according to the projection relationship between the spatial points in the TIN model and the ortho-situation image, a projection relationship between each triangular mesh in the TIN model and the image texture of the ortho-situation image;
mapping texture information of the ortho-situation image onto a digital elevation model (DEM) according to the projection relationship between each triangular mesh in the TIN model and the image texture of the ortho-situation image, to obtain a three-dimensional map of the target area corresponding to the ortho-situation image;
and overlaying the three-dimensional map of the target area corresponding to the ortho-situation image onto the historical three-dimensional map of the target area, thereby updating the historical three-dimensional map to obtain the current three-dimensional map of the target area.
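A simplified version of the projection-and-texturing chain is sketched below. The patent's computer-vision positioning step and per-triangle TIN projection are collapsed into a single known 3x4 projection matrix and per-grid-point nearest-neighbour sampling; all parameter names are assumptions made for the example.

```python
import numpy as np

def project_points(P, pts3d):
    """Project Nx3 world points through a 3x4 projection matrix to pixels."""
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = (P @ homo.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def texture_dem(dem_xyz, ortho_img, P):
    """Nearest-neighbour texturing: colour each DEM grid point with the
    ortho-image pixel it projects onto; black where it falls outside.

    dem_xyz: (H, W, 3) world coordinates of the DEM grid."""
    h, w, _ = dem_xyz.shape
    uv = project_points(P, dem_xyz.reshape(-1, 3))
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ih, iw = ortho_img.shape[:2]
    ok = (u >= 0) & (u < iw) & (v >= 0) & (v < ih)
    tex = np.zeros((h * w, 3), dtype=ortho_img.dtype)
    tex[ok] = ortho_img[v[ok], u[ok]]
    return tex.reshape(h, w, 3)
```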
5. The method according to claim 4, wherein before the mapping the texture information of the ortho-situation image onto the digital elevation model (DEM) to obtain the three-dimensional map of the target area corresponding to the ortho-situation image, the method further comprises:
performing geometric correction on the texture information of the ortho-situation image to obtain corrected texture information;
and wherein the mapping the texture information of the ortho-situation image onto the DEM to obtain the three-dimensional map of the target area corresponding to the ortho-situation image comprises:
mapping the corrected texture information of the ortho-situation image onto the DEM to obtain the three-dimensional map of the target area corresponding to the ortho-situation image.
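Geometric correction of the texture can be illustrated with a perspective warp. The four-corner correspondence below is a stand-in for whatever correction model the patent actually uses.

```python
import numpy as np
import cv2

def correct_texture(texture, src_quad, dst_quad, out_size):
    """Geometric correction as a perspective warp: remap the texture so that
    src_quad lands on dst_quad. src_quad/dst_quad are (4, 2) float32 arrays
    of corresponding corner points; out_size is (width, height)."""
    H = cv2.getPerspectiveTransform(np.float32(src_quad), np.float32(dst_quad))
    return cv2.warpPerspective(texture, H, out_size)
```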
6. The method according to any one of claims 1-5, wherein after the stitching the video data for each flight route in real time to obtain the strip image corresponding to each flight route, the method further comprises:
saving the strip image corresponding to each flight route in the chronological order in which the strip images were obtained.
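One simple way to preserve that chronological order is to encode a UTC timestamp in each file name, so that lexical order of the saved files equals the order in which the strips were produced. The file layout below is an assumption for illustration.

```python
import os
from datetime import datetime, timezone

import cv2

def save_strip(strip, route_id, out_dir="strips"):
    """Name each file by its completion time so lexical order of the saved
    files equals the chronological order in which strips were produced."""
    os.makedirs(out_dir, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
    path = os.path.join(out_dir, f"{stamp}_route{route_id}.png")
    cv2.imwrite(path, strip)
    return path
```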
7. A three-dimensional map modeling apparatus based on video shot by an unmanned aerial vehicle, comprising:
a receiving module, configured to receive, in real time, video data shot by the unmanned aerial vehicle during the flight of each of a plurality of flight routes, wherein the plurality of flight routes are routes flown by the unmanned aerial vehicle over a target area;
a first stitching module, configured to stitch the video data for each flight route in real time to obtain a strip image corresponding to each flight route, wherein the strip image is an image within a preset distance range of the flight route;
a second stitching module, configured to obtain an ortho-situation image from the strip images corresponding to the plurality of flight routes;
and an updating module, configured to obtain a current three-dimensional map of the target area from the ortho-situation image and a historical three-dimensional map of the target area.
8. An electronic device, comprising: a processor and a memory;
wherein the memory stores computer-executable instructions;
and the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method of any one of claims 1-6.
9. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method of any one of claims 1-6.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202210187826.8A; filed 2022-02-28; Three-dimensional map modeling method and device based on unmanned aerial vehicle shooting video; status: Pending; publication: CN114677483A

Priority Applications (1)
CN202210187826.8A; priority date 2022-02-28; filing date 2022-02-28; Three-dimensional map modeling method and device based on unmanned aerial vehicle shooting video

Applications Claiming Priority (1)
CN202210187826.8A; priority date 2022-02-28; filing date 2022-02-28; Three-dimensional map modeling method and device based on unmanned aerial vehicle shooting video

Publications (1)
CN114677483A; publication date 2022-06-28

Family ID: 82071482

Family Applications (1)
CN202210187826.8A; priority date 2022-02-28; filing date 2022-02-28; Three-dimensional map modeling method and device based on unmanned aerial vehicle shooting video; status: Pending

Country Status (1)
CN: CN114677483A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination