CN110675484A - Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera - Google Patents


Info

Publication number
CN110675484A
CN110675484A · Application CN201910792361.7A
Authority
CN
China
Prior art keywords
compound eye
eye camera
dimensional
data
shooting
Prior art date
Legal status
Pending
Application number
CN201910792361.7A
Other languages
Chinese (zh)
Inventor
王汉熙
黄鑫
蒋靳
郑晓钧
胡佳文
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN201910792361.7A priority Critical patent/CN110675484A/en
Publication of CN110675484A publication Critical patent/CN110675484A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention belongs to the field of three-dimensional digital scene construction, and provides a dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye cameras. All data acquired by the method satisfy space-time consistency, which guarantees that the constructed three-dimensional scene corresponds to a single time section; continuous shooting further enables dynamic three-dimensional scene acquisition. The method improves both the speed and the quality of three-dimensional model construction.

Description

Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera
Technical Field
The invention belongs to the field of three-dimensional digital scene construction, and particularly relates to a dynamic three-dimensional digital scene construction method with space-time consistency based on a compound eye camera.
Background
Three-dimensional scene construction takes two forms: single-piece production oriented toward function display, and large-scale production oriented toward engineering application. Many fields, such as battlefield environment construction in the military domain, emergency scene construction in disaster rescue, and dynamic urban scene construction in security protection, place very high timeliness requirements on three-dimensional scene construction. Because of these special requirements, a large-scale production mode with real-time acquisition is needed, in which the raw scene data are collected by a plurality of compound eye cameras working in cooperation. If a single compound eye camera is used for shooting, timeliness cannot be satisfied; moreover, dynamic objects may be missed or captured repeatedly, the original images used for panoramic stitching are acquired on different time sections, and the actual spatial position of an object in the scene differs from the moment at which it appears in the panoramic image, so that the virtual scene deviates from reality and the requirement of space-time consistency cannot be met.
Space-time consistent shooting means that all pictures used for image stitching are taken at the same moment under a unified clock, so that the image acquisition time agrees with the spatial position and attitude of every object in the image at that moment, and the panoramic image presents a global view of a single time section. When a single camera shoots continuously, images of different spatial positions are captured at different time nodes, with an interval between any two exposures; depending on the size of the scene, the whole shooting process may take from tens of minutes to months. Because the images differ in both space and time, the three-dimensional scene stitched from them mixes different time sections: many dynamic objects are missed or captured repeatedly, yielding a three-dimensional virtual scene that differs from the actual one.
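The scale of this timing problem can be made concrete with a back-of-envelope calculation. The numbers and function names below are assumed for illustration and do not come from the patent:

```python
# Back-of-envelope comparison of scene "time drift": one camera visiting
# N stations sequentially vs. N cameras triggered by a unified clock.
# All numbers are hypothetical illustrations, not values from the patent.
def sequential_duration_s(n_stations, seconds_per_station):
    """Elapsed time when a single camera photographs every station in turn."""
    return n_stations * seconds_per_station

def synchronized_duration_s(seconds_per_station):
    """Elapsed time when one hovering camera per station fires simultaneously."""
    return seconds_per_station

N, T = 60, 90.0          # assumed: 60 stations, 90 s to fly, settle, and shoot
single = sequential_duration_s(N, T)      # the scene drifts for the whole flight
multi = synchronized_duration_s(T)        # effectively one time section
```

With these assumed values the sequential survey spans 1.5 hours of scene evolution, while the synchronized fleet compresses acquisition to a single 90-second window.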
Oblique photogrammetry is an emerging technology developed from traditional photogrammetry. A compound eye camera is carried by an unmanned aerial vehicle; each compound eye camera has a plurality of sub-eyes that acquire images simultaneously from vertical, lateral, and fore-and-aft angles, so the side texture information of ground objects can be captured relatively completely. Combined with existing oblique-image data processing software with cooperative parallel processing capability, large-area urban three-dimensional models can be built quickly, greatly improving the production efficiency of three-dimensional models. With its advantages of multiple viewing angles, high realism, and full-element coverage, oblique photography is widely applied in urban construction, land inspection, emergency disaster relief, resource development, new rural planning, and other fields.
The existing three-dimensional scene construction technology has the following defects:
1. The existing vector three-dimensional modeling technology is slow and time-consuming; achieving an ideal display effect requires long rendering runs, and some simulation effects such as water marks, firelight, and smoke must even be added separately. The complexity of three-dimensional scene production grows geometrically with the richness of detail in the actual scene;
2. Although the existing single-lens stitching technology can be designed for a larger viewing angle, its field of view still cannot achieve global coverage;
3. In the prior art, shooting is mostly performed with a single compound eye camera, which cannot satisfy space-time consistency; dynamic objects may be missed or captured repeatedly, and the actual spatial position of an object in the scene differs from the moment at which it appears in the panoramic image, so the virtual scene deviates from reality.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a compound-eye camera-based dynamic three-dimensional digital scene construction method with space-time consistency, so as to obtain a three-dimensional image meeting the space-time consistency shooting principle.
The main process of the invention is: planning, positioning, calibration, acquisition, and construction. First the imaging range of a compound eye camera is determined, and the task allocation module determines the shooting points of the target area from this range. Unmanned aerial vehicles carrying compound eye cameras then fly to the shooting points with the aid of the positioning module, and each performs time calibration, pose calibration, and occupation calibration. Synchronous shooting is then carried out under the control of a unified clock, and finally oblique-image data processing is applied to the captured pictures to quickly build a three-dimensional model of the target area. To implement this process, the invention comprises the following five modules: a task allocation module, a positioning module, a data calibration module, a data acquisition module, and a data processing module.
Based on the space-time consistency shooting requirement, the ground is covered and shot by multiple compound-eye-camera drones: multi-rotor unmanned aerial vehicles carry the compound eye cameras, the observation area of each compound eye camera is a regular-hexagonal acquisition cell, several multi-rotor unmanned aerial vehicles hover above the target area and acquire data simultaneously, and the fewest compound eye cameras are used to cover the target area. Data acquired in this way are guaranteed to be space-time consistent.
(1) Task allocation module
According to the set target area and the overlapping coverage model, the task allocation module calculates the occupation node of each compound eye camera in the acquisition grid plan of the target area, determines the shooting positions, attitudes, and parameters, and transmits these data to the compound eye cameras.
The overlapping coverage model requires that the fields of view of two adjacent compound eye cameras overlap by a certain degree to satisfy the requirements of later oblique-image three-dimensional modeling. As shown in fig. 4, each regular hexagon represents the area that one compound eye camera can capture; adjacent areas overlap so that the target area is completely covered. At the same time, since more unmanned aerial vehicles and compound eye cameras mean higher cost and a more complex three-dimensional model, the requirement should be met with the fewest unmanned aerial vehicles. The constraints of the overlapping coverage model in the invention are therefore that the fields of view of any two adjacent compound eye cameras must overlap by a certain degree and that the fields of view of all participating compound eye cameras fully cover the target area, while the objective function is to minimize the number of compound eye cameras.
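The overlap constraint between two adjacent hexagonal footprints can be checked numerically. The sketch below is a minimal illustration under stated assumptions, not the patent's model: it estimates, by grid sampling, what fraction of one camera's regular-hexagonal footprint is also covered by its neighbour at a given centre spacing. The flat-top hexagon convention and the function names are assumptions:

```python
import math

def in_hex(x, y, cx, cy, r):
    """Point-in test for a flat-top regular hexagon with circumradius r,
    centred at (cx, cy)."""
    dx, dy = abs(x - cx), abs(y - cy)
    return dy <= math.sqrt(3) / 2 * r and dy <= math.sqrt(3) * (r - dx)

def overlap_fraction(spacing, r=1.0, n=200):
    """Grid-sampled estimate of area(H1 ∩ H2) / area(H1) for two hexagonal
    footprints whose centres are `spacing` apart along x (adjacent cells)."""
    inside = both = 0
    for i in range(n):
        for j in range(n):
            # sample the bounding square of H1 on a regular grid
            x = -r + 2 * r * (i + 0.5) / n
            y = -r + 2 * r * (j + 0.5) / n
            if in_hex(x, y, 0.0, 0.0, r):
                inside += 1
                if in_hex(x, y, spacing, 0.0, r):
                    both += 1
    return both / inside
```

A planner could use such a function to pick the largest centre spacing whose overlap fraction still exceeds the threshold required by the oblique-modeling software.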
(2) Positioning module
The positioning module uses GPS positioning, and a GPS positioner is arranged in each compound eye camera and used for receiving GPS positioning signals and determining the position coordinates of each compound eye camera and the shooting area.
(3) Data calibration module
At regular intervals, the data calibration module sends time calibration, pose calibration, and occupation calibration commands to the compound eye cameras, performing clock calibration, pose calibration, and occupation calibration so that the acquired data are guaranteed to be space-time consistent.
(4) Data acquisition module
The data acquisition module shoots pictures from the air and comprises the compound eye cameras, the gimbals, and the unmanned aerial vehicles; it may further comprise unmanned vehicles, unmanned ships, and the like that shoot mainly from the ground, from which the side texture information and three-dimensional structure information of the target area are obtained.
The compound eye camera can be suspended below the unmanned aerial vehicle to obtain the field of view below it, and one exposure can acquire scene data over a large ground field of view. Its structure is shown in fig. 2: 6 sub-eyes with a certain inclination angle are mounted around the circumference of the compound eye camera housing in regular-hexagonal symmetry, and 1 vertically shooting sub-eye is mounted at the bottom, forming a clustered downward-looking field of view. For a single sub-eye, the light inlet of the lens is circular and the true imaging area is also circular, but the photosensitive element (such as a CCD or CMOS) is rectangular, so the obtained image is a rectangle inscribed in the circle; for example, the image obtained by the bottom sub-eye is the rectangle in the middle of fig. 3. The six surrounding sub-eyes are inclined relative to the ground, so their ground footprints are no longer rectangles but isosceles trapezoids. Oblique projection requires the pictures taken by the sub-eyes to overlap by more than 60%, so the range one compound eye camera can capture is the regular hexagon enclosed by the thick lines in fig. 3.
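The ground area covered by the vertical bottom sub-eye follows from elementary projection geometry. The sketch below assumes a pinhole camera in level flight with illustrative field-of-view values; neither the function name nor the numbers come from the patent:

```python
import math

def nadir_footprint(altitude_m, hfov_deg, vfov_deg):
    """Ground rectangle (width, height in metres) imaged by the vertically
    shooting bottom sub-eye, assuming a pinhole model and level flight.
    hfov/vfov are the sensor's horizontal and vertical fields of view."""
    w = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    h = 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)
    return w, h

# assumed example: 100 m hover altitude, 90° x 60° field of view
w, h = nadir_footprint(100.0, 90.0, 60.0)
```

The tilted side sub-eyes would need the same projection carried out per trapezoid corner ray, which is why their footprints widen with distance from nadir.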
The gimbal is a suspension device that fixes the compound eye camera. It keeps the camera stable and fine-tunes its position so that the camera bottom always faces the ground, while also damping the effect of vibration on shooting. The gimbal is mounted below the unmanned aerial vehicle, and the compound eye camera is connected to the unmanned aerial vehicle through the gimbal.
(5) Data processing module
And the data processing module receives pictures shot by all compound eye cameras participating in shooting, and then constructs a real three-dimensional model of the target area based on the oblique photogrammetry technology. The specific modeling process comprises the following steps:
and step S1, carrying out pre-processing such as dodging, color evening, geometric correction and the like on the original image shot by the compound eye camera, eliminating the data congenital defect and ensuring the integrity of data and data required by modeling.
And step S2, performing aerial triangulation calculation on the multi-view images, wherein the aerial triangulation calculation comprises the steps of relative orientation, control point measurement, absolute orientation, block adjustment and the like, and obtaining high-precision image external orientation elements and images after distortion correction to prepare for later model creation and texture extraction.
And step S3, obtaining high-density three-dimensional point cloud of the surface building by adopting a multi-view image dense matching technology, constructing a triangular network (TIN) model, and generating a three-dimensional model with a white membrane.
And step S4, registering the texture image with the accurate coordinate information with the three-dimensional TIN model, thereby realizing automatic texture mapping and finally generating the city live-action three-dimensional model.
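The TIN construction of step S3 can be sketched in miniature. Real pipelines triangulate an irregular dense point cloud, typically with a Delaunay triangulation; the toy below instead triangulates a regular elevation grid by splitting each cell into two triangles, which illustrates the vertex-list plus triangle-index data structure a TIN uses. Everything here is an illustrative assumption, not the patent's algorithm:

```python
def grid_tin(heights):
    """Build a simple TIN from a regular grid of elevation samples by
    splitting each grid cell into two triangles — a simplified stand-in
    for the Delaunay triangulation applied to dense photogrammetric
    point clouds. heights[r][c] is the elevation at grid node (r, c).
    Returns (vertices, triangles) where triangles index into vertices."""
    rows, cols = len(heights), len(heights[0])
    verts = [(c, r, heights[r][c]) for r in range(rows) for c in range(cols)]
    idx = lambda r, c: r * cols + c
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            a, b = idx(r, c), idx(r, c + 1)
            d, e = idx(r + 1, c), idx(r + 1, c + 1)
            tris.append((a, b, d))   # upper-left triangle of the cell
            tris.append((b, e, d))   # lower-right triangle of the cell
    return verts, tris
```

Texture mapping (step S4) then assigns each triangle a patch of the oriented imagery, which is why the exterior orientation elements from step S2 must be solved first.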
Compared with the prior art, the invention has the following advantages:
1. The acquired data satisfy space-time consistency, which guarantees that the constructed three-dimensional scene corresponds to a single time section; continuous shooting further enables dynamic three-dimensional scene acquisition;
2. Compound eye cameras are used for shooting, so multiple directions are captured simultaneously and shooting efficiency is improved;
3. The invention establishes an overlapping coverage model that meets the requirements with fewer compound eye cameras, reducing cost and data redundancy and lightening the burden of three-dimensional scene construction;
4. The oblique photogrammetry technology used in the invention yields multi-angle images with complete geographic information, provides rich real texture information for three-dimensional modeling, reduces the cost of three-dimensional modeling, and improves its speed and quality.
Drawings
FIG. 1 is a flow chart of a dynamic three-dimensional digital scene construction method of the present invention.
Fig. 2 is a schematic view of a compound eye camera according to the present invention.
FIG. 3 is a schematic view of the imaging range of the compound eye camera according to the present invention.
Fig. 4 is a schematic diagram of relative position relationships of multiple compound-eye cameras.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the embodiment provides a method for constructing a dynamic three-dimensional digital scene with space-time consistency based on a compound eye camera, and the specific flow is as follows:
step 1: determining an imaging range of a compound eye camera
The compound eye camera is suspended below the unmanned aerial vehicle to obtain the field of view below it; one exposure can acquire scene data over a large ground field of view. Its structure is shown in fig. 2: 6 sub-eyes with a certain inclination angle are mounted around the circumference of the compound eye camera housing in regular-hexagonal symmetry, and 1 vertically shooting sub-eye is mounted at the bottom, forming a clustered downward-looking field of view. For a single sub-eye, the light inlet of the lens is circular and the true imaging area is also circular, but the photosensitive element (such as a CCD or CMOS) is rectangular, so the obtained image is a rectangle inscribed in the circle; for example, the image obtained by the bottom sub-eye is the rectangle in the middle of fig. 3. The six surrounding sub-eyes are inclined relative to the ground, so their ground footprints are no longer rectangles but isosceles trapezoids. Oblique projection requires the pictures taken by the sub-eyes to overlap by more than 60%, so the range one compound eye camera can capture is the regular hexagon enclosed by the thick lines in fig. 3.
Step 2: determining the coordinates of the shooting point according to the target area
To meet the space-time consistency shooting requirement, the invention uses a plurality of compound eye cameras to cover and shoot the ground: multi-rotor unmanned aerial vehicles carry the compound eye cameras, the observation area of each compound eye camera is a regular-hexagonal acquisition cell, several multi-rotor unmanned aerial vehicles hover above the target area and acquire data simultaneously, and the fewest compound eye cameras are used to cover the target area. Data acquired in this way are guaranteed to be space-time consistent.
According to the set target area and the overlapping coverage model, the task allocation module calculates the occupation node of each compound eye camera in the acquisition grid plan of the target area, determines the shooting positions, attitudes, and parameters, and transmits these data to the compound eye cameras.
The overlapping coverage model requires that the fields of view of two adjacent compound eye cameras i and k overlap by a certain degree to satisfy the requirements of later oblique-image three-dimensional modeling. As shown in fig. 4, each regular hexagon represents the area one compound eye camera can capture, and adjacent areas overlap so that the target area is completely covered.
The constraints of the overlapping coverage model in the invention are therefore that the fields of view of any two adjacent compound eye cameras must overlap by a certain degree and that the fields of view of all participating compound eye cameras fully cover the target area, while the objective function is to minimize the number of compound eye cameras. The coordinates of each compound eye camera are then obtained by solving this model with intelligent optimization algorithms such as simulated annealing, genetic algorithms, or ant colony algorithms, using mathematical tools such as MATLAB.
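Before running any of those optimization algorithms, a simple lattice heuristic already gives candidate hover stations. The sketch below is an illustrative stand-in, not the patent's optimization: it lays a flat-top hexagonal lattice over the target rectangle, tightened by a shrink factor so adjacent footprints overlap. The function name and the shrink heuristic are assumptions:

```python
import math

def hex_lattice_stations(width, height, r, shrink=0.8):
    """Candidate hover stations on a hexagonal lattice covering a
    width x height target rectangle. r is the circumradius of one
    camera's hexagonal footprint; shrink < 1 tightens the lattice so
    adjacent footprints overlap (illustrative heuristic only).
    Returns a list of (x, y) station coordinates."""
    dx = 1.5 * r * shrink            # column pitch of a flat-top hex lattice
    dy = math.sqrt(3) * r * shrink   # row pitch
    stations = []
    col = 0
    x = 0.0
    while x <= width + dx:
        y = 0.0 if col % 2 == 0 else dy / 2   # odd columns offset half a row
        while y <= height + dy:
            stations.append((x, y))
            y += dy
        x += dx
        col += 1
    return stations
```

Such a lattice could seed a simulated-annealing or genetic search that then prunes stations while keeping full coverage, matching the minimum-camera objective stated above.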
Step 3: the compound eye camera reaches the designated position
After the shooting coordinates of each compound eye camera are determined, each compound eye camera is connected to its unmanned aerial vehicle through a gimbal and reaches the designated position with the assistance of the positioning module.
The gimbal is a suspension device that fixes the compound eye camera; it keeps the camera stable and fine-tunes its position so that the camera bottom always faces the ground, while also damping the effect of vibration on shooting.
The positioning module uses GPS positioning; a GPS locator built into each compound eye camera receives GPS positioning signals and guides the camera precisely to its shooting point.
The data calibration module sends time calibration, pose calibration, and occupation calibration commands to the compound eye cameras at regular intervals, performing clock calibration, pose calibration, and occupation calibration.
Step 4: all compound eye cameras shoot simultaneously
After receiving the shooting command from the host computer, every compound eye camera shoots under the control of the unified clock, guaranteeing the space-time consistency of shooting, and transmits the captured picture data together with its position and attitude information back to the host computer.
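The unified-clock trigger can be illustrated with a toy simulation. Each "camera" below is a thread that waits for the same absolute fire time and records when it actually shot; the spread of those timestamps is the residual synchronization error. This is an assumption-laden sketch — the patent's unified clock spans real devices over a radio link, not threads in one process:

```python
import threading
import time

def simulate_synchronized_capture(n_cameras=5):
    """Toy simulation of triggering all compound eye cameras at one
    instant under a shared clock: each camera thread polls the shared
    monotonic clock until the agreed absolute fire time, then records
    its own capture timestamp."""
    fire_at = time.monotonic() + 0.2          # shared absolute trigger time
    timestamps = [None] * n_cameras

    def camera(i):
        while time.monotonic() < fire_at:     # poll the shared clock
            time.sleep(0.001)
        timestamps[i] = time.monotonic()      # moment this camera "shot"

    threads = [threading.Thread(target=camera, args=(i,))
               for i in range(n_cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timestamps

stamps = simulate_synchronized_capture()
spread = max(stamps) - min(stamps)   # residual desynchronization in seconds
```

In this simulation the spread stays within a few milliseconds of scheduler jitter, which is the kind of bound a real unified clock would need to certify per shot.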
Step 5: oblique image data processing and three-dimensional model construction
After the pictures taken by all compound eye cameras participating in the shooting are received, a real three-dimensional model of the target area is constructed using oblique photogrammetry. The specific modeling process comprises the following steps:
(5-1) pre-process the original images shot by the compound eye cameras, including dodging, color balancing, and geometric correction, to remove inherent defects and ensure the completeness of the data required for modeling;
(5-2) perform aerial triangulation on the multi-view images, including relative orientation, control point measurement, absolute orientation, and block adjustment, to obtain high-precision exterior orientation elements and distortion-corrected images in preparation for later model creation and texture extraction;
(5-3) obtain a high-density three-dimensional point cloud of the surface buildings by multi-view dense image matching, construct a triangulated irregular network (TIN) model, and generate a white-model (untextured) three-dimensional model;
(5-4) register the texture images carrying accurate coordinate information with the three-dimensional TIN model, thereby achieving automatic texture mapping and finally generating the real-scene three-dimensional city model.
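The data flow of steps (5-1) through (5-4) can be sketched as a four-stage pipeline. Every stage body below is a placeholder; the function names and dictionary fields are illustrative assumptions, not APIs from the patent or any real photogrammetry package:

```python
# Skeleton of the four-stage oblique-photogrammetry pipeline (5-1)-(5-4).
# All names and data shapes are illustrative assumptions.
def preprocess(raw_images):
    """(5-1) dodging, color balancing, geometric correction."""
    return [{"pixels": img, "corrected": True} for img in raw_images]

def aerial_triangulation(images):
    """(5-2) relative/absolute orientation, control points, block adjustment."""
    return {"images": images,
            "exterior_orientation": [(0.0, 0.0, 0.0)] * len(images)}

def dense_match_and_tin(block):
    """(5-3) dense matching -> point cloud -> TIN white model."""
    return {"tin": "white-model", "source_images": len(block["images"])}

def texture_map(tin_model, images):
    """(5-4) register textures with the TIN -> real-scene model."""
    return {"model": tin_model, "textured": True}

raw = ["img_a", "img_b", "img_c"]        # stand-ins for sub-eye pictures
imgs = preprocess(raw)
model = texture_map(dense_match_and_tin(aerial_triangulation(imgs)), imgs)
```

The strict stage ordering mirrors the dependency stated in the text: textures cannot be registered (5-4) until orientation (5-2) and the TIN (5-3) exist.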
Details not described in the present specification belong to the prior art known to those skilled in the art.
It will be understood by those skilled in the art that the foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (4)

1. A method for constructing a dynamic three-dimensional digital scene with space-time consistency based on compound eye cameras, characterized in that the method uses a plurality of compound eye cameras to cover and shoot the ground, multi-rotor unmanned aerial vehicles carry the compound eye cameras, the observation area of each compound eye camera is a regular-hexagonal acquisition cell, a plurality of multi-rotor unmanned aerial vehicles hover above the target area and acquire data simultaneously, and the fewest compound eye cameras are used to cover the target area; the method comprises the following steps:
(1) the task allocation module calculates an occupation node of each compound eye camera in a target area acquisition grid plan according to a set target area and an overlapping coverage model, determines shooting positions, postures and parameters, and transmits the data to the compound eye cameras;
(2) the positioning module uses GPS for positioning, and a GPS positioner is arranged in each compound eye camera and used for receiving GPS positioning signals and determining the position coordinates of each compound eye camera and a shooting area;
(3) the data calibration module sends a time calibration command, a pose calibration command and an occupation calibration command to the compound eye camera at intervals, and performs clock calibration, pose calibration and occupation calibration to ensure that the acquired data has time-space consistency;
(4) the data acquisition module shoots pictures from the air and comprises the compound eye cameras, gimbals, and unmanned aerial vehicles; the data acquisition module also shoots from the ground using unmanned vehicles and unmanned ships, so as to acquire the side texture information and three-dimensional structure information of the target area;
(5) and the data processing module receives pictures shot by all compound eye cameras participating in shooting, and then constructs a real three-dimensional model of the target area based on the oblique photogrammetry technology.
2. The method for constructing a dynamic three-dimensional digital scene with space-time consistency based on compound eye cameras according to claim 1, characterized in that: the compound eye camera in step (4) can be suspended below the unmanned aerial vehicle to obtain the field of view below it, and one exposure can acquire scene data over a large ground field of view; 6 sub-eyes with a certain inclination angle are mounted around the circumference of the compound eye camera housing in regular-hexagonal symmetry, and 1 vertically shooting sub-eye is mounted at the bottom, forming a clustered downward-looking field of view; for a single sub-eye, the light inlet of the lens is circular and the true imaging area is also circular, but the photosensitive element is rectangular, so the obtained image is a rectangle inscribed in the circle; the six surrounding sub-eyes are inclined relative to the ground, so their fields of view are isosceles trapezoids, and for oblique projection the pictures taken by the sub-eyes are required to overlap by more than 60%.
3. The method for constructing a dynamic three-dimensional digital scene with space-time consistency based on compound eye cameras according to claim 1, characterized in that: the gimbal in step (4) is a suspension device that fixes the compound eye camera; it keeps the camera stable and fine-tunes its position so that the camera bottom always faces the ground, while also damping the effect of vibration on shooting; the gimbal is mounted below the unmanned aerial vehicle, and the compound eye camera is connected to the unmanned aerial vehicle through the gimbal.
4. The method for constructing a dynamic three-dimensional digital scene with space-time consistency based on compound eye cameras according to claim 1, characterized in that: the specific process of constructing the real three-dimensional model of the target area based on oblique photogrammetry in step (5) comprises the following steps:
Step S1: pre-process the original images shot by the compound eye cameras, including dodging, color balancing, and geometric correction, to remove inherent defects and ensure the completeness of the data required for modeling;
Step S2: perform aerial triangulation on the multi-view images, including relative orientation, control point measurement, absolute orientation, and block adjustment, to obtain high-precision exterior orientation elements and distortion-corrected images in preparation for later model creation and texture extraction;
Step S3: obtain a high-density three-dimensional point cloud of the surface buildings by multi-view dense image matching, construct a triangulated irregular network (TIN) model, and generate a white-model (untextured) three-dimensional model;
Step S4: register the texture images carrying accurate coordinate information with the three-dimensional TIN model, thereby achieving automatic texture mapping and finally generating the real-scene three-dimensional city model.
CN201910792361.7A 2019-08-26 2019-08-26 Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera Pending CN110675484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910792361.7A CN110675484A (en) 2019-08-26 2019-08-26 Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera


Publications (1)

Publication Number Publication Date
CN110675484A true CN110675484A (en) 2020-01-10

Family

ID=69075582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910792361.7A Pending CN110675484A (en) 2019-08-26 2019-08-26 Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera

Country Status (1)

Country Link
CN (1) CN110675484A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118585A (en) * 2018-08-01 2019-01-01 武汉理工大学 A kind of virtual compound eye camera system and its working method of the building three-dimensional scenic acquisition meeting space-time consistency

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118585A (en) * 2018-08-01 2019-01-01 武汉理工大学 A kind of virtual compound eye camera system and its working method of the building three-dimensional scenic acquisition meeting space-time consistency

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU, Jie: "Research on Key Technologies of Oblique Photogrammetry in Real-Scene 3D Modeling", China Master's Theses Full-text Database, Basic Sciences Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111649761A (en) * 2020-06-01 2020-09-11 成都纵横大鹏无人机科技有限公司 Method, device, equipment and medium for acquiring POS data of multiple cameras
CN111649761B (en) * 2020-06-01 2022-05-06 成都纵横大鹏无人机科技有限公司 Method, device, equipment and medium for acquiring POS data of multiple cameras
CN114559983A (en) * 2020-11-27 2022-05-31 南京拓控信息科技股份有限公司 Omnibearing dynamic three-dimensional image detection device for subway train body
CN113141493A (en) * 2021-04-28 2021-07-20 合肥工业大学 Overlapped compound eye
CN117392328A (en) * 2023-12-07 2024-01-12 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster
CN117392328B (en) * 2023-12-07 2024-02-23 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster

Similar Documents

Publication Publication Date Title
CN110675484A (en) Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera
US11070725B2 (en) Image processing method, and unmanned aerial vehicle and system
CN110310248B Real-time stitching method and system for unmanned aerial vehicle remote sensing images
US20210004973A1 (en) Image processing method, apparatus, and storage medium
TWI555378B (en) An image calibration, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN105243637B Panoramic image stitching method based on three-dimensional laser point cloud
CN107492069B (en) Image fusion method based on multi-lens sensor
CN110136259A Three-dimensional modeling technique based on oblique photography assisted by BIM and GIS
CN111192362B (en) Working method of virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene
US20170293216A1 (en) Aerial panoramic oblique photography apparatus
CN108876926A Navigation method and system in a panoramic scene, and AR/VR client device
CN109118585B (en) Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof
CN103226838A (en) Real-time spatial positioning method for mobile monitoring target in geographical scene
CN110084785B (en) Power transmission line vertical arc measuring method and system based on aerial images
CN110428501B (en) Panoramic image generation method and device, electronic equipment and readable storage medium
TW201717613A (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN103607584A (en) Real-time registration method for depth maps shot by kinect and video shot by color camera
CN107038714B (en) Multi-type visual sensing cooperative target tracking method
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
CN105427372A Color consistency processing technique for TIN-based orthoimage stitching
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN116210013A (en) BIM (building information modeling) visualization system and device, visualization platform and storage medium
CN113031462A (en) Port machine inspection route planning system and method for unmanned aerial vehicle
Zhou et al. Application of UAV oblique photography in real-scene 3D modeling
CN114882201A (en) Real-time panoramic three-dimensional digital construction site map supervision system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110