CN109118585A - Virtual compound-eye camera system for spatio-temporally consistent acquisition of three-dimensional building scenes, and working method thereof - Google Patents
- Publication number
- CN109118585A (application CN201810865682.0A)
- Authority
- CN
- China
- Prior art keywords
- compound eye
- camera
- eye camera
- building
- shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/12—Computing arrangements based on biological models using genetic models
- G06N3/126—Evolutionary algorithms, e.g. genetic algorithms or genetic programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q10/06312—Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W64/003—Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
Abstract
The present invention relates to the field of three-dimensional digital scene construction, and provides a virtual compound-eye camera system for spatio-temporally consistent acquisition of three-dimensional building scenes, together with its working method. The system comprises a data acquisition module, a positioning module and a task allocation module. The data acquisition module is formed by the cooperation of all compound-eye cameras aimed at the target building body: according to a predetermined building acquisition grid plan, all cameras facing the target body are virtually assembled into one complete, architecturally organized compound-eye system, referred to as a virtual compound eye. The virtual compound eye is thus created jointly by multiple compound-eye cameras cooperating under the predetermined building acquisition grid plan. The virtual compound-eye camera system and its working method enable spatio-temporally consistent real-time shooting and real-time reconstruction, yielding a more accurate and faithful dynamic three-dimensional virtual scene.
Description
Technical field
The present invention relates to the field of three-dimensional digital scene construction, and specifically to a virtual compound-eye camera system, and its working method, for spatio-temporally consistent acquisition of three-dimensional building scenes (both interior and exterior).
Background art
Oblique photography captures images from different angles and stitches them in post-processing to obtain a three-dimensional virtual scene that matches human vision. Camera systems currently used for oblique three-dimensional scene capture mainly come in five-lens, two-lens and single-lens forms. In five-lens and two-lens cameras the spatial angles between the lenses are fixed, and the shooting scene is set by a single-axis gimbal; a single-lens camera aims its lens at the photographic target by means of a two-axis or three-axis gimbal.

In acquisition practice, a single-lens or multi-lens digital camera is usually mounted on a carrier such as an unmanned aerial vehicle, which circles the physical scene and shoots it from multiple angles. Multi-lens cameras generally shoot along a planned flight path, while single-lens cameras are flown manually or operated hand-held on the ground.

For building-class acquisition targets, the existing scheme is to mount a single-lens or multi-lens camera on a carrier (unmanned aerial vehicle, all-terrain vehicle, etc.) that cruises continuously around the target body while the camera shoots continuously.
Practice has revealed the following problems with the existing acquisition schemes:

1. The acquired data lack spatio-temporal consistency, so the constructed three-dimensional scene lacks the dynamic credibility that such consistency provides. Spatio-temporally consistent shooting means that the raw images are shot simultaneously under a unified clock, the capture time of each image agreeing with the spatial position and attitude of every object in the image at that moment. Continuous shooting with a single camera has no such consistency: images of different spatial positions are shot at different times, with an interval between successive shots, and depending on the size of the scene the whole shooting process takes from tens of minutes to several months. What is finally obtained is a three-dimensional scene stitched from images of different time periods, in which many dynamic objects are missed or captured repeatedly, so the resulting virtual scene does not match the actual scene.
2. At present there is no true three-dimensional construction effect that unifies interior and exterior scenes.
3. the shooting point of pair camera is not made rational planning for, the process of camera shooting relies primarily on artificial flight control, or
Person carries out simple flight path planning according to flight area by Flight Control Software, leads to data bulk redundancy or part number
According to missing.
4. picker's is subjective in Collecting operation, it is difficult to establish scientific, regulation and standardization and quantification is adopted
Collection scheme, it is difficult to keep the stabilization of acquisition quality, it is difficult to improve post-production efficiency.
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art by providing a virtual compound-eye camera system, and its working method, for spatio-temporally consistent acquisition of three-dimensional building scenes, enabling spatio-temporally consistent real-time shooting and real-time reconstruction and yielding a more accurate and faithful dynamic three-dimensional virtual scene.
This object is achieved by the following technical measures.
A virtual compound-eye camera system for spatio-temporally consistent acquisition of three-dimensional building scenes comprises a data acquisition module, a positioning module and a task allocation module.

The data acquisition module acquires pictures at specified positions and specified angles and returns the collected data wirelessly in real time for reconstructing the three-dimensional model. It is formed by the cooperation of all compound-eye cameras aimed at the target building body: according to a predetermined building acquisition grid plan, these cameras are virtually assembled into one complete, architecturally organized compound-eye system, referred to as a virtual compound eye. Each compound-eye camera possesses multiple lenses, an individual lens being called a sub-eye; all lenses acquire data under a unified clock, so that the data obtained are spatio-temporally consistent.
The positioning module consists of two parts: GPS locators and a UWB positioning system.

A GPS locator is mounted in each compound-eye camera to receive GPS positioning signals and determine the global coordinates of the camera and of the shooting area.

The UWB positioning system precisely locates each compound-eye camera within the building acquisition grid (the floor, orientation and room it occupies, and the specific point within that room). Indoors, satellite signals fade severely, whereas within its deployed region a UWB positioning system can reach centimeter-level precision and supports three-dimensional localization. The system consists of base stations, mounted around the photographic target, and tags, one mounted in each compound-eye camera. Each tag emits pulses at a set frequency and carries a unique ID; the base stations receive the pulses and measure the tag-to-station distances. With multiple base stations, the tag position is computed precisely by algorithm.
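The final step — computing the tag position from several base-station ranges — can be illustrated with a standard least-squares multilateration sketch. This is an assumption: the patent does not name the algorithm, and `locate_tag` and its linearization are illustrative rather than taken from the source.

```python
import numpy as np

def locate_tag(anchors, dists):
    """Least-squares multilateration of a UWB tag.

    anchors: (n, 3) array of base-station positions
    dists:   (n,) measured tag-to-anchor distances
    Returns the estimated 3D tag position.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], dists[0]
    # Linearize by subtracting the first range equation from the others:
    #   2 * (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

Four well-placed stations suffice mathematically for a 3D fix; the patent's requirement of at least six base stations (step (1) of the working method) adds redundancy for precision.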
The task allocation module, following the predetermined building acquisition grid plan, uses the size and shape of the reconstructed building body, the limits of the shooting space and the precision of the reconstruction model to compute each compound-eye camera's occupancy node in the grid plan, determines the shooting position, attitude and parameters, designates the sub-eyes of each camera that execute the acquisition task, and transmits these data to the cameras. At regular intervals the module sends time-calibration, pose-calibration and occupancy-calibration commands to the cameras to perform clock, pose and occupancy calibration.
In the above technical solution, the compound-eye camera is a device with multiple lenses that acquires image data simultaneously through 360° in the horizontal plane and 360° in the vertical plane. It receives shooting commands from the host computer and returns the captured picture data together with its position and attitude information. Each compound-eye camera takes its occupancy in the network according to the predetermined building acquisition grid plan, operates synthetically under unified clock control, and accepts unified scheduling.
In the above technical solution, the task allocation module computes the gimbal angles to be adjusted (the heading angle and the horizontal/vertical angles), adjusts the camera gimbal so that the compound-eye camera holds its shooting attitude, adjusts the acquisition parameters of each sub-eye inside the camera, and triggers the camera to shoot. The compound-eye camera gimbal is a support/suspension/lift/lateral-shift device that fixes the camera, keeps it stable, fine-tunes its position, and prevents, isolates or damps vibration. The gimbal structure contains stepper motors and a linkage bracket, allowing the gimbal to rotate in the horizontal and vertical directions or shift laterally, and thus to fine-tune the camera's shooting angle. The gimbal is mounted on a carrier (unmanned aerial vehicle, airship, all-terrain vehicle, tripod, hanger, etc.), and the compound-eye camera is fixed on the gimbal.
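The "gimbal angles to be adjusted" can be sketched as a direction-to-angles conversion. The yaw/pitch convention below is an assumption of this sketch; the patent only names heading and horizontal/vertical angles.

```python
import math

def gimbal_angles(cam, target):
    """Yaw (heading) and pitch, in degrees, needed to aim a compound-eye
    camera at `cam` toward the point `target` (both 3D coordinates)."""
    dx, dy, dz = (t - c for t, c in zip(target, cam))
    yaw = math.degrees(math.atan2(dy, dx))                     # heading in the horizontal plane
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # elevation above the horizon
    return yaw, pitch
```

For example, aiming from the origin at a point diagonally ahead at the same height gives a 45° heading and zero pitch.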
The present invention also provides a working method of the above virtual compound-eye camera system for spatio-temporally consistent acquisition of three-dimensional building scenes, comprising the following steps:
(1) Install a GPS locator in each compound-eye camera and determine the global coordinates of the construction zone; deploy the UWB positioning system, choosing suitable positions for its base stations (three-dimensional localization needs at least six base stations); install one UWB tag in each compound-eye camera.
(2) Pre-shooting: outdoors, carry a compound-eye camera around the target building body by unmanned aerial vehicle or airship; indoors, carry it on a tripod or hand-held gimbal to record the interior scene. Have the camera record the boundary points of the permitted space and return the preliminary data to the task allocation module; the pre-shot data are used to construct the building acquisition grid plan.
(3) The task allocation module preprocesses the pre-shot data and, by binocular vision, computes the three-dimensional coordinates of the building boundary points, abstracting from the corners the size, shape and inner frame of the building body to be reconstructed. The processing workflow is as follows:
(3-1) Perform preliminary image processing (mainly edge sharpening) on the pre-shot pictures;

(3-2) Extract the start and end points of the edge line segments in each picture;

(3-3) Screen the extracted start/end point pairs of the edge line segments;

(3-4) Match the start and end points of edge line segments across adjacent pictures to find the matching regions of the segment endpoints;

(3-5) Compute, by a binocular vision algorithm, the three-dimensional coordinates of the segment start and end points, forming line segments in the virtual three-dimensional space;

(3-6) Remove unconnected line segments;

(3-7) Select two or more connected line segments and join them into a plane; if the selected segments cannot be closed, add new segments so that the plane closes;

(3-8) Repeat step (3-7) to obtain a pre-reconstructed object composed of planes and their intersection lines.
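The binocular computation of step (3-5) can be sketched for the simplest case of a rectified stereo pair. This is an assumption — the patent does not specify the camera model — and the pinhole parameters below are illustrative.

```python
def triangulate_endpoint(xl, yl, xr, focal, baseline, cx, cy):
    """Recover the 3D coordinates of one matched edge endpoint from a
    rectified stereo pair: (xl, yl) is the left-image pixel, xr the matched
    x-coordinate in the right image; focal is in pixels, baseline in meters,
    and (cx, cy) is the principal point."""
    disparity = xl - xr            # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("point at infinity or bad match")
    Z = focal * baseline / disparity   # depth from similar triangles
    X = (xl - cx) * Z / focal          # back-project through the pinhole model
    Y = (yl - cy) * Z / focal
    return (X, Y, Z)
```

For the full step (3-5) this would be applied to both the start and the end point of every matched edge segment, yielding the segments placed in the virtual three-dimensional space.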
(4) From the preprocessed reconstructed body, the task allocation module constructs the building acquisition grid as follows:
(4-1) Find all planes and facades in the pre-reconstructed body, covering both the exterior scene (outer walls, roof, ground, etc.) and the interior scene (corridors, galleries, elevators, rooms, underground parking, and the various electricity/water/gas pipe galleries, etc.);

(4-2) According to the acquisition-resolution demand of the building scene, starting from the plane closest to the ground, determine each projection plane in turn from the camera parameters so that the compound-eye cameras' projection planes cover the planes uniformly, guaranteeing an overlap rate greater than 50% between projection planes;

(4-3) The projection plane of each sub-eye is one rectangular acquisition grid cell; every area of the building's interior and exterior scenes is covered by acquisition grid cells, which together form the building acquisition grid system. Each cell corresponds to one sub-eye's shooting area, whose content is updated in real time.
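Step (4-2)'s layout — projection planes stepped along a facade so that adjacent footprints overlap — can be sketched in one dimension. The reduction to 1D, the cell size and the overlap values are simplifying assumptions of this sketch.

```python
def grid_centers(length, footprint, overlap):
    """Centers of projection-plane footprints along a facade edge of the
    given length, stepped so adjacent footprints share an `overlap`
    fraction of their width."""
    step = footprint * (1.0 - overlap)
    centers = []
    c = footprint / 2.0
    last = length - footprint / 2.0
    while c < last + 1e-9:
        centers.append(round(c, 9))
        c += step
    if not centers:                      # facade shorter than one footprint
        return [length / 2.0]
    if centers[-1] < last - 1e-9:        # make sure the far edge is covered too
        centers.append(last)
    return centers
```

With `overlap` exactly 0.5 adjacent footprints share half their width; the patent's "greater than 50%" corresponds to an overlap parameter slightly above 0.5.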
(5) The task allocation module carries out task planning and optimization as follows:
(5-1) Find all intersection lines of the pre-reconstructed body and, starting from the point closest to the ground, place the compound-eye cameras' projection center points on the intersection lines while keeping the overlap rate between projection planes above 50%;

(5-2) Back-calculate each camera's pose from its projection center;
(5-3) Eliminate redundant shooting poses. The camera poses found in step (5-2) exhibit two problems: case 1, two or more cameras have nearly identical shooting positions; case 2, two or more cameras have nearly identical shooting positions and shooting angles. Adjacent cameras that are spatially too close reduce the precision of the three-dimensional scene reconstruction and make the camera count redundant; both cases constitute redundant poses, which are eliminated as follows. Case 1: abstract each camera as a point and treat the cluster of nearby shooting positions as a point set; retain the point whose total distance to the other points in the set is smallest, delete the others, and take the retained point as the camera's shooting position; since the original cameras' shooting angles were not close, recompute the shooting angle from the projection centers of the deleted points. Case 2: treat the shooting points with nearby positions and nearby angles as a point set; retain the point whose total distance to the others is smallest, delete the rest, and take the retained point's position and angle as the camera's; since the angles are close, no recomputation is needed;
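The retention rule of step (5-3) — keep, in each cluster of near-coincident poses, the point with the smallest total distance to the others — is a medoid selection. A minimal sketch follows; the distance threshold and the greedy grouping are my assumptions, not specified by the patent.

```python
import math

def _dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def dedupe_positions(points, tol):
    """Greedily group shooting positions closer than `tol`, then keep only
    the medoid of each group: the point whose summed distance to the rest
    of its group is smallest (case 1 of step (5-3))."""
    clusters = []
    for p in points:
        for c in clusters:
            if _dist(p, c[0]) < tol:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [min(c, key=lambda p: sum(_dist(p, q) for q in c))
            for c in clusters]
```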
(5-4) Task distribution: given the number of compound-eye cameras, use a genetic algorithm together with Dijkstra's algorithm to assign a task to every camera while making the moving distance of the camera that travels farthest optimal (i.e., minimal).
(6) According to the task allocation result, carry the compound-eye cameras to their designated positions: hovering in the air by unmanned aerial vehicle or airship, fixed on the ground by all-terrain vehicle, and on tripods indoors; then perform spatial-pose calibration, geographic-position calibration and unified-clock calibration of the cameras.
(7) Shooting: all compound-eye cameras shoot simultaneously under the unified clock, so the data satisfy spatio-temporal consistency; the shot data are returned, and the computer system automatically reconstructs the three-dimensional digital scene of each instant. The virtual compound eye shoots at the interval required by the dynamic scene frame rate, i.e., every (1/frame rate) seconds; for example, at a required frame rate of 25 fps the shooting interval is set to 1/25 s, realizing dynamic shooting with real-time refresh.
In the above technical solution, the back-calculation of the camera pose from its projection center in step (5-2) proceeds as follows. From the projection center point, draw the line parallel to the normal of the reconstruction plane containing that point, directed away from the object; the point on this line at the default shooting object distance is the candidate center of the compound-eye camera. Check that the camera at this point collides with no other object and lies within the permitted space. If so, this point is taken as the camera's shooting position, and the shooting angle is given by the angles between the parallel line and the three coordinate planes. Otherwise, preferentially move the camera center along the parallel line and re-test until a center point passes the check; if none is found along the line, move the center up and down within a certain range until a point passes. The camera position is then the center point that finally passes the check, and the shooting angle is given by the angles between the coordinate planes and the line through that point and the projection center.
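The search along the normal can be sketched as follows; the step size, iteration cap and `is_free` predicate (collision test plus permitted-space test) are illustrative assumptions of this sketch.

```python
import math

def camera_center(proj_center, normal, object_distance, is_free,
                  step=0.1, max_steps=100):
    """Candidate compound-eye camera center per step (5-2): the point on the
    outward normal of the reconstruction plane at the default object
    distance; if it collides or leaves the permitted space, slide further
    out along the normal and re-test."""
    norm = math.sqrt(sum(n * n for n in normal))
    unit = tuple(n / norm for n in normal)
    for k in range(max_steps):
        d = object_distance + k * step
        p = tuple(c + d * u for c, u in zip(proj_center, unit))
        if is_free(p):
            return p
    return None  # caller would then search up and down instead, per the patent
```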
In the above technical solution, the genetic algorithm of step (5-4) is modeled as follows: the number of genes equals the number of compound-eye cameras; whether a gene is expressed indicates whether the corresponding camera is assigned shooting position points; and the fitness of a genome in the population is judged by the longest moving distance among its compound-eye cameras, shorter being better. The moving distance of each expressed gene is obtained by optimizing its path with Dijkstra's algorithm, computing the shortest route through all of its points.
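A toy version of the step (5-4) model: genes assign shooting points to cameras, fitness is the longest per-camera route, and a greedy nearest-neighbor tour stands in for the Dijkstra-optimized path. The routing, mutation scheme and population sizes are all simplifying assumptions of this sketch.

```python
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_length(start, pts):
    """Greedy nearest-neighbor tour from a camera start through its points
    (stand-in for the patent's Dijkstra-optimized shortest route)."""
    pos, todo, total = start, list(pts), 0.0
    while todo:
        nxt = min(todo, key=lambda p: dist(pos, p))
        total += dist(pos, nxt)
        todo.remove(nxt)
        pos = nxt
    return total

def makespan(assign, starts, points):
    """Fitness: the longest moving distance among all cameras (lower is better)."""
    return max(route_length(s, [p for p, a in zip(points, assign) if a == i])
               for i, s in enumerate(starts))

def evolve(starts, points, pop_size=20, gens=60, seed=0):
    """Tiny elitist GA over point-to-camera assignments."""
    rng = random.Random(seed)
    n, k = len(points), len(starts)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size - 1)]
    pop.append([0] * n)                       # seed the naive all-to-one plan
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, starts, points))
        parents = pop[:pop_size // 2]         # elitist truncation selection
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n)] = rng.randrange(k)   # point mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda a: makespan(a, starts, points))
```

Because the naive plan is seeded into the population and parents are retained, the evolved assignment is never worse than sending every point to one camera.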
The present invention proposes the concept of a virtual compound eye, which has the following advantages over the prior art:
One, the acquired data are spatio-temporally consistent, guaranteeing that the reconstructed three-dimensional scene is a single time-slice and enabling dynamic three-dimensional scene acquisition.

Two, a rapid body-reconstruction mode is provided: a simple pre-shoot quickly yields the general structure of the body to be reconstructed, dispensing with superfluous measurement.

Three, a building acquisition gridding method is proposed: camera shooting points are distributed according to the acquisition grid and camera poses are planned in advance, reducing data redundancy.

Four, a shooting-space limitation function is provided, guaranteeing that cameras do not collide with objects in the environment during shooting and ensuring safety.

Five, the synchronized-shooting time is made optimal by the genetic algorithm and Dijkstra's algorithm.

Six, a scientific shooting and acquisition scheme for large-scale, spatio-temporally consistent dynamic three-dimensional digital scenes based on the virtual compound eye is proposed, guaranteeing real-time refresh of the three-dimensional scene.
Brief description of the drawings
Fig. 1 is the flow chart of the building three-dimensional scene construction method of the present invention.

Fig. 2 is the flow chart of the preprocessing that reconstructs the body's size and shape in the present invention.

Fig. 3 is the flow chart of the construction of the building acquisition grid of the present invention.

Fig. 4 is the flow chart of the task planning and optimization of the present invention.
Detailed description of embodiments
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings.
The present embodiment provides a virtual compound-eye camera system for spatio-temporally consistent acquisition of three-dimensional building scenes, comprising a data acquisition module, a positioning module and a task allocation module.
The data acquisition module acquires pictures at specified positions and specified angles and returns the collected data wirelessly in real time for reconstructing the three-dimensional model. It is formed by the cooperation of all compound-eye cameras aimed at the target building body: according to the predetermined building acquisition grid plan, the cameras facing the target body are virtually assembled into one complete, architecturally organized compound-eye system, referred to as a virtual compound eye. Each compound-eye camera possesses multiple lenses, an individual lens being called a sub-eye; all lenses acquire data as regulated by the unified clock, yielding spatio-temporally consistent data. The carrier of a compound-eye camera may be an unmanned aerial vehicle, airship, all-terrain vehicle, tripod, hanger, etc. A single compound-eye camera is a device with multiple lenses that acquires images simultaneously through 360° in the horizontal plane and 360° in the vertical plane, and by itself satisfies locally spatio-temporally consistent shooting. It can receive shooting commands from the host computer and return the captured picture data together with its position and attitude information; each camera takes its occupancy in the network according to the predetermined building acquisition grid plan, operates synthetically under unified clock control, and accepts unified scheduling.
The positioning module consists of two parts: GPS locators and a UWB positioning system.

A GPS locator is mounted in each compound-eye camera to receive GPS positioning signals and determine the global coordinates of each camera and of the shooting area.

The UWB positioning system precisely locates each compound-eye camera within the building acquisition grid (the floor, orientation and room it occupies, and the specific point within that room). Indoors, satellite signals fade severely, whereas within its deployed region a UWB positioning system can reach centimeter-level precision and supports three-dimensional localization. The system consists of base stations, mounted around the photographic target, and tags, one mounted in each compound-eye camera; each tag emits pulses at a set frequency and carries a unique ID, the base stations receive the pulses and measure the tag-to-station distances, and with multiple base stations the tag position is computed precisely by algorithm.
The task allocation module, following the predetermined building acquisition grid plan, uses the size and shape of the reconstructed building body, the limits of the shooting space and the precision of the reconstruction model to compute each compound-eye camera's occupancy node in the grid plan, determines the shooting position, attitude and parameters, designates the sub-eyes of each camera that execute the acquisition task, and transmits these data to the cameras. At regular intervals the module sends time-calibration, pose-calibration and occupancy-calibration commands to the cameras to perform clock, pose and occupancy calibration.
The task allocation module is realized by software running on a computer. Its main functions are: providing the virtual three-dimensional space; preprocessing for the reconstructed body's size and shape; construction of the building acquisition grid; and task planning and optimization.
1. Providing a virtual three-dimensional space
A virtual three-dimensional coordinate space is provided for placing the point-cloud data of objects and for describing the motion space of the compound eye carriers. It also provides collision detection between objects, containment detection, and projection onto arbitrary planes.
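Two of the listed utilities can be illustrated directly, assuming a plane given by a point and a normal and a restricted space approximated by an axis-aligned box (the names and the box simplification are ours, not the patent's):

```python
import numpy as np

def project_onto_plane(point, plane_point, normal):
    """Orthogonal projection of a point onto the plane through
    plane_point with the given (non-zero) normal vector."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    p = np.asarray(point, float)
    # Remove the component of (p - plane_point) along the normal.
    return p - np.dot(p - plane_point, n) * n

def inside_box(point, box_min, box_max):
    """Containment test against an axis-aligned bounding box: the
    simplest form of the 'is the camera inside the restricted space'
    check used later during pose derivation."""
    p = np.asarray(point, float)
    return bool(np.all(p >= box_min) and np.all(p <= box_max))
```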
2. Preprocessing the size and shape of the reconstructed entity
The data from the first acquisition pass (the pre-shooting data) are preprocessed: binocular vision techniques compute the three-dimensional coordinates of the corner points, and the corners are used to abstractly reconstruct the size and shape of the entity.
3. Building the acquisition grid
From the preprocessed reconstructed entity, and according to the resolution demanded for the building scene and the camera parameters, the size and location of each camera projection plane are determined. The projection plane of every sub-eye is one rectangular acquisition grid cell; together the cells cover all areas of the scene inside and outside the building, forming the building acquisition grid system.
4. Mission planning and optimization
According to the grid-planning results, the precision of the reconstructed model and the constraints on spatial position, the shooting position, camera posture and related data of each compound eye camera are computed.
In addition to the functions above, the task allocating module computes the gimbal tilt, heading and roll angles that need adjusting; adjusts the camera gimbal so that the compound eye camera holds its shooting posture; adjusts the acquisition parameters of each sub-eye inside the compound eye camera; and triggers the compound eye camera to shoot. The compound eye camera gimbal is a supporting, hanging, lifting or side-shifting mount that fixes the compound eye camera, keeps it stable, fine-tunes its position, and prevents, isolates or damps vibration. The gimbal contains stepper motors and a linkage bracket, enabling it to rotate in the horizontal and vertical directions or shift locally sideways, allowing fine adjustment of the compound eye camera's shooting angle. The gimbal is mounted on a carrier (an unmanned aerial vehicle, an airship, an all-terrain vehicle, a tripod, a hanger, etc.), and the compound eye camera is fixed on the gimbal.
To keep shooting quality high during acquisition, the shooting target should be kept as parallel to the camera image plane as possible, while the spatial positions of adjacent cameras should be kept as far apart as possible to improve reconstruction precision.
This embodiment also provides a working method for the above virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency. As shown in Figure 1, the method comprises the following steps:
(1) Install a GPS locator in each compound eye camera and determine the global coordinates of the construction zone. Set up the UWB positioning system, choosing suitable positions to install the UWB base stations; three-dimensional localization requires at least six base stations. Install one UWB tag in each compound eye camera.
(2) Pre-shooting. Outdoors, an unmanned aerial vehicle or airship carries the compound eye camera and flies around the building target; indoors, a tripod or hand-held gimbal carries the compound eye camera to record the interior scene. The compound eye cameras are controlled to record the boundary points of the restricted space, and the initial data are returned to the task allocating module; the pre-shooting data are used to construct the building acquisition grid plan.
(3) The pre-shooting data are preprocessed by the task allocating module: binocular vision techniques compute the three-dimensional coordinates of the corner points, and the corners are used to abstractly reconstruct the size, shape and inner frame of the entity. As shown in Fig. 2, the processing workflow is as follows:
(3-1) Apply preliminary image processing (mainly edge sharpening) to the pre-shot pictures;
(3-2) Extract the start and end points of the edge line segments in each picture;
(3-3) Filter the extracted pairs of edge-segment start and end points;
(3-4) Match the segment endpoints across adjacent pictures to find the matching regions of the start and end points;
(3-5) Compute the three-dimensional coordinates of the segment endpoints by a binocular vision algorithm, forming line segments in the virtual three-dimensional space;
(3-6) Remove unconnected line segments;
(3-7) Select two or more connected line segments and join them into a plane; if the selected segments cannot be closed, add new segments until the plane closes;
(3-8) Repeat step (3-7) to obtain a pre-reconstructed object composed of planes and plane intersection lines.
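Step (3-5) is not specified beyond "binocular vision algorithm"; for a rectified stereo pair the standard disparity-to-depth relation gives the 3-D endpoint coordinates. A sketch under those idealized assumptions (pinhole model, identical intrinsics, horizontal baseline), which are ours rather than the patent's:

```python
def triangulate_rectified(u_left, u_right, v, fx, cx, cy, baseline):
    """Depth from disparity for a rectified stereo pair.

    Returns (X, Y, Z) in the left-camera frame, where fx is the focal
    length in pixels, (cx, cy) the principal point, and baseline the
    distance between the two camera centres in metres.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    z = fx * baseline / disparity          # Z = f * B / d
    x = (u_left - cx) * z / fx             # back-project pixel to metric X
    y = (v - cy) * z / fx                  # and metric Y
    return (x, y, z)
```

Applying this to the matched start and end points of each edge segment yields the 3-D line segments placed into the virtual space.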
(4) From the preprocessed reconstructed entity, the task allocating module constructs the building acquisition grid. As shown in Figure 3, the construction method is as follows:
(4-1) Find all planes in the pre-reconstructed entity, covering both the exterior scene (outer walls, roof, ground, etc.) and the interior scene (corridors, walkways, elevators, room interiors, underground parking, utility tunnels for electricity, water and gas, etc.);
(4-2) According to the resolution demanded for the building scene, determine each projection plane in turn from the side of each plane nearest the ground, using the camera parameters, so that the projection planes of the compound eye cameras cover the plane uniformly with an overlap ratio greater than 50% between adjacent projection planes;
(4-3) The projection plane of every sub-eye is one rectangular acquisition grid cell; the cells cover all areas of the scene inside and outside the building, forming the building acquisition grid system, with each cell corresponding to one sub-eye's shooting area whose content is updated in real time.
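The >50% overlap requirement in step (4-2) fixes the maximum centre-to-centre spacing of adjacent projection planes. A small sketch of that relation, assuming a pinhole camera whose footprint width follows from the object distance and field of view (this parameterization is an assumption; the patent only states the overlap ratio):

```python
import math

def grid_spacing(object_distance, fov_deg, overlap=0.5):
    """Centre-to-centre spacing of adjacent rectangular projection
    planes so that neighbouring footprints overlap by the given
    fraction (0.5 = the patent's 50% minimum)."""
    # Footprint width of one sub-eye on a plane at the object distance.
    footprint = 2.0 * object_distance * math.tan(math.radians(fov_deg) / 2.0)
    # Shifting by (1 - overlap) of the footprint leaves `overlap` shared.
    return footprint * (1.0 - overlap)
```

Raising the overlap fraction shrinks the spacing and therefore raises the number of grid cells needed to cover the same facade.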
(5) Mission planning and optimization by the task allocating module. As shown in Figure 4, the processing workflow is as follows:
(5-1) Find all intersection lines of the pre-reconstructed entity; starting from the point nearest the ground, place the projection centre of each compound eye camera on an intersection line while keeping the overlap ratio between projection planes above 50%;
(5-2) Derive the camera pose by working backwards from the projection centre of the compound eye camera;
(5-3) Eliminate redundant shooting poses. The camera poses found in step (5-2) can exhibit two problems: case 1, two or more cameras have nearly identical shooting positions; case 2, two or more cameras have nearly identical shooting positions and shooting angles. Adjacent cameras placed too close together degrade the precision of the three-dimensional scene reconstruction and make the camera count redundant, so both cases count as redundant poses. They are eliminated as follows. Case 1: abstract each camera as a point and group the nearly coincident shooting positions into a point set; keep the point whose summed distance to the other points is smallest, delete the others, and use the kept point's position as the camera's shooting position; because the original cameras' shooting angles differ, recompute the shooting angle from the projection centres of the deleted points. Case 2: group shooting points with similar positions and angles into a point set; keep the point whose summed distance to the other points is smallest, delete the others, and use the kept point's shooting position and shooting angle for the compound eye camera; because the shooting angles are similar, no recomputation is needed;
(5-4) Task distribution: given the number of compound eye cameras, use a genetic algorithm together with Dijkstra's algorithm to assign a task to every compound eye camera while minimizing the moving distance of the camera that must travel farthest.
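Step (5-4) names Dijkstra's algorithm for per-camera path lengths and a min-max objective (the worst-off camera should travel as little as possible). A sketch of those two pieces, assuming the acquisition grid has been abstracted into a weighted graph of occupancy nodes (the graph encoding is illustrative, not from the source):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances over a weighted graph given as
    {node: [(neighbour, weight), ...]}."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def longest_route(routes, graph):
    """The patent's quality criterion for an assignment: the travel
    distance of the camera with the longest route; lower is better."""
    worst = 0.0
    for route in routes:
        total = sum(dijkstra(graph, a)[b] for a, b in zip(route, route[1:]))
        worst = max(worst, total)
    return worst
```

The genetic algorithm then searches over assignments of grid nodes to cameras, scoring each candidate with `longest_route`.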
(6) According to the task allocation result, compound eye cameras are carried by unmanned aerial vehicles or airships and hover at the designated positions in the air; on the ground, all-terrain vehicles carry the compound eye cameras and fix them at the designated positions; indoors, tripods carry the compound eye cameras to the designated positions. Spatial pose calibration, geographical location calibration and unified clock calibration of the compound eye cameras are then performed.
(7) Shooting. All compound eye cameras shoot simultaneously under the unified clock, so the data meet space-time consistency. The shot data are returned, and the computer system automatically reconstructs the three-dimensional digital scene at each instant. The virtual compound eye shoots once every (1/frame rate) seconds according to the required dynamic scene frame rate; for example, at a required 25 fps the shooting interval is set to 1/25 second, realizing dynamic shooting refreshed in real time.
In the above embodiment, the derivation of the camera pose from the projection centre described in step (5-2) proceeds as follows. Through the projection centre point, draw the line parallel to the normal of the reconstruction plane containing that point, directed away from the object. The point on this line at the default shooting object distance from the projection centre is the candidate centre of the compound eye camera. Check whether the compound eye camera at that point collides with other objects and whether it lies within the restricted space. If it passes both checks (no collision and within the restricted space), that point becomes the camera's shooting position, and the shooting angle is the set of angles between the normal line and the three coordinate planes. Otherwise, move the camera centre along the normal line and test again until a point passing the checks is found; if none is found along the line, move the centre up and down within a certain range until a point passes the checks. The camera's position is then the centre point that finally passes the checks, and the shooting angle is the set of angles between the coordinate planes and the line through that centre point and the projection centre.
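The geometric part of this derivation can be sketched directly: offset the camera along the plane normal by the default object distance, then report the angles between the view ray and the three coordinate planes (the collision and restricted-space checks are omitted here; the function name is ours):

```python
import math
import numpy as np

def camera_pose_from_projection(proj_center, plane_normal, object_distance):
    """Place the camera on the plane-normal ray through the projection
    centre, at the default object distance, looking back at the plane.

    Returns the camera centre and the angles (degrees) between the view
    ray and the yz-, xz- and xy-coordinate planes, in that order: the
    patent's 'shooting angle'.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    center = np.asarray(proj_center, float) + object_distance * n
    # Angle between a line with direction n and the plane whose normal
    # is the i-th axis: asin(|n_i|) for a unit direction vector.
    angles = [math.degrees(math.asin(abs(n[i]))) for i in range(3)]
    return center, angles
```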
In the above embodiment, the genetic algorithm in step (5-4) is modelled as follows: the number of genes equals the number of compound eye cameras; whether a gene is expressed indicates whether the corresponding compound eye camera is assigned a shooting position point; and the longest moving distance of any compound eye camera in a genome is the criterion for judging the genome's quality, the shorter the better. The moving distance of each expressed gene is obtained by optimizing the path with Dijkstra's algorithm, which yields the shortest path through all of that camera's points.
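A minimal genetic loop matching this modelling (one boolean gene per camera, fitness = the longest per-camera moving distance, lower is better) might look as follows; the selection and crossover details are our assumptions, since the patent does not specify them:

```python
import random

def evolve(num_cameras, fitness, generations=100, pop_size=30, p_mut=0.1):
    """Minimal genetic loop over boolean genomes.

    Each genome has one gene per compound eye camera (expressed = the
    camera takes part in the plan). `fitness` maps a genome to the
    longest per-camera moving distance; lower is better, matching the
    patent's genome-quality criterion.
    """
    pop = [[random.random() < 0.5 for _ in range(num_cameras)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                     # elitist: best first
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, num_cameras)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([not g if random.random() < p_mut else g
                             for g in child])      # per-gene mutation
        pop = survivors + children
    return min(pop, key=fitness)
```

In the patent's setting `fitness` would call Dijkstra's algorithm on each expressed camera's route, as in step (5-4); here any callable scoring a genome works.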
Matters not described in detail in this specification belong to the prior art well known to those skilled in the art.
The above examples are given only to illustrate the invention clearly and do not limit its embodiments. For those of ordinary skill in the art, changes or variations in other forms can be made on the basis of the above description; all embodiments cannot be exhaustively listed here. All obvious changes or variations derived from the technical solutions of the present invention remain within the scope of protection of the present invention.
Claims (6)
1. A virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency, characterized by comprising a data acquisition module, a locating module and a task allocating module;
the data acquisition module is used to acquire pictures at specific positions and specific angles and to pass the collected data back wirelessly in real time for reconstructing the three-dimensional model; the data acquisition module is composed cooperatively of all compound eye cameras facing the building target; according to the established building acquisition grid plan, all compound eye cameras facing the target body are virtually grouped into one complete, architecturally organized compound eye system, referred to as the virtual compound eye; the virtual compound eye is formed by multiple compound eye cameras cooperating and shooting together according to the established building acquisition grid plan; each compound eye camera possesses a plurality of lenses, and one independent lens is referred to as a sub-eye; all lenses acquire data under the regulation of a unified clock, yielding data with space-time consistency;
the locating module comprises a GPS locator and a UWB positioning system;
the GPS locator is mounted in each compound eye camera to receive GPS positioning signals and determine the global coordinates of each compound eye camera and of the shooting region;
the UWB positioning system provides accurate positioning of a given compound eye camera within the building acquisition grid; the UWB positioning system comprises base stations and tags, the base stations being mounted around the photographic target and a tag being mounted in each compound eye camera; each UWB tag emits pulses at a fixed frequency and has a unique ID;
the UWB base stations receive the pulses emitted by the tags and measure the tag-to-base-station distances; when multiple base stations are present, the tag's position can be computed;
the task allocating module reconstructs, according to the established building acquisition grid plan, the size and shape of the building entity, the bounds of the restricted shooting space, and the precision of the reconstructed model; computes the occupancy node of each compound eye camera within that plan; determines the shooting position, posture and parameters; determines which sub-eyes of each compound eye camera execute the acquisition task; and transfers these data to the compound eye cameras; at regular intervals the module sends time-calibration, pose-calibration and occupancy-calibration commands to the compound eye cameras, which perform clock, pose and occupancy calibration.
2. The virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency according to claim 1, characterized in that the compound eye camera is a device possessing a plurality of lenses and capable of acquiring images simultaneously through 360° in the horizontal plane and 360° in the vertical plane; it acquires image data, receives shooting orders from the host computer, and passes the shot image data and the compound eye camera's position and posture information back to the host computer; each compound eye camera occupies a networked position according to the established building acquisition grid plan, operates cooperatively under unified clock control, and accepts unified allocation.
3. The virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency according to claim 1, characterized in that the task allocating module computes the gimbal tilt, heading and roll angles that need adjusting; adjusts the camera gimbal so that the compound eye camera keeps its shooting posture; adjusts the acquisition parameters of each sub-eye inside the compound eye camera; and controls the compound eye camera to shoot; wherein the camera gimbal is used to keep the compound eye camera stable and to fine-tune its position, and the compound eye camera is mounted on its carrier through the camera gimbal.
4. A working method of the virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency according to claim 1, characterized in that the method comprises the following steps:
(1) installing a GPS locator in each compound eye camera and determining the global coordinates of the construction zone; setting up the UWB positioning system and choosing suitable positions to install the UWB base stations, three-dimensional localization requiring at least six base stations; installing one UWB tag in each compound eye camera;
(2) pre-shooting: outdoors, carrying the compound eye camera by unmanned aerial vehicle or airship to fly around the building target; indoors, carrying the compound eye camera by tripod or hand-held gimbal to record the interior scene; controlling the compound eye cameras to record the boundary points of the restricted space and returning the initial data to the task allocating module, the pre-shooting data being used to construct the building acquisition grid plan;
(3) preprocessing the pre-shooting data by the task allocating module: computing the three-dimensional coordinates of the building boundary points by binocular vision techniques and using the corners to abstractly reconstruct the size, shape and inner frame of the building entity, the processing workflow being as follows:
(3-1) applying preliminary image processing to the pre-shot pictures;
(3-2) extracting the start and end points of the edge line segments in each picture;
(3-3) filtering the extracted pairs of edge-segment start and end points;
(3-4) matching the segment endpoints across adjacent pictures to find the matching regions of the start and end points;
(3-5) computing the three-dimensional coordinates of the segment endpoints by a binocular vision algorithm, forming line segments in the virtual three-dimensional space;
(3-6) removing unconnected line segments;
(3-7) selecting two or more connected line segments and joining them into a plane, adding new segments if the selected segments cannot be closed, until the plane closes;
(3-8) repeating step (3-7) to obtain a pre-reconstructed object composed of planes and plane intersection lines;
(4) constructing the building acquisition grid by the task allocating module from the preprocessed reconstructed entity, the construction method being as follows:
(4-1) finding all planes and facades in the pre-reconstructed entity, including the exterior scene and the interior scene of the building;
(4-2) according to the resolution demanded for the building scene, determining each projection plane in turn from the side of each plane nearest the ground, using the camera parameters, so that the projection planes of the compound eye cameras cover the plane uniformly with an overlap ratio greater than 50% between adjacent projection planes;
(4-3) the projection plane of every sub-eye being one rectangular acquisition grid cell, the cells covering all areas of the scene inside and outside the building and forming the building acquisition grid system, with each cell corresponding to one sub-eye's shooting area whose content is updated in real time;
(5) performing mission planning and optimization by the task allocating module, the processing workflow being as follows:
(5-1) finding all intersection lines of the pre-reconstructed entity and, starting from the point nearest the ground, placing the projection centre of each compound eye camera on an intersection line while keeping the overlap ratio between projection planes above 50%;
(5-2) deriving the camera pose by working backwards from the projection centre of the compound eye camera;
(5-3) eliminating redundant shooting poses: the camera poses found in step (5-2) can exhibit two problems: case 1, two or more cameras having nearly identical shooting positions; case 2, two or more cameras having nearly identical shooting positions and shooting angles;
adjacent cameras placed too close together degrade the precision of the three-dimensional scene reconstruction and make the camera count redundant; both cases count as redundant poses and are eliminated as follows:
case 1: abstracting each camera as a point and grouping the nearly coincident shooting positions into a point set; keeping the point whose summed distance to the other points is smallest, deleting the others, and using the kept point's position as the camera's shooting position; because the original cameras' shooting angles differ, recomputing the shooting angle from the projection centres of the deleted points;
case 2: grouping shooting points with similar positions and angles into a point set; keeping the point whose summed distance to the other points is smallest, deleting the others, and using the kept point's shooting position and shooting angle as the compound eye camera's shooting position and shooting angle; because the shooting angles are similar, no recomputation is needed;
(5-4) task distribution: given the number of compound eye cameras, using a genetic algorithm together with Dijkstra's algorithm to assign a task to every compound eye camera while minimizing the moving distance of the camera that must travel farthest;
(6) according to the task allocation result, carrying the compound eye cameras by unmanned aerial vehicle or airship to hover at the designated positions in the air, fixing them at the designated positions on the ground by all-terrain vehicle, and carrying them to the designated positions indoors by tripod; then performing compound eye camera spatial pose calibration, geographical location calibration and unified clock calibration;
(7) shooting: all compound eye cameras shoot simultaneously according to the unified clock, so the data meet space-time consistency; the shot data are returned and the computer system automatically reconstructs the three-dimensional digital scene at each instant; the virtual compound eye shoots once every (1/frame rate) seconds according to the required dynamic scene frame rate, realizing dynamic shooting refreshed in real time.
5. The working method of the virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency according to claim 4, characterized in that the derivation of the camera pose from the projection centre described in step (5-2) proceeds as follows: through the projection centre point, draw the line parallel to the normal of the reconstruction plane containing that point, directed away from the object; the point on this line at the default shooting object distance from the projection centre is the candidate centre of the compound eye camera; check whether the compound eye camera collides with other objects and whether it lies within the restricted space; if it passes both checks, that point is taken as the compound eye camera's shooting position, and the shooting angle is the set of angles between the normal line and the three coordinate planes; otherwise, preferentially move the compound eye camera's centre along the normal line and test again until a centre point passing the checks is found; if none is found along the line, move the centre up and down within a certain range until a point passes the checks; the compound eye camera's position is then the centre point that finally passes the checks, and the shooting angle is the set of angles between the coordinate planes and the line through that centre point and the projection centre.
6. The working method of the virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency according to claim 4, characterized in that the genetic algorithm in step (5-4) is modelled as follows: the number of genes equals the number of compound eye cameras; whether a gene is expressed indicates whether the corresponding compound eye camera is assigned a shooting position point; the longest moving distance of any compound eye camera in a genome is the criterion for judging the genome's quality, the shorter the better; and the moving distance of each expressed gene is obtained by optimizing the path with Dijkstra's algorithm, which yields the shortest path through all of that camera's points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810865682.0A CN109118585B (en) | 2018-08-01 | 2018-08-01 | Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109118585A true CN109118585A (en) | 2019-01-01 |
CN109118585B CN109118585B (en) | 2023-02-10 |
Family
ID=64863911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810865682.0A Active CN109118585B (en) | 2018-08-01 | 2018-08-01 | Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109118585B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226838A (en) * | 2013-04-10 | 2013-07-31 | 福州林景行信息技术有限公司 | Real-time spatial positioning method for mobile monitoring target in geographical scene |
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | State Grid Corporation of China | Binocular vision navigation system and method based on a power robot |
CN112070818A (en) * | 2020-11-10 | 2020-12-11 | 纳博特南京科技有限公司 | Robot disordered grabbing method and system based on machine vision and storage medium |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109819453A (en) * | 2019-03-05 | 2019-05-28 | 西安电子科技大学 | Cost optimization unmanned plane base station deployment method based on improved adaptive GA-IAGA |
CN109819453B (en) * | 2019-03-05 | 2021-07-06 | 西安电子科技大学 | Cost optimization unmanned aerial vehicle base station deployment method based on improved genetic algorithm |
CN110675484A (en) * | 2019-08-26 | 2020-01-10 | 武汉理工大学 | Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera |
CN110824461A (en) * | 2019-11-18 | 2020-02-21 | 广东博智林机器人有限公司 | Positioning method |
CN110824461B (en) * | 2019-11-18 | 2021-10-22 | 广东博智林机器人有限公司 | Positioning method |
CN110995967B (en) * | 2019-11-22 | 2020-11-03 | 武汉理工大学 | Virtual compound eye construction system based on variable flying saucer airship |
CN110995967A (en) * | 2019-11-22 | 2020-04-10 | 武汉理工大学 | Virtual compound eye construction system based on variable flying saucer airship |
CN111028274A (en) * | 2019-11-28 | 2020-04-17 | 武汉理工大学 | Smooth curved surface mesh traceless division-oriented projection marking system and working method thereof |
CN111192362A (en) * | 2019-12-17 | 2020-05-22 | 武汉理工大学 | Virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene and working method thereof |
CN111031259B (en) * | 2019-12-17 | 2021-01-19 | 武汉理工大学 | Inward type three-dimensional scene acquisition virtual compound eye camera |
CN111031259A (en) * | 2019-12-17 | 2020-04-17 | 武汉理工大学 | Inward type three-dimensional scene acquisition virtual compound eye camera |
CN111192362B (en) * | 2019-12-17 | 2023-04-11 | 武汉理工大学 | Working method of virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene |
CN111536913A (en) * | 2020-04-24 | 2020-08-14 | 芜湖职业技术学院 | House layout graph measuring device and measuring method thereof |
CN111536913B (en) * | 2020-04-24 | 2022-02-15 | 芜湖职业技术学院 | House layout graph measuring device and measuring method thereof |
WO2022000210A1 (en) * | 2020-06-29 | 2022-01-06 | 深圳市大疆创新科技有限公司 | Method and device for analyzing target object in site |
Also Published As
Publication number | Publication date |
---|---|
CN109118585B (en) | 2023-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109118585A (en) | A kind of virtual compound eye camera system and its working method of the building three-dimensional scenic acquisition meeting space-time consistency | |
CN107504957B (en) | Method for rapidly constructing three-dimensional terrain model by using unmanned aerial vehicle multi-view camera shooting | |
KR101220527B1 (en) | Sensor system, and system and method for preparing environment map using the same | |
CN106485785B (en) | Scene generation method and system based on indoor three-dimensional modeling and positioning | |
CN108648272A (en) | Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device | |
CN105446350B (en) | Self-movement robot moves boundary demarcation method | |
CN110505463A (en) | Based on the real-time automatic 3D modeling method taken pictures | |
CN103874193B (en) | A kind of method and system of mobile terminal location | |
CN104217439B (en) | Indoor visual positioning system and method | |
CN110287519A (en) | BIM-integrated construction progress monitoring method and system for building engineering |
JP6080642B2 (en) | 3D point cloud analysis method | |
WO2019018315A1 (en) | Aligning measured signal data with slam localization data and uses thereof | |
CN112461210B (en) | Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof | |
CN106443687A (en) | Piggyback mobile surveying and mapping system based on laser radar and panorama camera | |
CN105928498A (en) | Determination of object data by template-based UAV control |
CN105204505A (en) | Positioning, video acquisition and mapping system and method based on a sweeping robot |
KR20140049361A (en) | Multiple sensor system, and apparatus and method for three dimensional world modeling using the same | |
CN108846867A (en) | SLAM system based on multi-camera panoramic inertial navigation |
CN106153050A (en) | Beacon-based indoor positioning system and method |
CN109773783B (en) | Intelligent patrol robot based on spatial point cloud recognition and police system thereof |
WO2016184255A1 (en) | Visual positioning device and three-dimensional mapping system and method based on same | |
CN111192362B (en) | Working method of virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene | |
CN105243637A (en) | Panorama image stitching method based on three-dimensional laser point cloud | |
CN111141264B (en) | Unmanned aerial vehicle-based urban three-dimensional mapping method and system | |
CN108803667A (en) | Unmanned aerial vehicle cooperative monitoring and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||