CN109118585B - Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof


Info

Publication number
CN109118585B
Authority
CN
China
Prior art keywords
compound eye
shooting
eye camera
building
camera
Prior art date
Legal status
Active
Application number
CN201810865682.0A
Other languages
Chinese (zh)
Other versions
CN109118585A (en)
Inventor
王汉熙
蒋靳
郑晓钧
胡佳文
王申奥
江南
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority to CN201810865682.0A
Publication of CN109118585A
Application granted
Publication of CN109118585B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06312 Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/003 Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Development Economics (AREA)
  • Evolutionary Computation (AREA)
  • Educational Administration (AREA)
  • Genetics & Genomics (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)

Abstract

The invention relates to the field of three-dimensional digital scene construction and provides a virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition, together with its working method. The system comprises a data acquisition module, a positioning module and a task allocation module. The data acquisition module is formed by the cooperation of all compound eye cameras facing a building target body; all compound eye cameras facing the target body are deployed according to an established building acquisition grid plan and virtually grouped into a complete, systematic compound eye system called a virtual compound eye. The virtual compound eye is thus built jointly by a plurality of mutually cooperating compound eye cameras planned against the building acquisition grid. The system and its working method achieve space-time-consistent real-time shooting and real-time production, yielding a more accurate and realistic dynamic three-dimensional virtual scene.

Description

Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof
Technical Field
The invention relates to the field of three-dimensional digital scene construction, and in particular to a virtual compound eye camera system, meeting space-time consistency, for acquiring the three-dimensional scenes (interior and exterior) of buildings, and to its working method.
Background
Oblique projection shooting collects images at different angles and stitches them in post-processing to obtain a three-dimensional virtual scene that matches human vision. At present, the lens configurations of camera systems used for oblique projection shooting of three-dimensional scenes are mainly five-lens, two-lens and single-lens. In a five-lens system the spatial angles between the lenses are fixed, and the scene is shot from a single-axis gimbal; a single-lens camera relies on a two-axis or three-axis gimbal to point the lens at the shooting target.
In acquisition applications, a single-lens or multi-lens digital camera is carried by a vehicle such as an unmanned aerial vehicle and shoots the physical scene from all directions at multiple angles. Multi-lens cameras typically acquire along a planned flight path, while single-lens cameras are used for manually controlled flight shooting or hand-held ground acquisition.
For acquiring building objects, the current scheme is that a single-lens or multi-lens camera carried by a vehicle (unmanned aerial vehicle, all-terrain vehicle, etc.) cruises continuously around the target body and shoots continuously during the cruise.
Practice has shown that the existing acquisition schemes suffer from the following problems:
1. The acquired data lack space-time consistency, so the constructed three-dimensional scene lacks the dynamic credibility that space-time consistency would provide. Space-time-consistent shooting means that all raw data are captured at the same instant under a unified clock, so that the capture time of every image is consistent with the spatial position and attitude of every object in the image at that instant. A single camera shooting continuously has no space-time consistency: images at different spatial positions are captured at different time nodes, time intervals separate successive images, and the whole shooting process takes from tens of minutes to months depending on scene size. The result is a three-dimensional scene stitched from images of different moments, in which many moving objects are captured either not at all or multiple times, yielding a three-dimensional virtual scene that does not match the actual scene.
2. Current acquisition mainly covers the exterior of buildings; no true three-dimensional effect integrating interior and exterior scenes has been constructed.
3. Camera shooting points are not planned rationally. Shooting relies mainly on manual flight control, or on simple route planning over the flight area by flight-control software, causing large amounts of data redundancy or partial data loss.
4. The collector's subjectivity dominates the acquisition operation, so it is difficult to establish a scientific, standardized, quantitative acquisition scheme, to keep the acquisition quality stable, and to improve post-production efficiency.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition, together with its working method, so as to achieve space-time-consistent real-time shooting and real-time production and obtain a more accurate and realistic dynamic three-dimensional virtual scene.
The object of the invention is achieved by the following technical measures.
A virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition comprises a data acquisition module, a positioning module and a task allocation module.
The data acquisition module acquires pictures at specified positions and angles and transmits the acquired data back wirelessly in real time for reconstructing the three-dimensional model. It is formed by the cooperation of all compound eye cameras facing the building target body; all compound eye cameras facing the target body are deployed according to the established building acquisition grid plan and virtually grouped into a complete, systematic compound eye system called a virtual compound eye. The virtual compound eye is built jointly by a plurality of mutually cooperating compound eye cameras planned against the established building acquisition grid. Each compound eye camera carries a plurality of lenses, an individual lens being called a sub-eye. All lenses acquire data under a unified clock, so the data obtained possess space-time consistency.
The positioning module comprises a GPS locator and a UWB (ultra-wideband) positioning system.
The GPS locator is installed in each compound eye camera to receive GPS positioning signals and determine the global coordinates of the compound eye camera and of the shooting area.
The UWB positioning system precisely positions a compound eye camera within the building acquisition grid (floor, orientation, room, and specific point within the room). Since satellite signals are severely attenuated indoors, UWB positioning is used inside the building area, where it achieves centimetre-level accuracy and supports three-dimensional positioning. The UWB positioning system comprises base stations and tags: the base stations are arranged around the shooting target, and a tag is installed in each compound eye camera. Each tag transmits pulses at a set frequency and carries a unique ID. The base stations receive the UWB pulses transmitted by a tag and measure the tag-to-station distances; with several base stations, the tag position is computed precisely by an algorithm.
The task allocation module calculates the occupation node of each compound eye camera in the established building acquisition grid plan according to the size and shape of the building entity to be reconstructed (as determined by the grid plan), the limits of the shooting space, and the required accuracy of the reconstructed model. It determines the shooting position, shooting attitude and shooting parameters, determines which sub-eyes of each compound eye camera execute the acquisition task, and transmits these data to the compound eye cameras. At set intervals the module sends time, pose and occupation calibration commands to the compound eye cameras to perform clock calibration, pose calibration and occupation calibration.
In the above technical solution, a compound eye camera is a device carrying a plurality of lenses that can acquire images simultaneously over 360° in the horizontal plane and 360° in the vertical plane, and is used to acquire picture data. It receives shooting commands from the host computer and transmits the captured picture data and the camera's position and attitude information back to the host computer. Each compound eye camera takes its occupation node from the established building acquisition grid plan, operates cooperatively under the control of the unified clock, and accepts unified scheduling.
In the above technical solution, the task allocation module calculates the pan-tilt angle, heading angle and horizontal angle to be adjusted; adjusts the camera pan-tilt head so the compound eye camera holds its shooting attitude; and adjusts the shooting parameters of each sub-eye in the compound eye camera, controlling it to shoot. The compound eye camera pan-tilt head is a supporting/hanging/lifting/side-shifting device that fixes the compound eye camera; it keeps the camera stable, finely adjusts its position, and prevents/isolates/damps vibration. The pan-tilt head carries stepper motors and a connecting-rod bracket, so it can rotate horizontally and vertically or make partial lateral movements, finely adjusting the shooting angle of the compound eye camera. The pan-tilt head is mounted on the carrier (unmanned aerial vehicle, airship, all-terrain vehicle, tripod, hanger, etc.), and the compound eye camera is fixed on the pan-tilt head.
The invention also provides a working method of the virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition, comprising the following steps:
(1) Install a GPS locator on each compound eye camera to determine the global coordinates of the building area; build the UWB positioning system, selecting suitable positions for the UWB base stations (three-dimensional positioning requires at least 6 base stations); install a UWB tag on each compound eye camera;
(2) Pre-shooting: outdoors, an unmanned aerial vehicle or airship carries a compound eye camera in a circular flight around the building target; indoors, a tripod or hand-held gimbal carries a compound eye camera to record the interior scene; the compound eye camera is also controlled to record the boundary points of the limited space; the initial data are returned to the task allocation module, and the pre-shot data are used to construct the building acquisition grid plan;
(3) The pre-shot data are preprocessed by the task allocation module: the three-dimensional coordinates of the building boundary points are calculated by binocular vision, and the size, shape and internal frame of the building entity to be reconstructed are abstracted from the corners. The processing workflow is as follows:
(3-1) apply preliminary image processing (mainly edge sharpening) to the pre-shot pictures;
(3-2) extract the start and end points of the edge line segments in each picture;
(3-3) select start/end point pairs of the extracted edge line segments;
(3-4) match the edge line segments across adjacent pictures, finding the matching regions of their start and end points;
(3-5) calculate the three-dimensional coordinates of the start and end points of each edge line segment by the binocular vision algorithm, forming a line segment in the virtual three-dimensional space;
(3-6) remove the line segments that are not connected to any other;
(3-7) select two or more connected line segments and join them into a plane; if the selected segments cannot be closed, add a new segment to close the plane;
(3-8) repeat step (3-7) to obtain a pre-reconstructed object composed of planes and their intersection lines;
(4) The building acquisition grid is constructed by the task allocation module from the preprocessed, pre-reconstructed entity as follows:
(4-1) find all planes and facades in the pre-reconstructed entity, covering the exterior scene (outer walls, roof, ground, etc.) and the interior scene (corridors, elevators, rooms, underground parking, electric and gas pipe galleries, etc.) of the building;
(4-2) according to the required acquisition resolution of the building scene, starting from the edge of the plane closest to the ground, determine each projection plane in turn from the camera parameters, so that the compound eye camera projection planes uniformly cover the plane with an overlap rate between projection planes of more than 50%;
(4-3) the projection plane of each sub-eye is a rectangular acquisition grid cell; the cells cover all areas of the interior and exterior scenes of the whole building, forming the building acquisition grid system; each cell corresponds to one sub-eye shooting area, and the content of the cell area is refreshed in real time;
(5) Task planning and optimization are performed by the task allocation module; the processing workflow is as follows:
(5-1) find all intersection lines of the pre-reconstructed entity; starting from the point closest to the ground, place the compound eye camera projection centre points on the intersection lines while keeping the overlap rate between projection planes above 50%;
(5-2) back-solve the camera pose from the compound eye camera projection centre;
(5-3) redundant shooting pose elimination. The camera poses calculated in step (5-2) exhibit two problems: case 1, two or more cameras have nearly identical shooting positions; case 2, two or more cameras have nearly identical shooting positions and shooting angles. Adjacent cameras placed too close together degrade the accuracy of three-dimensional scene reconstruction and make the camera count redundant, so both cases are redundant poses. They are eliminated as follows. Case 1: abstract each camera as a point, treat the points with close shooting positions as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position as the camera shooting position. Case 2: treat the shooting points whose positions and angles are both close as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position and angle as the compound eye camera's; since the angles are close, the shooting angle need not be recomputed;
(5-4) task allocation: given the number of compound eye cameras, a genetic algorithm combined with the Dijkstra algorithm minimizes the moving distance of the compound eye camera that must move farthest, and a task is allocated to each compound eye camera;
(6) According to the task allocation results, in the air an unmanned aerial vehicle or airship carrying a compound eye camera hovers at the designated position; on the ground an all-terrain vehicle carries the compound eye camera and fixes it at the designated position; indoors a tripod carries the compound eye camera to the designated position; then spatial pose calibration, geographic position calibration and unified clock calibration of the compound eye cameras are performed;
(7) Shooting: every compound eye camera shoots simultaneously under the unified clock, so the data satisfy space-time consistency; the shot data are returned, and the computer system automatically reconstructs the three-dimensional digital scene of that same instant. Following the frame-rate requirement of the dynamic scene, the virtual compound eye shoots once every 1/(frame rate) seconds; for example, at a required frame rate of 25 fps the shooting interval is set to 1/25 second, achieving dynamic shooting with real-time refresh.
In the above technical solution, the camera pose is back-solved from the camera projection centre in step (5-2) as follows: from the projection centre point, draw a line parallel to the normal of the reconstruction plane containing it, directed away from the object; the point along this line at the default object distance used in shooting is taken as the centre position of the compound eye camera. Check that the compound eye camera at this point neither collides with other objects nor leaves the limited space; if the check passes, this point is the shooting position, and the shooting angles are the angles between the normal line and the 3 coordinate planes. If no valid position is found there, the centre position is moved up and down within a set range until a centre position passing the check is found; the compound eye camera position is then the centre point that finally passes the check, and the shooting angles are the angles between the coordinate planes and the straight line through the checked centre point and the projection centre.
In the above technical solution, the genetic algorithm of step (5-4) is modelled as follows: the number of compound eye cameras is the number of genes; whether a compound eye camera is assigned a shooting position point indicates whether the corresponding gene is expressed; the longest moving distance among the compound eye cameras of a genome measures the genome's quality, the shorter the better. The moving distance of each expressed gene is obtained by optimizing its path with the Dijkstra algorithm and computing the shortest path through all of its points.
The invention introduces the concept of the virtual compound eye, which has the following advantages over the prior art:
First, the acquired data possess space-time consistency, guaranteeing that the reconstructed three-dimensional scene belongs to a single time slice and enabling dynamic three-dimensional scene acquisition.
Second, a fast entity reconstruction mode is provided: simple pre-shooting quickly yields the approximate structure of the entity to be reconstructed, avoiding redundant measurement.
Third, a building acquisition grid division method is provided: compound eye camera shooting points are assigned according to the acquisition grid and camera poses are planned in advance, reducing data redundancy.
Fourth, a shooting-space limiting function is provided, so a compound eye camera cannot collide with objects in the environment during shooting, ensuring safety.
Fifth, a genetic algorithm combined with the Dijkstra algorithm guarantees an optimal synchronized shooting schedule.
Sixth, a scientific shooting and acquisition scheme, based on the virtual compound eye, is provided for large dynamic three-dimensional digital scenes with space-time consistency, ensuring real-time refresh of the three-dimensional scene.
Drawings
FIG. 1 is a flow chart of a building three-dimensional scene construction method of the present invention.
FIG. 2 is a flow chart of the preprocessing of the reconstructed entity size and shape according to the present invention.
Fig. 3 is a flow chart of building acquisition grid construction according to the present invention.
FIG. 4 is a flow chart of task planning and optimization according to the present invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings.
This embodiment provides a virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition, comprising a data acquisition module, a positioning module and a task allocation module.
The data acquisition module acquires pictures at specified positions and angles and transmits the acquired data back wirelessly in real time for reconstructing the three-dimensional model. The module is formed by the cooperation of all compound eye cameras facing the building target body; all compound eye cameras facing the target body are deployed according to the established building acquisition grid and virtually grouped into a complete, systematic compound eye system called a virtual compound eye. The virtual compound eye is built jointly by a plurality of mutually cooperating compound eye cameras planned against the established building acquisition grid; each compound eye camera carries a plurality of lenses, an individual lens being called a sub-eye. All lenses acquire data as dictated by the unified clock, yielding data with space-time consistency. The carrier of a compound eye camera may be an unmanned aerial vehicle, airship, all-terrain vehicle, tripod, hanger, or the like. A single compound eye camera is a device carrying multiple lenses that acquires images simultaneously over 360° in the horizontal plane and 360° in the vertical plane, so one compound eye camera by itself satisfies locally space-time-consistent shooting. A single compound eye camera also receives shooting commands from the host computer and transmits the captured picture data and its position and attitude information back to the host computer; each compound eye camera takes its occupation node from the established building acquisition grid, operates cooperatively under the control of the unified clock, and accepts unified scheduling.
The positioning module comprises a GPS locator and a UWB (ultra-wideband) positioning system.
The GPS locator is installed in each compound eye camera to receive GPS positioning signals and determine the global coordinates of each compound eye camera and of the shooting area.
The UWB positioning system precisely positions a compound eye camera within the building acquisition grid (floor, orientation, room, and specific point within the room). Since satellite signals are severely attenuated indoors, UWB positioning is used inside the building area, where it achieves centimetre-level accuracy and supports three-dimensional positioning. The UWB positioning system comprises base stations and tags: the base stations are arranged around the shooting target, and a tag is installed in each compound eye camera. Each tag transmits pulses at a set frequency and carries a unique ID. The base stations receive the UWB pulses transmitted by a tag and measure the tag-to-station distances; with several base stations, the tag position is computed precisely by an algorithm.
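The patent leaves the tag position-solving algorithm unspecified. With four or more ranging base stations, a common choice is linearized least-squares multilateration; the following Python sketch (the function name, station layout and use of NumPy are illustrative assumptions, not the patent's implementation) shows how a tag position could be recovered from base-station coordinates and measured tag-to-station distances.

```python
import numpy as np

def solve_tag_position(stations, distances):
    """Least-squares multilateration: recover a 3D tag position from known
    base-station coordinates and measured tag-to-station distances."""
    stations = np.asarray(stations, dtype=float)   # (n, 3), n >= 4
    d = np.asarray(distances, dtype=float)         # (n,)
    # Subtracting station 0's range equation ||x - s_i||^2 = d_i^2 from the
    # others removes the quadratic term and leaves a linear system A x = b.
    s0, d0 = stations[0], d[0]
    A = 2.0 * (stations[1:] - s0)
    b = (d0**2 - d[1:]**2
         + np.sum(stations[1:]**2, axis=1) - np.sum(s0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares solution
    return x

# Six base stations around a building volume (coordinates in metres).
stations = [(0, 0, 0), (30, 0, 0), (30, 20, 0),
            (0, 20, 0), (0, 0, 9), (30, 20, 9)]
tag = np.array([12.0, 7.5, 3.0])
ranges = [np.linalg.norm(tag - np.array(s)) for s in stations]
print(solve_tag_position(stations, ranges))        # ~[12.0, 7.5, 3.0]
```

Real UWB ranges are noisy, which is one reason the method calls for redundant base stations; the least-squares form above degrades gracefully as measurement noise grows.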
The task allocation module calculates the occupation node of each compound eye camera in the established building acquisition grid plan according to the size and shape of the building entity to be reconstructed, the limits of the shooting space, and the required accuracy of the reconstructed model; it determines the shooting positions, attitudes and parameters, determines which sub-eyes of each compound eye camera execute the acquisition task, and transmits these data to the compound eye cameras. At set intervals the module sends time, pose and occupation calibration commands to the compound eye cameras to perform clock calibration, pose calibration and occupation calibration.
The task allocation module is implemented as software running on a computer. Its main functions are: providing a virtual three-dimensional space, preprocessing the size and shape of the entity to be reconstructed, constructing the building acquisition grid, and task planning and optimization.
1. Providing a virtual three-dimensional space
A virtual three-dimensional coordinate space is provided for holding the point cloud data of objects and for defining the space in which the compound eye carriers move; it also provides functions such as object collision detection, inclusion detection, and projection of points onto arbitrary surfaces.
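As an illustration of these geometric services, the sketch below (helper names are hypothetical; an axis-aligned box and a point-cloud clearance test stand in for whatever internal representations the actual software uses) implements point-to-plane projection, an inclusion test for the limited space, and a simple collision test.

```python
import numpy as np

def project_point_to_plane(p, plane_point, normal):
    """Project point p onto the plane through plane_point with the given normal."""
    p = np.asarray(p, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)                      # unit normal
    return p - np.dot(p - np.asarray(plane_point, float), n) * n

def point_in_aabb(p, box_min, box_max):
    """Inclusion test: is p inside the axis-aligned limited space?"""
    p = np.asarray(p, float)
    return bool(np.all(p >= box_min) and np.all(p <= box_max))

def collides(p, cloud, clearance=0.5):
    """Collision test: p 'collides' if any point of the pre-reconstruction
    point cloud lies within `clearance` metres of it."""
    d = np.linalg.norm(np.asarray(cloud, float) - np.asarray(p, float), axis=1)
    return bool(np.any(d < clearance))
```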
2. Reconstruction entity size and shape pre-processing
The first-acquired data (i.e. the pre-shot data) are preprocessed: the three-dimensional coordinates of the corner points are calculated by binocular vision, and the size and shape of the entity to be reconstructed are abstracted from the corners.
3. Building acquisition grid construction
From the preprocessed, pre-reconstructed entity, the size and position of each camera projection plane are determined by the required acquisition resolution of the building scene and the camera parameters; the projection plane of each sub-eye is one rectangular acquisition grid cell, and the cells cover all areas of the interior and exterior scenes of the whole building, forming the building acquisition grid system.
4. Mission planning and optimization
The shooting position, camera attitude and related data of each compound eye camera are calculated from the grid planning result under the constraints of reconstructed-model accuracy and spatial position.
In addition to the above functions, the task allocation module calculates the camera pan-tilt angle, heading angle and horizontal angle to be adjusted; adjusts the camera pan-tilt head so the compound eye camera holds its shooting attitude; and adjusts the shooting parameters of each sub-eye in the compound eye camera, controlling it to shoot. The compound eye camera pan-tilt head is a supporting/hanging/lifting/side-shifting device that fixes the compound eye camera; it keeps the camera stable, finely adjusts its position, and prevents/isolates/damps vibration. The pan-tilt head carries stepper motors and a connecting-rod bracket, so it can rotate horizontally and vertically or make partial lateral movements, finely adjusting the shooting angle of the compound eye camera. The pan-tilt head is mounted on the carrier (unmanned aerial vehicle, airship, all-terrain vehicle, tripod, hanger, etc.), and the compound eye camera is fixed on the pan-tilt head.
To guarantee good shooting quality, the photographed surface should coincide with the projection plane as closely as possible, and adjacent cameras should be spaced as far apart as the overlap requirement allows, which improves reconstruction accuracy.
This embodiment also provides a working method of the virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition; as shown in FIG. 1, the method comprises the following steps:
(1) Install a GPS locator on each compound eye camera to determine the global coordinates of the building area; build the UWB positioning system, selecting suitable positions for the UWB base stations (three-dimensional positioning requires at least 6 base stations); install a UWB tag on each compound eye camera;
(2) Pre-shooting: outdoors, an unmanned aerial vehicle or airship carries a compound eye camera in a circular flight around the building target; indoors, a tripod or hand-held gimbal carries a compound eye camera to record the interior scene; the compound eye camera is also controlled to record the boundary points of the limited space; the initial data are returned to the task allocation module, and the pre-shot data are used to construct the building acquisition grid plan;
(3) The pre-shot data are preprocessed by the task allocation module: the three-dimensional coordinates of the corner points are calculated by binocular vision, and the size, shape and internal frame of the entity to be reconstructed are abstracted from the corners; as shown in FIG. 2, the processing workflow is as follows:
(3-1) apply preliminary image processing (mainly edge sharpening) to the pre-shot pictures;
(3-2) extract the start and end points of the edge line segments in each picture;
(3-3) select start/end point pairs of the extracted edge line segments;
(3-4) match the edge line segments across adjacent pictures, finding the matching regions of their start and end points;
(3-5) calculate the three-dimensional coordinates of the start and end points of each edge line segment by the binocular vision algorithm (see the triangulation sketch after this list), forming a line segment in the virtual three-dimensional space;
(3-6) remove the line segments that are not connected to any other;
(3-7) select two or more connected line segments and join them into a plane; if the selected segments cannot be closed, add a new segment to close the plane;
(3-8) repeat step (3-7) to obtain a pre-reconstructed object composed of planes and their intersection lines;
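The patent does not spell out the binocular algorithm behind steps (3-4) and (3-5); a standard way to realize it is linear (DLT) triangulation of each matched endpoint from the two pictures' projection matrices. The sketch below is that textbook formulation, offered under the assumption that calibrated 3x4 projection matrices are available for the adjacent pictures.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """DLT triangulation of one matched point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # solve A X = 0 for homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]             # back to Euclidean coordinates

def triangulate_segment(P1, P2, seg1, seg2):
    """Lift a matched edge segment, given as ((u, v) start, (u, v) end) in
    each picture, into a line segment of the virtual three-dimensional space."""
    start = triangulate_point(P1, P2, seg1[0], seg2[0])
    end = triangulate_point(P1, P2, seg1[1], seg2[1])
    return start, end
```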
(4) The building acquisition grid is constructed by the task allocation module from the preprocessed, pre-reconstructed entity; as shown in FIG. 3, the construction method is as follows:
(4-1) find all planes in the pre-reconstructed entity, covering the exterior scene (outer walls, roof, ground, etc.) and the interior scene (corridors, elevators, rooms, underground parking, electric and gas pipe galleries, etc.) of the building;
(4-2) according to the required acquisition resolution of the building scene, starting from the edge of the plane closest to the ground, determine each projection plane in turn from the camera parameters, so that the compound eye camera projection planes uniformly cover the plane with an overlap rate between projection planes of more than 50% (a tiling sketch follows this list);
(4-3) the projection plane of each sub-eye is a rectangular acquisition grid cell; the cells cover all areas of the interior and exterior scenes of the whole building, forming the building acquisition grid system; each cell corresponds to one sub-eye shooting area, and the content of the cell area is refreshed in real time;
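To make step (4-2) concrete, the sketch below tiles one rectangular plane with sub-eye projection footprints at 50% overlap. The footprint size, the edge clamping and the resolution-to-footprint conversion mentioned in the comment are illustrative assumptions rather than values fixed by the patent.

```python
import math

def acquisition_grid(plane_w, plane_h, fp_w, fp_h, overlap=0.5):
    """Centre points of rectangular sub-eye footprints covering a
    plane_w x plane_h plane with the given overlap rate in each direction,
    starting from the edge closest to the ground (y = 0)."""
    step_x = fp_w * (1.0 - overlap)             # stride between footprints
    step_y = fp_h * (1.0 - overlap)
    nx = max(1, math.ceil((plane_w - fp_w) / step_x) + 1)
    ny = max(1, math.ceil((plane_h - fp_h) / step_y) + 1)
    centres = []
    for j in range(ny):
        for i in range(nx):
            # Clamp the last row/column so footprints never leave the plane.
            cx = min(fp_w / 2 + i * step_x, plane_w - fp_w / 2)
            cy = min(fp_h / 2 + j * step_y, plane_h - fp_h / 2)
            centres.append((cx, cy))
    return centres

# A 4000 x 3000 px sub-eye at a 5 mm/px ground sampling distance covers
# roughly 20 m x 15 m; a 60 m x 24 m facade then needs 15 grid cells.
print(len(acquisition_grid(60.0, 24.0, 20.0, 15.0)))  # 15
```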
(5) Task planning and optimization are performed by the task allocation module; as shown in FIG. 4, the processing workflow is as follows:
(5-1) find all intersection lines of the entity to be reconstructed; starting from the point closest to the ground, place the compound eye camera projection centre points on the intersection lines while keeping the overlap rate between projection planes above 50%;
(5-2) back-solve the camera pose from the compound eye camera projection centre;
(5-3) redundant shooting pose elimination. The camera poses calculated in step (5-2) exhibit two problems: case 1, two or more cameras have nearly identical shooting positions; case 2, two or more cameras have nearly identical shooting positions and shooting angles. Adjacent cameras placed too close together degrade the accuracy of three-dimensional scene reconstruction and make the camera count redundant, so both cases are redundant poses. They are eliminated as follows (a clustering sketch follows). Case 1: abstract each camera as a point, treat the points with close shooting positions as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position as the camera shooting position. Case 2: treat the shooting points whose positions and angles are both close as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position and angle as the compound eye camera's; since the angles are close, the shooting angle need not be recomputed;
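Keeping "the point whose summed distance to the other points of the set is smallest" amounts to selecting the medoid of each group. The sketch below implements case 1 with a greedy proximity grouping; the patent does not fix the clustering method, and the 0.5 m radius is an assumed threshold. Case 2 could reuse the same routine on concatenated position-angle vectors under a suitable combined metric.

```python
import numpy as np

def keep_medoids(points, radius=0.5):
    """Redundant-pose elimination, case 1: group shooting positions lying
    within `radius` metres of a seed point and keep each group's medoid."""
    pts = np.asarray(points, float)
    unused = set(range(len(pts)))
    kept = []
    while unused:
        i = unused.pop()                          # arbitrary seed point
        group = [i] + [j for j in list(unused)
                       if np.linalg.norm(pts[i] - pts[j]) < radius]
        for j in group[1:]:
            unused.discard(j)
        # Medoid: minimal summed distance to the other members of the group.
        sums = [sum(np.linalg.norm(pts[a] - pts[b]) for b in group)
                for a in group]
        kept.append(pts[group[int(np.argmin(sums))]])
    return np.array(kept)
```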
(5-4) task allocation: given the number of compound eye cameras, a genetic algorithm combined with the Dijkstra algorithm minimizes the moving distance of the compound eye camera that must move farthest, and a task is allocated to each compound eye camera;
(6) According to the task allocation results, in the air an unmanned aerial vehicle or airship carrying a compound eye camera hovers at the designated position, on the ground an all-terrain vehicle carries the compound eye camera and fixes it at the designated position, and indoors a tripod carries the compound eye camera to the designated position; then spatial pose calibration, geographic position calibration and unified clock calibration of the compound eye cameras are performed;
(7) Shooting: every compound eye camera shoots at the same instant under the unified clock, so the data satisfy space-time consistency; the shot data are returned, and the computer system automatically reconstructs the three-dimensional digital scene of that same instant. Following the frame-rate requirement of the dynamic scene, the virtual compound eye shoots once every 1/(frame rate) seconds; for example, at a required frame rate of 25 fps the shooting interval is set to 1/25 second, achieving dynamic shooting with real-time refresh.
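The shooting cadence of step (7) reduces to scheduling one simultaneous exposure of every camera each 1/(frame rate) seconds on the unified clock. A minimal sketch (the function name and the plain-seconds time representation are illustrative):

```python
def trigger_times(start_s, duration_s, fps=25.0):
    """Exposure timestamps on the unified clock: one simultaneous shot of
    every compound eye camera each 1/fps seconds."""
    interval = 1.0 / fps                     # e.g. 25 fps -> 0.04 s
    n = int(duration_s * fps)
    return [start_s + k * interval for k in range(n + 1)]

# Every camera fires at exactly these clock values, so each frame set is a
# space-time-consistent snapshot of the whole scene.
print(trigger_times(0.0, 0.2))  # ~[0.0, 0.04, 0.08, 0.12, 0.16, 0.2]
```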
In the above embodiment, the camera pose is back-solved from the camera projection centre in step (5-2) as follows: from the projection centre point, draw a line parallel to the normal of the reconstruction plane containing it, directed away from the object; the point along this line at the default object distance used in shooting is taken as the centre position of the compound eye camera. Check that the compound eye camera at this point neither collides with other objects nor leaves the limited space; if the check passes, this point is the shooting position, and the shooting angles are the angles between the normal line and the 3 coordinate planes. If no valid position is found there, the centre position is moved up and down within a set range until a centre position passing the check is found; the compound eye camera position is then the centre point that finally passes the check, and the shooting angles are the angles between the coordinate planes and the straight line through the checked centre point and the projection centre.
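A minimal sketch of this back-solve, reading "moved up and down" as a vertical search along the world z axis (one possible interpretation of the translation) and taking the collision and limited-space checks as caller-supplied predicates such as those sketched earlier:

```python
import numpy as np

def solve_camera_pose(centre, normal, object_distance,
                      in_limited_space, is_colliding,
                      search_range=2.0, step=0.1):
    """Back-solve a camera pose from a projection centre.
    centre: projection centre on the reconstruction plane;
    normal: plane normal pointing away from the solid;
    object_distance: default shooting distance along the normal.
    Returns (camera position, unit view direction) or None."""
    centre = np.asarray(centre, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    default_pos = centre + object_distance * n
    offsets = [0.0]                          # try the default position first,
    k = step                                 # then slide up/down alternately
    while k <= search_range:
        offsets += [k, -k]
        k += step
    for off in offsets:
        pos = default_pos + np.array([0.0, 0.0, off])
        if in_limited_space(pos) and not is_colliding(pos):
            view = centre - pos              # line through the centre point
            return pos, view / np.linalg.norm(view)  # and projection centre
    return None                              # no admissible pose found
```

The shooting angles of the patent follow from the returned view direction: the angle between the line and each coordinate plane is the arcsine of the corresponding component of the unit direction.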
In the above embodiment, the genetic algorithm of step (5-4) is modelled as follows: the number of compound eye cameras is the number of genes; whether a compound eye camera is assigned a shooting position point indicates whether the corresponding gene is expressed; the longest moving distance among the compound eye cameras of a genome measures the genome's quality, the shorter the better. The moving distance of each expressed gene is obtained by optimizing its path with the Dijkstra algorithm and computing the shortest path through all of its points.
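The sketch below shows this min-max modelling with a simple genetic algorithm. It is deliberately a toy: a chromosome assigns every shooting point to a camera, fitness is the longest per-camera route, and a greedy nearest-neighbour estimate over straight-line distances stands in for the Dijkstra-refined shortest path of the patent; the population size, crossover and mutation settings are arbitrary choices.

```python
import math
import random

def route_length(points):
    """Greedy nearest-neighbour estimate of the shortest route visiting all
    of one camera's shooting points (a stand-in for the Dijkstra step)."""
    if len(points) < 2:
        return 0.0
    rest, cur, total = points[1:], points[0], 0.0
    while rest:
        nxt = min(rest, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        rest.remove(nxt)
        cur = nxt
    return total

def fitness(chromo, points, n_cams):
    """Genome quality = longest moving distance among all cameras
    (the patent's criterion: the shorter, the better)."""
    return max(route_length([points[i] for i, c in enumerate(chromo) if c == cam])
               for cam in range(n_cams))

def allocate(points, n_cams, pop=40, gens=200, mut=0.05, seed=0):
    rng = random.Random(seed)
    population = [[rng.randrange(n_cams) for _ in points] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, points, n_cams))
        survivors = population[:pop // 2]                 # selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(len(points))              # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):                   # mutation
                if rng.random() < mut:
                    child[i] = rng.randrange(n_cams)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda c: fitness(c, points, n_cams))

pts = [(float(i % 7) * 5.0, float(i // 7) * 5.0, 2.0) for i in range(21)]
best = allocate(pts, n_cams=3)
print(fitness(best, pts, 3))   # longest single-camera route after evolution
```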
Details not described in the present specification belong to the prior art known to those skilled in the art.
The above examples are provided for clarity of illustration only and are not intended to limit the embodiments of the invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; they cannot all be enumerated here, and all obvious changes and modifications derived from the invention remain within its scope of protection.

Claims (6)

1. A virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition is characterized in that: the system comprises a data acquisition module, a positioning module and a task allocation module;
the data acquisition module is used for acquiring pictures at specified positions and angles and transmitting the acquired data back wirelessly in real time for reconstructing the three-dimensional model; the data acquisition module is formed by the cooperation of all compound eye cameras facing the building target body, and all compound eye cameras facing the target body are deployed according to the established building acquisition grid and virtually grouped into a complete, systematic compound eye system called a virtual compound eye; the virtual compound eye is built jointly by a plurality of mutually cooperating compound eye cameras planned against the established building acquisition grid, each compound eye camera carries a plurality of lenses, and an individual lens is called a sub-eye; all lenses acquire data as dictated by the unified clock, obtaining data with space-time consistency;
the positioning module comprises a GPS locator and a UWB positioning system;
the GPS locator is installed in each compound eye camera to receive GPS positioning signals and determine the global coordinates of each compound eye camera and of the shooting area;
the UWB positioning system is used for precisely positioning a compound eye camera within the building acquisition grid; the UWB positioning system comprises base stations and tags, the base stations being arranged around the shooting target and a tag being installed in each compound eye camera; each tag transmits pulses at a set frequency and carries a unique ID;
the base stations of the UWB positioning system receive the UWB pulses transmitted by a tag and measure the tag-to-station distances; with several base stations, the tag positions are computed by an algorithm;
the task allocation module calculates the occupation node of each compound eye camera in the established building acquisition grid plan according to the size and shape of the building entity to be reconstructed as determined by the grid plan, the limits of the shooting space, and the required accuracy of the reconstructed model; it determines the shooting position, shooting attitude and shooting parameters, determines which sub-eyes of each compound eye camera execute the acquisition task, and transmits these data to the compound eye cameras; at set intervals the module sends time, pose and occupation calibration commands to the compound eye cameras to perform clock calibration, pose calibration and occupation calibration.
2. The virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition according to claim 1, characterized in that: the compound eye camera is a device carrying a plurality of lenses that can acquire images simultaneously over 360° in the horizontal plane and 360° in the vertical plane, and is used to acquire picture data, receive shooting commands from the host computer, and transmit the captured picture data and the compound eye camera's position and attitude information back to the host computer; each compound eye camera takes its occupation node from the established building acquisition grid, operates cooperatively under the control of the unified clock, and accepts unified scheduling.
3. The virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition according to claim 1, characterized in that: the task allocation module calculates the camera pan-tilt angle, heading angle and horizontal angle to be adjusted; adjusts the camera pan-tilt head so the compound eye camera holds its shooting attitude; adjusts the shooting parameters of each sub-eye in the compound eye camera and controls the compound eye camera to shoot; the compound eye camera is mounted on its carrier through the camera pan-tilt head.
4. A working method of the virtual compound eye camera system, meeting space-time consistency, for building three-dimensional scene acquisition according to claim 1, characterized in that the method comprises the following steps:
(1) Install a GPS locator on each compound eye camera to determine the global coordinates of the building area; build the UWB positioning system and select suitable positions for the UWB base stations, three-dimensional positioning requiring at least 6 base stations; install a UWB tag on each compound eye camera;
(2) Pre-shooting: outdoors, an unmanned aerial vehicle or airship carries a compound eye camera in a circular flight around the building target; indoors, a tripod or hand-held gimbal carries a compound eye camera to record the interior scene; the compound eye camera is controlled to record the boundary points of the limited space; the initial data are returned to the task allocation module, and the pre-shot data are used to construct the building acquisition grid plan;
(3) The pre-shot data are preprocessed by the task allocation module: the three-dimensional coordinates of the building boundary points are calculated by binocular vision, and the size, shape and internal frame of the building entity to be reconstructed are abstracted from the corners; the processing workflow is as follows:
(3-1) apply preliminary image processing to the pre-shot pictures;
(3-2) extract the start and end points of the edge line segments in each picture;
(3-3) select start/end point pairs of the extracted edge line segments;
(3-4) match the edge line segments across adjacent pictures, finding the matching regions of their start and end points;
(3-5) calculate the three-dimensional coordinates of the start and end points of each edge line segment by the binocular vision algorithm, forming a line segment in the virtual three-dimensional space;
(3-6) remove the line segments that are not connected to any other;
(3-7) select two or more connected line segments and join them into a plane; if the selected segments cannot be closed, add a new segment to close the plane;
(3-8) repeat step (3-7) to obtain a pre-reconstructed object composed of planes and their intersection lines;
(4) The building acquisition grid is constructed by the task allocation module from the preprocessed, pre-reconstructed entity; the construction method is as follows:
(4-1) find all planes and facades in the pre-reconstructed entity, including the exterior scene and the interior scene of the building;
(4-2) according to the required acquisition resolution of the building scene, starting from the edge of the plane closest to the ground, determine each projection plane in turn from the camera parameters, so that the compound eye camera projection planes uniformly cover the plane with an overlap rate between projection planes of more than 50%;
(4-3) the projection plane of each sub-eye is a rectangular acquisition grid cell; the cells cover all areas of the interior and exterior scenes of the whole building, forming the building acquisition grid system; each cell corresponds to one sub-eye shooting area, and the content of the cell area is refreshed in real time;
(5) Task planning and optimization are performed by the task allocation module; the processing workflow is as follows:
(5-1) find all intersection lines of the entity to be reconstructed; starting from the point closest to the ground, place the compound eye camera projection centre points on the intersection lines while keeping the overlap rate between projection planes above 50%;
(5-2) back-solve the camera pose from the compound eye camera projection centre;
(5-3) redundant shooting pose elimination: the camera poses calculated in step (5-2) exhibit two problems: case 1, two or more cameras have nearly identical shooting positions; case 2, two or more cameras have nearly identical shooting positions and shooting angles;
adjacent cameras placed too close together degrade the accuracy of three-dimensional scene reconstruction and make the camera count redundant, so both cases are redundant poses, which are eliminated as follows:
case 1: abstract each camera as a point, treat the points with close shooting positions as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position as the camera shooting position;
case 2: treat the shooting points whose positions and angles are both close as a point set, keep the point whose summed distance to the other points of the set is smallest, delete the other points, and take the kept point's shooting position and angle as the compound eye camera's shooting position and angle; since the shooting angles are close, the shooting angle need not be recomputed;
(5-4) task allocation: given the number of compound eye cameras, a genetic algorithm combined with the Dijkstra algorithm minimizes the moving distance of the compound eye camera that must move farthest, and a task is allocated to each compound eye camera;
(6) According to the task allocation results, in the air an unmanned aerial vehicle or airship carrying a compound eye camera hovers at the designated position, on the ground an all-terrain vehicle carries the compound eye camera and fixes it at the designated position, and indoors a tripod carries the compound eye camera to the designated position; spatial pose calibration, geographic position calibration and unified clock calibration of the compound eye cameras are then performed;
(7) Shooting is carried out: all compound eye cameras shoot simultaneously according to the unified clock, so the returned data satisfy space-time consistency for that instant, and the computer system automatically reconstructs the three-dimensional digital scene of that moment; to meet the dynamic scene frame rate requirement, the virtual compound eye shoots once every S seconds, where S is the reciprocal of the required frame rate, realizing dynamic shooting with real-time refreshing (see the timing sketch below).
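A minimal timing sketch for this step, assuming a single controller drives all cameras from one monotonic clock; `cam.shoot` is a stand-in for whatever capture interface the real compound eye cameras expose:

```python
# Sketch: fire every compound eye camera on the same unified clock every
# S seconds so each round of frames shares one timestamp (step 7).
import time

def capture_loop(cameras, frame_rate_hz, rounds):
    interval_s = 1.0 / frame_rate_hz  # S = 1 / required frame rate
    t0 = time.monotonic()
    for k in range(rounds):
        # Sleep until the k-th tick of the unified clock rather than
        # "now + S", so timing error does not accumulate across rounds.
        time.sleep(max(0.0, t0 + k * interval_s - time.monotonic()))
        stamp = t0 + k * interval_s
        for cam in cameras:
            cam.shoot(stamp)  # every frame in this round shares `stamp`
```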
5. The working method of the virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency as claimed in claim 4, characterized in that solving the camera pose inversely from the projection center in step (5-2) comprises: from the projection center point, drawing a line parallel to the normal of the reconstruction plane containing that point, directed from the starting point away from the object; taking the point on this line at the default shooting object distance as the center position of the compound eye camera; checking that the compound eye camera at this point neither collides with other objects nor leaves the restricted space; if the check passes, this point is the shooting position of the compound eye camera, and the shooting angles are the included angles between the parallel line and the 3 coordinate planes; if no valid center position is found on the line, moving the candidate center position up and down within a certain range until a position passing the check is found; the compound eye camera is then located at the center point that finally passes the check, and the shooting angles are the included angles between the coordinate planes and the straight line through the detected center point and the projection center (see the placement sketch below).
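A minimal placement sketch for this claim, assuming a unit plane normal and caller-supplied `collides` / `in_bounds` scene tests; the vertical search range and step are illustrative assumptions:

```python
# Sketch: put the camera one default object distance along the plane
# normal from the projection center, then nudge it up/down until it is
# collision-free and inside the permitted space (claim 5).
import math

def solve_pose(proj_center, normal, obj_dist, collides, in_bounds,
               max_offset=2.0, step=0.1):
    base = tuple(proj_center[i] + normal[i] * obj_dist for i in range(3))
    # Candidate vertical offsets: nominal position first, then
    # alternating up/down moves within the search range.
    offsets, k = [0.0], 1
    while k * step <= max_offset:
        offsets += [k * step, -k * step]
        k += 1
    for dz in offsets:
        cand = (base[0], base[1], base[2] + dz)
        if in_bounds(cand) and not collides(cand):
            # Shooting direction runs from the camera back through the
            # projection center; the angle with each coordinate plane is
            # asin(|component perpendicular to that plane| / length).
            d = [proj_center[i] - cand[i] for i in range(3)]
            length = math.sqrt(sum(v * v for v in d))
            angles = [math.degrees(math.asin(abs(v) / length)) for v in d]
            return cand, angles  # position + angles vs yz-, xz-, xy-planes
    return None  # no valid pose inside the search range

# Example with trivially permissive scene tests:
pose = solve_pose((0.0, 0.0, 3.0), (0.0, -1.0, 0.0), 2.0,
                  collides=lambda p: False, in_bounds=lambda p: True)
```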
6. The working method of the virtual compound eye camera system for building three-dimensional scene acquisition meeting space-time consistency as claimed in claim 4, characterized in that the genetic algorithm in step (5-4) is modeled as follows: the number of genes equals the number of compound eye cameras, and a gene is expressed if and only if its compound eye camera is assigned shooting position points; a genome is judged by the longest moving distance among its compound eye cameras, the shorter the better; the moving distance of each expressed gene is obtained by optimizing its path with Dijkstra's algorithm and computing the shortest route through all of that camera's points (see the objective sketch below).
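A minimal sketch of this objective, assuming the shooting positions form a graph of traversable waypoints; the greedy tour below stands in for the full route optimization, and a complete implementation would evolve point-to-camera assignments with crossover and mutation, keeping the genome with the smallest `fitness` value:

```python
# Sketch: score an assignment of shooting points to cameras by the
# LONGEST route any single camera travels (shorter is better), with
# distances taken from Dijkstra's algorithm (claim 6 / step 5-4).
import heapq

def dijkstra(graph, src):
    """Shortest distance from src to every node; graph is {u: {v: w}}."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def route_length(graph, points, start):
    """Greedy nearest-point tour over Dijkstra distances (TSP stand-in)."""
    total, cur, todo = 0.0, start, set(points)
    while todo:
        d = dijkstra(graph, cur)
        cur = min(todo, key=lambda p: d.get(p, float("inf")))
        total += d.get(cur, float("inf"))
        todo.remove(cur)
    return total

def fitness(graph, assignment, starts):
    """Min-max objective: the longest route among all expressed genes
    (cameras that were assigned at least one shooting point)."""
    return max((route_length(graph, pts, starts[cam])
                for cam, pts in assignment.items() if pts), default=0.0)

# Toy graph: 4 waypoints in a line, two cameras starting at the ends.
g = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0, 3: 1.0}, 3: {2: 1.0}}
print(fitness(g, {"cam_a": [1], "cam_b": [2]}, {"cam_a": 0, "cam_b": 3}))  # 1.0
```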
CN201810865682.0A 2018-08-01 2018-08-01 Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof Active CN109118585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810865682.0A CN109118585B (en) 2018-08-01 2018-08-01 Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof

Publications (2)

Publication Number Publication Date
CN109118585A (en) 2019-01-01
CN109118585B (en) 2023-02-10

Family

ID=64863911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810865682.0A Active CN109118585B (en) 2018-08-01 2018-08-01 Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof

Country Status (1)

Country Link
CN (1) CN109118585B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819453B (en) * 2019-03-05 2021-07-06 西安电子科技大学 Cost optimization unmanned aerial vehicle base station deployment method based on improved genetic algorithm
CN110675484A (en) * 2019-08-26 2020-01-10 武汉理工大学 Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera
CN110824461B (en) * 2019-11-18 2021-10-22 广东博智林机器人有限公司 Positioning method
CN110995967B (en) * 2019-11-22 2020-11-03 武汉理工大学 Virtual compound eye construction system based on variable flying saucer airship
CN111028274A (en) * 2019-11-28 2020-04-17 武汉理工大学 Projection marking system for traceless mesh division of smooth curved surfaces and working method thereof
CN111031259B (en) * 2019-12-17 2021-01-19 武汉理工大学 Inward type three-dimensional scene acquisition virtual compound eye camera
CN111192362B (en) * 2019-12-17 2023-04-11 武汉理工大学 Working method of virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene
CN111536913B (en) * 2020-04-24 2022-02-15 芜湖职业技术学院 House layout graph measuring device and measuring method thereof
WO2022000210A1 (en) * 2020-06-29 2022-01-06 深圳市大疆创新科技有限公司 Method and device for analyzing target object in site
CN117939086B (en) * 2024-03-19 2024-06-04 中通服建设有限公司 Intelligent monitoring platform and method for digital building

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226838A (en) * 2013-04-10 2013-07-31 福州林景行信息技术有限公司 Real-time spatial positioning method for mobile monitoring target in geographical scene
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Binocular vision navigation system and method based on power robot
CN112070818A (en) * 2020-11-10 2020-12-11 纳博特南京科技有限公司 Robot disordered grabbing method and system based on machine vision and storage medium

Also Published As

Publication number Publication date
CN109118585A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN109118585B (en) Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof
US10853931B2 (en) System and method for structural inspection and construction estimation using an unmanned aerial vehicle
CN107514993A Data acquisition method and system for single-building modeling based on unmanned aerial vehicle
CN112470092B (en) Surveying and mapping system, surveying and mapping method, device, equipment and medium
Yang et al. A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras
JP2018165726A (en) Point group data utilization system
US11892845B2 (en) System and method for mission planning and flight automation for unmanned aircraft
CN107356230A Digital mapping method and system based on real-scene three-dimensional model
JP2022554248A (en) Structural scanning using unmanned air vehicles
CN111192362B (en) Working method of virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scene
CN111259097A (en) Refined waypoint checking method applied to unmanned aerial vehicle inspection in photovoltaic industry
KR20190051703A Stereo drone and method and system for calculating earth volume at non-control points using the same
CN113066120B (en) Intelligent pole and tower inclination detection method based on machine vision
CN213302860U (en) Three-dimensional visual obstacle avoidance system of unmanned aerial vehicle
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
JP6080641B2 (en) 3D point cloud analysis method
Lauterbach et al. The Eins3D project—Instantaneous UAV-based 3D mapping for Search and Rescue applications
US20160371544A1 (en) Photovoltaic measurement system
CN109946564A Distribution network overhead line inspection data collection method and inspection system
WO2023064041A1 (en) Automated aerial data capture for 3d modeling of unknown objects in unknown environments
CN112286228A (en) Unmanned aerial vehicle three-dimensional visual obstacle avoidance method and system
CN115046531A (en) Pole tower measuring method based on unmanned aerial vehicle, electronic platform and storage medium
CN114463489B (en) Oblique photography modeling system and method for optimizing unmanned aerial vehicle route
CN113920186B Low-altitude unmanned aerial vehicle multi-source fusion positioning method
CN116753962B (en) Route planning method and device for bridge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant