CN111031259B - Inward type three-dimensional scene acquisition virtual compound eye camera - Google Patents


Info

Publication number
CN111031259B
CN111031259B (application CN201911304369.0A)
Authority
CN
China
Prior art keywords
acquisition
scene
determined
planning
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911304369.0A
Other languages
Chinese (zh)
Other versions
CN111031259A (en)
Inventor
王汉熙
耿杰
易茂祥
黄鑫
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority: CN201911304369.0A
Publication of CN111031259A
Application granted
Publication of CN111031259B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Abstract

The invention relates to the field of digital three-dimensional scene construction and provides an inward three-dimensional scene acquisition virtual compound eye camera comprising a planning module, an acquisition module, a control module, a communication module and a storage module. The planning module realizes the layout planning of the camera lenses based on an envelope sphere scheme; the acquisition module integrates a camera lens for acquiring images, a camera-lens pose gyroscope, a GPS (global positioning system) positioner, a UWB (ultra-wideband) positioner and a wireless transmitter; the control module is composed of a network control server; the storage module consists of an image acquisition card and a data storage card. Through the Internet of Things, these functional modules form a dynamic scene acquisition system with layout cooperation, information cooperation, action cooperation, function cooperation and task cooperation. The collected scene information and the constructed three-dimensional scene have space-time consistency, real-time dynamics, pose perceptibility and random integrability; the acquired information is not affected by the scene scale, and no matter how large the scene is, envelope sphere planning can dynamically reconfigure the camera scale, acquisition direction and other parameters.

Description

Inward type three-dimensional scene acquisition virtual compound eye camera
Technical Field
The invention relates to the field of digital three-dimensional scene construction, in particular to an inward three-dimensional scene acquisition virtual compound eye camera.
Background
The essence of constructing a dynamic digital three-dimensional scene is to build three-dimensional, visual scenes that satisfy the "four properties" of space-time consistency, real-time dynamics, pose perceptibility and random integrability, maintain one-to-one "object-position-time" correspondence, and can be immersed in and integrated with VR/AR/MR/XR scenes.
1. The reference premise for constructing a dynamic digital three-dimensional scene with these four properties is that omnidirectional, all-point acquisition is performed synchronously, taking the cross-section at a single instant as the reference, under the control of a unified clock.
2. Currently, live-action data is generally acquired with a single-aperture imaging system. Such schemes and systems have difficulty obtaining a panoramic view over a 180-degree, 360-degree or even 720-degree range in accordance with the synchronization rule.
3. If "huge point" acquisition objects are divided into forms such as elongated objects and round objects, a single-aperture imaging system performing 180-degree or even 360-degree full-field acquisition on the object generally follows one of two schemes:
(1) Scheme for elongated objects: for a small elongated object, a shooting point is determined and all scene materials of the object are obtained by continuously rotating the camera angle; for a medium-sized elongated object, a tripod must be moved among several acquisition points to acquire scene materials; for a large elongated object, a track may even have to be laid for acquisition.
(2) Scheme for round objects: for a small round object, a photographer holds the camera and circles it to acquire; for a medium-sized round object, a tripod is moved and acquisition points are determined for acquisition; for a large round object, the acquisition requirement can only be met by erecting an annular track.
Collecting real-scene image materials with these schemes has the following four problems:
1. Whether the object is elongated or round, neither moving the camera position nor rotating the camera angle can satisfy an imaging field of 360 degrees horizontally plus 360 degrees vertically.
2. Rotating the camera, moving the tripod, and moving the camera along a track all introduce a time interval between successive photos, so the photos taken do not satisfy the space-time consistency requirement.
3. Whether the object is elongated or round, only a static 360-degree horizontal plus 360-degree vertical scene can be constructed by stitching; a dynamic scene cannot be constructed.
4. When the scene is large, the acquisition points required by the above schemes are difficult to determine accurately, and even when they can be determined, the photographer and the lens may be unable to reach them.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide an inward three-dimensional scene acquisition virtual compound eye camera. The acquired scene information and the constructed three-dimensional scene have the four properties of space-time consistency, real-time dynamics, pose perceptibility and random integrability; the collected scene material information contains the kernel information needed to deduce and mark the "four shapes" of geographic position, landform, spatial distribution and geometric dimension; the acquired information is not affected by the scene scale, and no matter how large the scene is, envelope sphere planning can dynamically reconfigure the camera scale, acquisition direction and other parameters while always maintaining full-coverage acquisition of the "huge point" scene over 360 degrees horizontally plus 360 degrees vertically.
The object of the invention is achieved by the following technical measures.
The concept of the inward virtual compound eye camera is as follows: when collecting the surface three-dimensional scene of a "huge point" (for example, a mountain or a large building), a single lens can hardly achieve comprehensive coverage. Therefore a number of camera lenses, with no structural, mechanical or other physical association among them, enclose the "huge point" three-dimensionally according to an envelope sphere (including its deformed variants) constructed over 360 degrees horizontally and 360 degrees vertically. The camera lenses arranged at the envelope sphere nodes cooperatively collect the "huge point" scene with the support of the Internet of Things, and their photos achieve comprehensive redundant coverage of the "huge point" surface.
Suppose a camera lens serving as an acquisition module is arranged at each node of the envelope sphere and aimed at the center of the "huge point" to acquire images. The planning module inspects the attitude information of each camera lens according to scene changes and acquisition requirements, and plans the number, positions and layout of the camera lenses participating in acquisition. According to the acquisition layout, each camera lens is aimed at a specific local area of the "huge point" surface, and the pictures of all cameras together achieve continuous, redundant coverage of that surface. Under the unified clock, the control module sends acquisition parameters (acquisition direction, shooting focal length, acquisition depth of field, field angle, acquisition operation time, and the like) and acquisition instructions (acquisition time, marking stamp, and recovery database position pointer) to the camera lenses selected by the plan. The participating camera lenses carry out the dynamic acquisition operation according to the control module's instructions and the unified clock, and upload the acquired photo information to the storage module through the communication module. The Internet of Things is the communication hub connecting the planning module, acquisition module, control module, communication module, storage module and unified clock.
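The synchronized dispatch described above can be sketched in code: the control module issues one command per lens, all sharing a single unified-clock trigger instant. A minimal Python sketch; the command fields, the 26-lens count and the function names are illustrative assumptions, not part of the patent.

```python
import time
from dataclasses import dataclass

@dataclass
class AcquisitionCommand:
    """One command from the control module to one lens (field names are illustrative)."""
    lens_id: int
    direction: tuple        # (azimuth_deg, elevation_deg) toward the assigned surface patch
    focal_length_mm: float
    depth_of_field_m: float
    trigger_time: float     # unified-clock timestamp at which every lens shoots

def plan_commands(num_lenses, trigger_time, focal_length_mm=35.0, dof_m=10.0):
    """Build one synchronized command per lens; the shared trigger_time is what
    gives the captured photos their space-time consistency."""
    return [
        AcquisitionCommand(i, (0.0, 0.0), focal_length_mm, dof_m, trigger_time)
        for i in range(num_lenses)
    ]

# All 26 lenses receive the same trigger instant, 5 seconds from now.
commands = plan_commands(26, trigger_time=time.time() + 5.0)
```

A real deployment would additionally need clock synchronization across the wireless network (e.g. via the unified clock the patent describes) before the shared trigger time is meaningful.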
The inward three-dimensional scene acquisition virtual compound eye camera is characterized in that camera lenses, with no structural, mechanical or other physical association among them, are arranged at the envelope sphere node positions according to the "huge point" envelope sphere construction rule. Through the Internet of Things, the camera lenses (acquisition module), background planning module, control module, communication module, storage module and unified clock jointly constitute a virtual compound eye camera for acquiring full scene information of the "huge point" surface, realizing consistent, synchronous, real-time acquisition of full-scene pictures and photographic information of the "huge point" surface at the time nodes provided by the unified clock.
The invention provides an inward three-dimensional scene acquisition virtual compound eye camera which is formed by the cooperation of the following mutually independent functional modules through the Internet of things.
The planning module is used for realizing the layout planning of the camera lens based on the envelope sphere scheme;
the acquisition module is formed by integrating a camera lens for acquiring images, a camera lens pose gyroscope, a GPS (global positioning system) positioner, a UWB (ultra-wideband) positioner and a wireless transmitter; the camera lens adjusts parameters according to the acquisition instruction, performs acquisition work, and transmits the image to the storage module by using the wireless transmitter; the pose gyroscope is used for sensing the pose of the lens relative to the huge points; the GPS positioner is used for receiving the GPS positioning signal and determining the global geographic coordinate of the lens; the UWB positioner is used for determining the accurate relative positioning between the camera lenses; the position relation of each camera lens can be transmitted to the planning module through the communication module;
the control module is composed of a control panel integrated with a Web server and a network control server based on a socket and wirelessly connected to a wireless network, and realizes the cooperative control of multiple lenses in the acquisition module;
the communication module is composed of a wireless transmitter based on a wireless network and used for transmitting acquisition instructions, the coordinate position of a camera, shooting parameters, image acquisition time and image information;
the storage module consists of an image acquisition card and a data storage card and is used for storing images, camera lens positions and shooting parameters; the storage module adds a series of marks such as a lens stamp, a time stamp, a pose stamp and a position stamp to the stored data to ensure that the data are not mixed;
and forming a dynamic scene acquisition system with layout cooperation, information cooperation, action cooperation, function cooperation and task cooperation by the functional modules through the Internet of things.
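The storage module's stamping scheme can be sketched as attaching a composite key to every stored record, so that data from different lenses, instants, poses and positions never mix. The record layout below is an illustrative assumption.

```python
def stamp_record(image_bytes, lens_id, timestamp, pose, position):
    """Attach lens/time/pose/position stamps to one stored image record.
    The (lens_stamp, time_stamp) pair acts as a unique key, so records from
    different lenses or acquisition instants cannot be confused."""
    return {
        "lens_stamp": lens_id,
        "time_stamp": timestamp,
        "pose_stamp": pose,          # e.g. (yaw, pitch, roll) toward the "huge point"
        "position_stamp": position,  # GPS global coordinate or UWB relative coordinate
        "image": image_bytes,
    }

record = stamp_record(b"\x89PNG...", lens_id=7, timestamp=1577836800.0,
                      pose=(12.0, -3.5, 0.0), position=(30.52, 114.31, 45.0))
```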
In the technical scheme, the planning module realizes the layout planning of the camera lens based on the envelope sphere scheme, and comprises two processes of pre-planning and dynamic planning;
(1) Pre-planning: intercept the acquisition range of the determined "huge point" on the map together with the map scale, collect the height information of the "huge point" within the acquisition area, and determine the range information of the "huge point" acquisition scene, including length, width and height data; select the maximum of these three values, construct a cube, and completely enclose the planned "huge point" within the cube; starting from the center of the cube, draw an outward line segment related to the shooting distance and the scene size, draw a horizontal circular plane and a vertical circular plane with this segment as radius, and determine the envelope sphere node positions with one of three schemes: the rotation method, the movement method or the hybrid method; after the envelope sphere node positions are determined, determine the coverage range and shooting parameters of each camera lens according to the principle of continuous and redundant surface coverage; when the shooting height is unsuitable, adjust and optimize continuously until an optimal shooting height is determined, fixing the final node scale, relative positions and shooting parameters; the envelope sphere node coordinates are mapped to the actual scene through a coordinate system conversion and output directly to the control module;
(2) Dynamic planning: acquire images according to the pre-planning result, and in the course of acquisition re-plan dynamically in real time according to the scene change information transmitted by the communication module and the parameter information of the camera lenses that have already shot.
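The pre-planning geometry can be sketched numerically. The patent only states that the radius segment is "related to the shooting distance and the scene size"; the concrete formula below (half the space diagonal of the covering cube plus a stand-off distance) is an illustrative assumption.

```python
import math

def envelope_radius(length, width, height, stand_off):
    """Radius for the horizontal and vertical circular planes: the cube built
    from the largest scene dimension must fit inside the envelope sphere, plus
    a shooting stand-off distance (the formula is an illustrative assumption)."""
    edge = max(length, width, height)          # edge of the covering cube
    half_diagonal = edge * math.sqrt(3) / 2.0  # distance from cube centre to a corner
    return half_diagonal + stand_off

# A 120 m x 80 m x 60 m "huge point" with a 50 m stand-off distance:
r = envelope_radius(120.0, 80.0, 60.0, 50.0)
```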
The inward three-dimensional scene acquisition virtual compound eye camera determines the number and positions of the camera lenses (miniature digital cameras) based on the shooting scene size and shooting height; the control module sends acquisition and shooting instructions under the control of the unified clock, and the acquisition module receives the instructions, performs the acquisition work and transmits the acquired data in real time. The control module connects to the wireless transmitting/receiving module of each miniature digital camera, applies unified data coding and database storage to the data collected by each camera, and performs shooting operations and outward data transmission wirelessly. The node positions of the inward three-dimensional scene acquisition virtual compound eye camera change with the scene size and shooting height, but the scene information is always fully enveloped.
Compared with the prior art, the invention has the following advantages:
1. The collected scene information and the constructed three-dimensional scene have the four properties of space-time consistency, real-time dynamics, pose perceptibility and random integrability.
2. The collected scene material information contains the kernel information needed to deduce and mark the "four shapes" of geographic position, landform, spatial distribution and geometric dimension.
3. The acquired information is not affected by the scene scale; no matter how large the scene is, envelope sphere planning can dynamically reconfigure the camera scale, acquisition direction and other parameters while always maintaining full-coverage acquisition of the "huge point" scene over 360 degrees horizontally plus 360 degrees vertically.
Drawings
FIG. 1 is a schematic view of inward virtual compound eye photography determined by the rotation method of the present invention.
FIG. 2 is a schematic view of inward virtual compound eye photography determined by the movement method of the present invention.
FIG. 3 is a schematic view of inward virtual compound eye photography determined by the hybrid method of the present invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings.
The embodiment provides an inward three-dimensional scene acquisition virtual compound eye camera which is formed by the cooperation of the following mutually independent functional modules through the internet of things.
The planning module is used for realizing the layout planning of the camera lens based on the envelope sphere scheme;
the acquisition module is formed by integrating a camera lens for acquiring images, a camera lens pose gyroscope, a GPS (global positioning system) positioner, a UWB (ultra-wideband) positioner and a wireless transmitter; the camera lens adjusts parameters according to the acquisition instruction, performs acquisition work, and transmits the image to the storage module by using the wireless transmitter; the pose gyroscope is used for sensing the pose of the lens relative to the huge points; the GPS positioner is used for receiving the GPS positioning signal and determining the global geographic coordinate of the lens; the UWB positioner is used for determining the accurate relative positioning between the camera lenses; the position relation of each camera lens can be transmitted to the planning module through the communication module;
the control module is composed of a control panel integrated with a Web server and a network control server based on a socket and wirelessly connected to a wireless network, and realizes the cooperative control of multiple lenses in the acquisition module;
the communication module is composed of a wireless transmitter based on a wireless network and used for transmitting acquisition instructions, the coordinate position of a camera, shooting parameters, image acquisition time and image information;
the storage module consists of an image acquisition card and a data storage card and is used for storing images, camera lens positions and shooting parameters; the storage module adds a series of marks such as a lens stamp, a time stamp, a pose stamp and a position stamp to the stored data to ensure that the data are not mixed;
and forming a dynamic scene acquisition system with layout cooperation, information cooperation, action cooperation, function cooperation and task cooperation by the functional modules through the Internet of things.
Scheme 1, the node positions of the envelope sphere are determined by using a rotation method, as shown in fig. 1.
(1) Once the radii of the horizontal circle and the vertical circle are determined, draw the horizontal circle and the vertical circle, rotate the horizontal circle about the horizontal axis in 45-degree steps, and rotate the vertical circle about the vertical axis in 45-degree steps; the intersection points formed by all the circle planes are the envelope sphere node positions. These node positions are initial positions determined by the shooting distance and the scene size; once the focal length is determined, the surface patch shot by each camera lens can be determined.
(2) Continuously adjust the shooting distance and the focal length according to the principle of continuous and redundant surface coverage until a set of optimal values of shooting distance and focal length is determined, so that the patches shot by all the lenses cover the whole "huge point"; the node positions fixed by this optimal solution follow from the scene size and the shooting distance.
(3) After the node positions are determined, the planned node positions are converted into camera lens coordinate positions through the coordinate conversion relation and input into the control module.
(4) When work starts, the lenses of all acquisition modules reach their designated positions and, under the unified clock, perform the acquisition work according to the instructions input by the control module: they acquire scene materials, transmit them, and transmit attitude information to the control module, forming a dynamic acquisition process.
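Steps (1) to (3) of the rotation method can be sketched by generating the node grid directly: rotating both great circles in 45-degree steps produces intersection points on a 45-degree azimuth/polar grid of the sphere. This is an interpretive sketch, not the patent's exact construction; with a 45-degree step it yields 26 distinct nodes after the poles are deduplicated.

```python
import math

def rotation_method_nodes(radius, step_deg=45):
    """Envelope-sphere node positions under the rotation method: the circle
    planes rotated in step_deg increments intersect the sphere on a grid of
    azimuth/polar angles; each grid point is one camera-lens node.
    Coordinates are rounded so the duplicated pole points collapse."""
    nodes = set()
    for az_deg in range(0, 360, step_deg):           # azimuth around the vertical axis
        for pol_deg in range(0, 181, step_deg):      # polar angle from the top pole
            az, pol = math.radians(az_deg), math.radians(pol_deg)
            x = radius * math.sin(pol) * math.cos(az)
            y = radius * math.sin(pol) * math.sin(az)
            z = radius * math.cos(pol)
            nodes.add((round(x, 6), round(y, 6), round(z, 6)))
    return sorted(nodes)

nodes = rotation_method_nodes(100.0)
```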
Scheme 2, the node positions of the envelope sphere are determined by using a moving method, as shown in fig. 2.
(1) After the radii of the horizontal circle and the vertical circle are determined, draw the horizontal circle plane and the vertical circle plane, translate the horizontal circle forward and backward in parallel by a certain distance to draw horizontal section circles, and translate the vertical circle up and down in parallel by a certain distance to draw vertical section circles; the intersection points of the horizontal and vertical section circles are the envelope sphere node positions. These node positions are initial positions determined by the shooting distance and the scene size; once the focal length is determined, the surface patch shot by each camera lens can be determined.
(2) Continuously adjust the shooting distance and the focal length according to the principle of continuous and redundant surface coverage until a set of optimal values of shooting distance and focal length is determined, so that the patches shot by all the lenses cover the whole "huge point"; the node positions fixed by this optimal solution follow from the scene size and the shooting distance.
(3) After the node positions are determined, the planned node positions are converted into camera lens coordinate positions through the coordinate conversion relation and input into the control module.
(4) When work starts, the lenses of all acquisition modules reach their designated positions and, under the unified clock, perform the acquisition work according to the instructions input by the control module: they acquire scene materials, transmit them, and transmit attitude information to the control module, forming a dynamic acquisition process.
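Step (1) of the movement method can be sketched as intersecting translated section planes with the envelope sphere. The reading below (horizontal section planes at heights zk, vertical section planes at offsets ym, nodes at x = +/- sqrt(R^2 - zk^2 - ym^2)) is one interpretation of the text, and the offset values are free design parameters, not figures from the patent.

```python
import math

def movement_method_nodes(radius, offsets):
    """Envelope-sphere nodes from translated circle planes: each pair of a
    horizontal section plane (z = zk) and a vertical section plane (y = ym)
    meets the sphere in up to two points x = +/- sqrt(R^2 - zk^2 - ym^2).
    The offset list is a free design parameter (an assumption here)."""
    nodes = []
    for zk in offsets:
        for ym in offsets:
            d2 = radius * radius - zk * zk - ym * ym
            if d2 < 0:
                continue  # these two section planes do not meet on the sphere
            x = math.sqrt(d2)
            nodes.append((x, ym, zk))
            if x > 0:
                nodes.append((-x, ym, zk))
    return nodes

nodes = movement_method_nodes(100.0, [-50.0, 0.0, 50.0])
```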
Scheme 3, the positions of the nodes of the envelope sphere are determined by a hybrid method, as shown in fig. 3.
(1) After the radii of the horizontal circle and the vertical circle are determined, draw the horizontal circle plane and the vertical circle plane, rotate the horizontal circle clockwise about the horizontal axis in successive 45-degree steps, and translate the vertical circle forward and backward in parallel by a certain distance to draw vertical section circles; the intersection points of the horizontal and vertical circles are the envelope sphere node positions. These node positions are initial positions determined by the shooting distance and the scene size; once the focal length is determined, the surface patch shot by each camera lens can be determined.
(2) Continuously adjust the shooting distance and the focal length according to the principle of continuous and redundant surface coverage until a set of optimal values of shooting distance and focal length is determined, so that the patches shot by all the lenses cover the whole "huge point"; the node positions fixed by this optimal solution follow from the scene size and the shooting distance.
(3) After the node positions are determined, the planned node positions are converted into camera lens coordinate positions through the coordinate conversion relation and input into the control module.
(4) When work starts, the lenses of all acquisition modules reach their designated positions and, under the unified clock, perform the acquisition work according to the instructions input by the control module: they acquire scene materials, transmit them, and transmit attitude information to the control module, forming a dynamic acquisition process.
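The "continuous and redundant surface coverage" criterion used in step (2) of all three schemes can be approximated with a footprint-overlap check: a lens at shooting distance d with field angle theta covers roughly a surface patch of radius d*tan(theta/2), and coverage is redundant when adjacent aim points are closer than one footprint diameter. This flat-patch model is an assumption for illustration, not a formula from the patent.

```python
import math

def footprint_radius(shooting_distance, field_angle_deg):
    """Approximate radius of the surface patch one lens covers (flat-patch model)."""
    return shooting_distance * math.tan(math.radians(field_angle_deg) / 2.0)

def coverage_is_redundant(node_spacing, shooting_distance, field_angle_deg):
    """Continuous-and-redundant coverage holds when neighbouring footprints
    overlap, i.e. the aim-point spacing is less than one footprint diameter."""
    return node_spacing < 2.0 * footprint_radius(shooting_distance, field_angle_deg)

# 50 m shooting distance, 30-degree field angle, aim points 10 m apart:
ok = coverage_is_redundant(10.0, 50.0, 30.0)
```

In the planning loop described above, shooting distance and focal length would be adjusted until this predicate holds for every pair of neighbouring nodes.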
Details not described in the present specification belong to the prior art known to those skilled in the art.
It will be understood by those skilled in the art that the foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the invention, such that any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included within the scope of the present invention.

Claims (4)

1. An inward three-dimensional scene acquisition virtual compound eye camera is characterized in that: the system is formed by the following mutually independent functional modules in a cooperative mode through the Internet of things;
the planning module is used for realizing the layout planning of the camera lens based on the envelope sphere scheme; the method specifically comprises two processes of pre-planning and dynamic planning;
(1) pre-planning: intercepting the acquisition range of the determined "huge point" on the map together with the map scale, collecting the height information of the "huge point" within the acquisition area, and determining the range information of the "huge point" acquisition scene, including length, width and height data; selecting the maximum of these three values, constructing a cube, and completely enclosing the planned "huge point" within the cube; starting from the center of the cube, drawing an outward line segment related to the shooting distance and the scene size, drawing a horizontal circular plane and a vertical circular plane with this segment as radius, and determining the envelope sphere node positions with one of three schemes: the rotation method, the movement method and the hybrid method; after the envelope sphere node positions are determined, determining the coverage range and shooting parameters of each camera lens according to the principle of continuous and redundant surface coverage; when the shooting height is unsuitable, adjusting and optimizing continuously until an optimal shooting height is determined, fixing the final node scale, relative positions and shooting parameters; the envelope sphere node coordinates are mapped to the actual scene through a coordinate system conversion and output directly to a control module;
(2) dynamic planning, namely acquiring images according to a pre-planning result, and dynamically planning in real time according to scene change information transmitted by a communication module and shot camera lens parameter information in the acquisition process;
the acquisition module is formed by integrating a camera lens for acquiring images, a camera lens pose gyroscope, a GPS (global positioning system) positioner, a UWB (ultra-wideband) positioner and a wireless transmitter; the camera lens adjusts parameters according to the acquisition instruction, performs acquisition work, and transmits the image to the storage module by using the wireless transmitter; the pose gyroscope is used for sensing the pose of the lens relative to the huge points; the GPS positioner is used for receiving the GPS positioning signal and determining the global geographic coordinate of the lens; the UWB positioner is used for determining the accurate relative positioning between the camera lenses; the position relation of each camera lens can be transmitted to the planning module through the communication module;
the control module is composed of a control panel integrated with a Web server and a network control server based on a socket and wirelessly connected to a wireless network, and realizes the cooperative control of multiple lenses in the acquisition module;
the communication module is composed of a wireless transmitter based on a wireless network and used for transmitting acquisition instructions, the coordinate position of a camera, shooting parameters, image acquisition time and image information;
the storage module consists of an image acquisition card and a data storage card and is used for storing images, camera lens positions and shooting parameters; the storage module adds a series of marks such as a lens stamp, a time stamp, a pose stamp and a position stamp to the stored data to ensure that the data are not mixed;
and forming a dynamic scene acquisition system with layout cooperation, information cooperation, action cooperation, function cooperation and task cooperation by the functional modules through the Internet of things.
2. The inward type three-dimensional scene capturing virtual compound eye camera as claimed in claim 1, wherein the node position of the envelope sphere is determined by using a rotation method:
(1) if the radius of the horizontal circle and the radius of the vertical circle are determined, drawing the horizontal circle and the vertical circle, rotating the horizontal circle around a horizontal axis by 45 degrees, rotating the vertical circle around a vertical axis by 45 degrees, wherein the intersection point formed by all the circle planes is the node position of the envelope sphere, the node position is an initial position determined according to the shooting distance and the scene size, and after the focal distance is determined, the plane shot by each camera lens can be determined;
(2) continuously adjusting the shooting distance and the focal distance according to the principle of covering continuity and redundancy of the surface to finally determine a group of optimal solutions about the shooting distance and the focal distance, so that planes shot by all the lenses can cover the whole macro point, and the node positions determined by the optimal solutions can be determined according to the scene size and the shooting distance;
(3) after the node position is determined, converting the node position obtained by planning into a coordinate position of a camera lens through a coordinate conversion relation and inputting the coordinate position into a control module;
(4) when operation begins, the lenses of all acquisition modules move to their designated positions according to the instructions input by the control module and capture under a unified clock, acquiring the scene material, transmitting it, and returning attitude information to the control module, thereby forming a dynamic acquisition process.
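Steps (1) and (2) of the rotation method can be sketched as the pairwise intersection of two families of great-circle planes. The axis conventions (horizontal circle = equator with normal along z, vertical circle = meridian with normal along x, rotation axes = x and z) and all function names are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def rotate(v, axis, deg):
    """Rotate vector v about a unit axis by deg degrees (Rodrigues formula)."""
    t = np.radians(deg)
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(t)
            + np.cross(axis, v) * np.sin(t)
            + axis * np.dot(axis, v) * (1 - np.cos(t)))

def rotation_method_nodes(radius, step_deg=45):
    """Nodes = intersections of two families of great circles on the
    envelope sphere: the horizontal circle rotated about the x-axis and
    the vertical circle rotated about the z-axis, both in step_deg steps."""
    x_axis, z_axis = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])
    # Each great circle is represented by its plane's unit normal.
    horiz = [rotate(z_axis, x_axis, k) for k in range(0, 180, step_deg)]
    vert = [rotate(x_axis, z_axis, k) for k in range(0, 180, step_deg)]
    nodes = set()
    for n1 in horiz:
        for n2 in vert:
            p = np.cross(n1, n2)
            if np.linalg.norm(p) < 1e-9:   # coincident planes: no unique point
                continue
            p = radius * p / np.linalg.norm(p)
            for q in (p, -p):              # both antipodal intersection points
                nodes.add(tuple(np.round(q, 6)))
    return sorted(nodes)
```

Every returned point lies on the envelope sphere, so each can be handed to the coordinate transformation of step (3) as a candidate lens position.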
3. The inward-type three-dimensional scene acquisition virtual compound eye camera as claimed in claim 1, wherein the node positions of the envelope sphere are determined by a moving method:
(1) after the radii of the horizontal circle and the vertical circle are determined, draw the horizontal circle plane and the vertical circle plane; translate the horizontal circle forwards and backwards by set distances to draw horizontal section circles, and translate the vertical circle upwards and downwards by set distances to draw vertical section circles; the intersection points of the horizontal and vertical section circles are the node positions of the envelope sphere; these node positions are initial positions determined by the shooting distance and the scene size, and once the focal distance is determined, the plane captured by each camera lens can be determined;
(2) continuously adjust the shooting distance and the focal distance according to the principles of surface coverage continuity and redundancy until an optimal pair of shooting distance and focal distance is determined, such that the planes captured by all the lenses cover the entire macro point; the node positions given by this optimal solution then follow from the scene size and the shooting distance;
(3) after the node positions are determined, convert the planned node positions into camera lens coordinate positions through a coordinate transformation and input them into the control module;
(4) when operation begins, the lenses of all acquisition modules move to their designated positions according to the instructions input by the control module and capture under a unified clock, acquiring the scene material, transmitting it, and returning attitude information to the control module, thereby forming a dynamic acquisition process.
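A sketch of the moving method, assuming the parallel shifts produce section circles in planes z = a (horizontal) and y = b (vertical) on a sphere of the given radius; these axis conventions and the function name are illustrative assumptions, not from the patent:

```python
import numpy as np
from itertools import product

def moving_method_nodes(radius, offsets):
    """Nodes = intersections of horizontal section circles (planes z = a)
    and vertical section circles (planes y = b) on the envelope sphere;
    offsets is the list of parallel-shift distances."""
    nodes = []
    for a, b in product(offsets, repeat=2):
        rem = radius**2 - a**2 - b**2
        if rem < 0:                 # the two section circles do not meet
            continue
        x = np.sqrt(rem)            # remaining coordinate on the sphere
        nodes.append((x, b, a))
        if x > 1e-9:                # mirror point, unless the circles are tangent
            nodes.append((-x, b, a))
    return nodes
```

Larger offset lists give a denser node grid; the optimal shift distances would come out of the coverage adjustment in step (2).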
4. The inward-type three-dimensional scene acquisition virtual compound eye camera as claimed in claim 1, wherein the node positions of the envelope sphere are determined by a hybrid method:
(1) after the radii of the horizontal circle and the vertical circle are determined, draw the horizontal circle plane and the vertical circle plane; rotate the horizontal circle clockwise about a horizontal axis in successive 45° steps, and translate the vertical circle forwards and backwards by set distances to draw vertical section circles; the intersection points of the horizontal circles and the vertical section circles are the node positions of the envelope sphere; these node positions are initial positions determined by the shooting distance and the scene size, and once the focal distance is determined, the plane captured by each camera lens can be determined;
(2) continuously adjust the shooting distance and the focal distance according to the principles of surface coverage continuity and redundancy until an optimal pair of shooting distance and focal distance is determined, such that the planes captured by all the lenses cover the entire macro point; the node positions given by this optimal solution then follow from the scene size and the shooting distance;
(3) after the node positions are determined, convert the planned node positions into camera lens coordinate positions through a coordinate transformation and input them into the control module;
(4) when operation begins, the lenses of all acquisition modules move to their designated positions according to the instructions input by the control module and capture under a unified clock, acquiring the scene material, transmitting it, and returning attitude information to the control module, thereby forming a dynamic acquisition process.
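A sketch of the hybrid method, intersecting rotated horizontal great circles (as in the rotation method) with parallel-shifted vertical section circles (as in the moving method). The axis conventions, the 45° default, and the offsets are assumptions for illustration only:

```python
import numpy as np

def hybrid_method_nodes(radius, tilt_step=45, offsets=(-0.5, 0.0, 0.5)):
    """Nodes = intersections of the horizontal great circle rotated about
    the x-axis in tilt_step-degree steps with vertical section circles in
    planes y = b, one for each parallel-shift offset b."""
    nodes = []
    for k in range(0, 180, tilt_step):
        t = np.radians(k)
        n = np.array([0.0, -np.sin(t), np.cos(t)])  # rotated plane normal
        for b in offsets:
            rho = radius**2 - b**2
            if rho <= 0:
                continue
            rho = np.sqrt(rho)                      # section-circle radius
            # Solve n . p = 0 for p(theta) = (rho cos t, b, rho sin t):
            # A cos(theta) + B sin(theta) = C.
            A, B, C = n[0] * rho, n[2] * rho, -n[1] * b
            R = np.hypot(A, B)
            if R < 1e-12 or abs(C) > R:
                continue                            # circles do not intersect
            phi = np.arctan2(B, A)
            for s in (1.0, -1.0):                   # the two solutions
                th = phi + s * np.arccos(C / R)
                nodes.append((rho * np.cos(th), b, rho * np.sin(th)))
    return nodes
```

Mixing the two circle families lets the rotation spacing and the shift distances be tuned independently during the coverage adjustment of step (2).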
CN201911304369.0A 2019-12-17 2019-12-17 Inward type three-dimensional scene acquisition virtual compound eye camera Active CN111031259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911304369.0A CN111031259B (en) 2019-12-17 2019-12-17 Inward type three-dimensional scene acquisition virtual compound eye camera


Publications (2)

Publication Number Publication Date
CN111031259A CN111031259A (en) 2020-04-17
CN111031259B true CN111031259B (en) 2021-01-19

Family

ID=70210284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911304369.0A Active CN111031259B (en) 2019-12-17 2019-12-17 Inward type three-dimensional scene acquisition virtual compound eye camera

Country Status (1)

Country Link
CN (1) CN111031259B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586304B (en) * 2020-05-25 2021-09-14 重庆忽米网络科技有限公司 Panoramic camera system and method based on 5G and VR technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205787581U * 2016-05-27 2016-12-07 Wuhan University of Technology A binocular camera for outward-facing three-dimensional static digital scene construction
CN109118585A * 2018-08-01 2019-01-01 Wuhan University of Technology A virtual compound eye camera system meeting spatio-temporal consistency for three-dimensional scene construction and acquisition, and its working method
CN110381306A * 2019-07-23 2019-10-25 Shenzhen Mobile Internet Research Institute Co., Ltd. A spherical three-dimensional panoramic imaging system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101713772B1 (en) * 2012-02-06 2017-03-09 한국전자통신연구원 Apparatus and method for pre-visualization image


Also Published As

Publication number Publication date
CN111031259A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN110310248B Real-time stitching method and system for unmanned aerial vehicle remote-sensing images
CN103874193B Method and system for mobile terminal positioning
CN102072725B Spatial three-dimensional (3D) measurement method based on laser point clouds and digital measurable images
CN107504957A Method for rapid three-dimensional terrain model construction using multi-view unmanned aerial vehicle photography
CN108168521A Method for three-dimensional landscape visualization based on unmanned aerial vehicles
CN104330074A Intelligent surveying and mapping platform and implementation method thereof
Yang et al. A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras
Brutto et al. UAV systems for photogrammetric data acquisition of archaeological sites
CN112113542A Method for inspection and acceptance of special land data in unmanned aerial vehicle aerial-photography construction
CN111192362B Working method of a virtual compound eye system for real-time acquisition of dynamic three-dimensional geographic scenes
CN109903382B Point cloud data fusion method and device
CN106525007B Distributed interactive surveying and mapping general-purpose robot
CN113282108A Method for rapid and accurate acquisition of low-altitude remote-sensing images based on unmanned aerial vehicle technology
CN102831816B Device for providing a real-time scene graph
CN112469967B Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
JP4418857B1 Image acquisition system for generating 3D video of routes
CN111031259B Inward type three-dimensional scene acquisition virtual compound eye camera
CN116883604A Three-dimensional modeling method based on space, aerial, and ground images
Koeva 3D modelling and interactive web-based visualization of cultural heritage objects
Chen et al. Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles
Singh et al. Application of UAV swarm semi-autonomous system for the linear photogrammetric survey
CN108364340A Method and system for synchronous spatial scanning
CN108195359A Method and system for spatial data acquisition
KR20210037998A Method of providing drone route
CN100458560C Method for generating a spherical panorama based on full-frame images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant