CN110969696A - Method and system for three-dimensional modeling rapid space reconstruction - Google Patents

Method and system for three-dimensional modeling rapid space reconstruction

Info

Publication number
CN110969696A
Authority
CN
China
Prior art keywords
fisheye
dimensional modeling
dimensional
spatial reconstruction
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911318191.5A
Other languages
Chinese (zh)
Inventor
崔岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Siwei Shidai Intelligent Technology Co ltd
China Germany Artificial Intelligence Institute Co ltd
Original Assignee
Wuhu Siwei Shidai Intelligent Technology Co ltd
China Germany Artificial Intelligence Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhu Siwei Shidai Intelligent Technology Co ltd, China Germany Artificial Intelligence Institute Co ltd filed Critical Wuhu Siwei Shidai Intelligent Technology Co ltd
Priority to CN201911318191.5A priority Critical patent/CN110969696A/en
Publication of CN110969696A publication Critical patent/CN110969696A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/047 Fisheye or wide-angle transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

The invention relates to a method and a system for rapid spatial reconstruction in three-dimensional modeling, characterized by the following steps: S1, capturing a dome image of the scene with a plurality of fisheye lenses to obtain a plurality of fisheye images of the scene; S2, removing the distortion of the fisheye images and unwrapping them into panoramic form; S3, stitching the unwrapped fisheye images into a seamless 360-degree panorama and generating a three-dimensional model through deep learning. The system comprises a dome camera composed of a plurality of fisheye lenses, a control module, and a cloud. By seamlessly stitching multiple photographs taken simultaneously from the same reference point, a photo-realistic model is reconstructed directly from the images, with low cost, high efficiency, and low labor intensity.

Description

Method and system for three-dimensional modeling rapid space reconstruction
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a method and a system for rapid spatial reconstruction of three-dimensional modeling.
Background
Currently, there are two main approaches to modeling indoor scenes. The first builds a three-dimensional model with modeling software such as CAD or 3ds Max, constructing the model from basic geometric elements through a series of interactive operations. This approach typically relies on geometric interactive modeling assisted by data from indoor construction drawings, CAD files, and height measurements. It is inexpensive, mature, and widely used, but because indoor scenes differ in shape and structure, interactive reconstruction is time-consuming and demands considerable professional skill from the operator. The second approach acquires three-dimensional data with instruments, which can be classified into optical, ultrasonic, and electromagnetic measurement. Optical measurement is the most widely applied: a laser scanner, for example, directly acquires the three-dimensional spatial coordinates and color information of each sampled point of the indoor scene. Although this method has produced many research results, its high data-acquisition cost and demanding operating conditions have prevented rapid adoption by the general public.
Disclosure of Invention
To address the defects and shortcomings of the prior art, the invention provides a method and a system for rapid spatial reconstruction in three-dimensional modeling that can complete three-dimensional spatial reconstruction quickly.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a method for three-dimensional modeling rapid spatial reconstruction is characterized by comprising the following steps:
s1, acquiring a dome screen image of the scene by using the plurality of fisheye lenses to obtain a plurality of fisheye diagrams of the scene;
s2, distortion of the multiple fish-eye patterns is removed, and the fish-eye patterns are expanded into a panoramic view;
s3, utilizing deep learning to realize optical flow estimation, splicing a plurality of fisheye expansion images to obtain a 360-degree seamless panoramic image, and generating a three-dimensional space model;
further, in step S3, the optical flow estimation is realized by using the deep learning End-to-End network model flownet2.0, and a plurality of fisheye expansion maps are spliced to obtain a 360-degree seamless panoramic view, that is, a three-dimensional space model is generated.
Further, in step S2, the distortion removal of the fisheye images uses an image distortion-correction algorithm: MATLAB calibration is performed first, followed by OpenCV processing.
Further, in step S3, the unwrapped fisheye images are seamlessly stitched from multiple pictures taken at different angles in the same shooting space.
A system for three-dimensional modeling with fast spatial reconstruction, comprising:
the dome camera, composed of a plurality of fisheye lenses, for acquiring a plurality of fisheye images of a shooting scene;
the control module, connected to each fisheye lens, for receiving the fisheye images captured by the lenses, transmitting the image data to a cloud, and controlling image acquisition, camera settings, and camera management for each lens;
and the cloud, for receiving the fisheye image information acquired by the plurality of fisheye lenses, performing three-dimensional modeling on the acquired images, and generating a three-dimensional spatial model.
Further, the control module comprises a processor connected to the controllers of the plurality of fisheye lenses; the processor is also connected to a wireless communication module, through which it communicates with the cloud server.
Further, the system also comprises a human-computer interaction module connected to the processor, used for inputting control information, operating the control page, and displaying operating information.
Further, the system comprises a display system for presenting the spatial three-dimensional model generated by the cloud.
Further, a deep-learning end-to-end FlowNet2.0 network model system is integrated in the cloud; it performs optical flow estimation on the captured fisheye images to complete image stitching, producing a seamless 360-degree panorama and generating a three-dimensional spatial model.
Further, the cloud is also used to remove distortion from the received fisheye images, unwrap them into panoramic form, and stitch them into a seamless 360-degree panorama to generate a three-dimensional model.
The technical scheme has the following beneficial effects: it provides a powerful tool for analyzing and extracting geometric information and for building the model; the processed images, especially ground-level close-range images, capture the geometric detail of building and object surfaces with realistic texture information.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the system of the present invention.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As shown in fig. 1, a method for three-dimensional modeling fast spatial reconstruction includes the steps of:
s1, acquiring a dome screen image of the scene by using the plurality of fisheye lenses to obtain a plurality of fisheye diagrams of the scene; s2, carrying out distortion removal on the multiple fisheye images, and unfolding the fisheye images into a panoramic image, wherein the distortion removal on the multiple fisheye images is realized by adopting an image distortion correction algorithm, MATLAB calibration is carried out firstly, and OPENCV processing is carried out again, specifically, simple radial distortion and tangential distortion calibration is carried out on images shot by a video camera through MATLAB, and three radial distortion parameters k1, k2 and k3 and two tangential distortion parameters p1 and p2 of the video camera are obtained. These five parameters are used for openCV image correction, openCV takes into account radial and tangential factors for distortion.
Wherein the radial distortion model, derived from a Taylor-series expansion (OpenCV truncates the series after the third term; the trailing "+ …" denotes the omitted higher-order terms), with r² = x² + y², where (x, y) are the ideal (undistorted) coordinates, (x′, y′) are the real (distorted) coordinates, and r is the distance of the point from the imaging center, is:
δxr = x(k1r² + k2r⁴ + k3r⁶ + …)
δyr = y(k1r² + k2r⁴ + k3r⁶ + …)
The tangential distortion model is:
δxd = 2p1xy + p2(r² + 2x²) + …
δyd = p1(r² + 2y²) + 2p2xy + …
The ideal coordinates (x, y) and the real coordinates (x′, y′) are related by:
x′ = x + δxr + δxd
y′ = y + δyr + δyd
Combining the two models gives:
x′ = x(1 + k1r² + k2r⁴ + k3r⁶) + 2p1xy + p2(r² + 2x²)
y′ = y(1 + k1r² + k2r⁴ + k3r⁶) + p1(r² + 2y²) + 2p2xy
The distortion model above has five parameters: k1, k2, k3, p1, and p2. For a camera of good quality, the tangential distortion is small and can be neglected, and the radial coefficient k3 can also be ignored, so only the two parameters k1 and k2 need to be computed.
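As a concreteness check, the distortion model above can be evaluated directly. The following is a minimal pure-Python sketch; the function name and test values are illustrative, not from the patent. Note that OpenCV's calibration and undistortion routines expect the same five coefficients in the order (k1, k2, p1, p2, k3).

```python
def distort_point(x, y, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion model above to the
    ideal normalized coordinates (x, y), returning the real
    (distorted) coordinates (x', y')."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3  # truncated Taylor series
    dx = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x + dx, y + dy

# A point on the optical axis is unaffected by distortion.
print(distort_point(0.0, 0.0, 0.1, 0.01, 0.001, 1e-4, 1e-4))  # → (0.0, 0.0)
```

With p1 = p2 = k3 = 0 this reduces to the two-parameter radial model the paragraph above recommends for good-quality cameras.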
S3, optical flow estimation is performed with the deep-learning end-to-end network model FlowNet2.0, and the unwrapped fisheye images are stitched; the stitching seamlessly merges multiple pictures taken at different angles in the same shooting space, yielding a seamless 360-degree panorama from which the three-dimensional model is generated.
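FlowNet2.0 is a large trained network, so a faithful reproduction is out of scope here. The sketch below shows only the final blending stage, under the simplifying assumption that the overlapping strips are already flow-aligned; a linear cross-fade across the seam is a classical stand-in for the flow-guided blending described above, and the names are illustrative.

```python
import numpy as np

def blend_pair(left, right, overlap):
    """Merge two horizontally adjacent image strips whose last/first
    `overlap` columns show the same content, cross-fading linearly
    across the seam to hide the join."""
    w = np.linspace(0.0, 1.0, overlap)  # 0 → keep left, 1 → keep right
    seam = left[:, -overlap:] * (1.0 - w) + right[:, :overlap] * w
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Two 4x10 strips with a 4-column overlap stitch into a 4x16 strip.
pano = blend_pair(np.ones((4, 10)), np.zeros((4, 10)), 4)
print(pano.shape)  # → (4, 16)
```

In the pipeline the patent describes, the per-pixel flow estimated by the network would first warp the overlap regions into alignment; only then would a blend of this kind hide the seam.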
As shown in fig. 2, a system for three-dimensional modeling fast spatial reconstruction includes:
the system comprises a first fisheye lens, a second fisheye lens, a third fisheye lens, a fourth fisheye lens and a fourth fisheye lens, wherein N is a dome camera formed by at least two fisheye lenses and used for acquiring a plurality of fisheye diagrams of a shooting scene; specifically, two or more fisheye lenses are arranged on the same support, and if two fisheye lenses are arranged on the same support, the two fisheye lenses can be bound on the support back to form the dome camera.
The control module is used to receive the images captured by the fisheye lenses, transmit the image data to the cloud, and control image acquisition, camera settings, and camera management for each lens. It comprises a processor connected to the controllers of the plurality of fisheye lenses; the processor is also connected to a wireless communication module, through which it communicates with the cloud server.
The three-dimensional modeling rapid space reconstruction system further comprises an input module connected with the processor, and the input module is used for inputting control information.
The three-dimensional modeling rapid spatial reconstruction system further comprises a display module connected with the processor.
The three-dimensional modeling rapid spatial reconstruction system further comprises a storage unit connected with the processor.
The input module and the display module may be a single touch-screen display or a separate keyboard and display. The display shows the corresponding control and selection menus; each menu corresponds to program modules and instructions stored in the processor, and different menus correspond to different function modules, such as start-up, camera connection, network connection, and file management.
In addition, the fisheye lenses can be connected to the processor in a wired or wireless manner; the processor receives the image information collected by the lenses and then transmits it to the cloud.
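The control module's role as a relay between the lenses and the cloud can be sketched as follows. `lenses` and `uploader` are hypothetical stand-ins for the camera controllers and the wireless link, neither of which the patent specifies at an API level.

```python
class ControlModule:
    """Polls each fisheye lens and forwards the captured frames to the
    cloud in one batch; also the natural home for the camera-settings
    and management commands described above (omitted here)."""

    def __init__(self, lenses, uploader):
        self.lenses = lenses      # callables returning one fisheye image each
        self.uploader = uploader  # callable sending a frame batch to the cloud

    def acquire_and_forward(self):
        frames = [capture() for capture in self.lenses]
        self.uploader(frames)
        return len(frames)

# Two dummy lenses and an in-memory "cloud" stand-in.
sent = []
module = ControlModule([lambda: "img-a", lambda: "img-b"], sent.append)
print(module.acquire_and_forward())  # → 2
```

Because the lenses and the uplink are injected as callables, the same relay logic works whether the lenses are wired or wireless, matching the paragraph above.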
The cloud receives the image information acquired by the plurality of fisheye lenses, performs three-dimensional modeling on the acquired fisheye images, and generates a three-dimensional spatial model.
Specifically, a deep-learning end-to-end FlowNet2.0 network model system is integrated in the cloud; it performs optical flow estimation on the captured fisheye images to complete image stitching, producing a seamless 360-degree panorama and generating a three-dimensional spatial model.
The cloud also comprises an image de-distortion module, which removes distortion from the received fisheye images, unwraps them into panoramic form, and stitches them into a seamless 360-degree panorama to generate the three-dimensional model.
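The fisheye unwrapping performed in the cloud can be sketched with pure numpy. The sketch assumes an ideal equidistant fisheye (r = f·θ, a common model the patent does not actually specify) and uses nearest-neighbour sampling for brevity; a production pipeline would instead use the calibrated distortion coefficients and interpolated remapping (e.g. OpenCV's `remap`).

```python
import numpy as np

def unwrap_fisheye(img, out_h, out_w, fov_deg=180.0):
    """Map a square fisheye image (ideal equidistant model r = f*theta)
    onto an equirectangular grid covering one hemisphere."""
    h, w = img.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    f = (min(h, w) / 2.0) / np.radians(fov_deg / 2.0)  # pixels per radian

    lon, lat = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, out_w),
                           np.linspace(-np.pi / 2, np.pi / 2, out_h))
    # Unit ray for every output pixel; z is the optical axis.
    x, y, z = np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    r = f * theta                             # equidistant projection

    u = np.clip(np.rint(cx + r * np.cos(phi)).astype(int), 0, w - 1)
    v = np.clip(np.rint(cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return img[v, u]  # nearest-neighbour lookup

pano = unwrap_fisheye(np.zeros((64, 64)), 32, 64)
print(pano.shape)  # → (32, 64)
```

Two such hemispheres, one per back-to-back lens, would then be stitched along their shared boundary to form the full 360-degree panorama described above.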
The system further comprises a display system for presenting the spatial three-dimensional model generated by the cloud; it can be implemented with a display screen connected to the cloud server.
The above embodiments merely represent specific embodiments of the present invention; their description is detailed, but this should not be understood as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, and all such changes and modifications fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for three-dimensional modeling rapid spatial reconstruction is characterized by comprising the following steps:
s1, acquiring a dome screen image of the scene by using the plurality of fisheye lenses to obtain a plurality of fisheye diagrams of the scene;
s2, distortion of the multiple fish-eye patterns is removed, and the fish-eye patterns are expanded into a panoramic view;
and S3, utilizing deep learning to realize optical flow estimation, splicing the plurality of fisheye expansion images to obtain a 360-degree seamless panoramic image, and generating a three-dimensional space model.
2. The method for three-dimensional modeling rapid spatial reconstruction of claim 1, wherein: in step S3, the optical flow estimation is performed with the deep-learning end-to-end network model FlowNet2.0, and the unwrapped fisheye images are stitched into a seamless 360-degree panorama, from which the three-dimensional spatial model is generated.
3. The method for three-dimensional modeling rapid spatial reconstruction of claim 1, wherein: in step S2, the distortion removal of the fisheye images uses an image distortion-correction algorithm, with MATLAB calibration performed first and OpenCV processing performed afterwards.
4. A method for three-dimensional modeling fast spatial reconstruction as claimed in any one of claims 1-3, wherein: in step S3, the unwrapped fisheye images are seamlessly stitched from multiple pictures taken at different angles in the same shooting space.
5. A system for three-dimensional modeling with fast spatial reconstruction, comprising:
the dome camera, composed of a plurality of fisheye lenses, for acquiring a plurality of fisheye images of a shooting scene;
the control module, connected to each fisheye lens, for receiving the fisheye images captured by the lenses, transmitting the image data to a cloud, and controlling image acquisition, camera settings, and camera management for each lens;
and the cloud, for receiving the fisheye image information acquired by the plurality of fisheye lenses, performing three-dimensional modeling on the acquired images, and generating a three-dimensional spatial model.
6. The system for three-dimensional modeling rapid spatial reconstruction of claim 5, wherein: the control module comprises a processor, the processor is respectively connected with the controllers of the plurality of fisheye lenses, and the processor is further connected with a wireless communication module and is in communication connection with the cloud server through the wireless communication module.
7. The system for three-dimensional modeling rapid spatial reconstruction of claim 6, wherein: the system further comprises a human-computer interaction module connected to the processor, for inputting control information, operating the control page, and displaying operating information.
8. The system for three-dimensional modeling rapid spatial reconstruction of claim 5, wherein: the system further comprises a display system for presenting the spatial three-dimensional model generated by the cloud.
9. The system for three-dimensional modeling rapid spatial reconstruction of claim 5, wherein: a deep-learning end-to-end FlowNet2.0 network model system is integrated in the cloud and performs optical flow estimation on the acquired fisheye images to complete image stitching, producing a seamless 360-degree panorama and generating a three-dimensional spatial model.
10. The system for three-dimensional modeling rapid spatial reconstruction of claim 5, wherein: the cloud is further used to remove distortion from the received fisheye images, unwrap them into panoramic form, and stitch them into a seamless 360-degree panorama to generate a three-dimensional model.
CN201911318191.5A 2019-12-19 2019-12-19 Method and system for three-dimensional modeling rapid space reconstruction Pending CN110969696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911318191.5A CN110969696A (en) 2019-12-19 2019-12-19 Method and system for three-dimensional modeling rapid space reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911318191.5A CN110969696A (en) 2019-12-19 2019-12-19 Method and system for three-dimensional modeling rapid space reconstruction

Publications (1)

Publication Number Publication Date
CN110969696A true CN110969696A (en) 2020-04-07

Family

ID=70035270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911318191.5A Pending CN110969696A (en) 2019-12-19 2019-12-19 Method and system for three-dimensional modeling rapid space reconstruction

Country Status (1)

Country Link
CN (1) CN110969696A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200454A (en) * 2014-05-26 2014-12-10 深圳市中瀛鑫科技股份有限公司 Fisheye image distortion correction method and device
CN106357966A (en) * 2016-11-01 2017-01-25 乐视控股(北京)有限公司 Panoramic image photographing device and panoramic image acquiring method
CN106713755A (en) * 2016-12-29 2017-05-24 北京疯景科技有限公司 Method and apparatus for processing panoramic image
CN107507228A * 2017-06-15 2017-12-22 清华大学 A 3D vision generation method based on optical flow
CN107274337A * 2017-06-20 2017-10-20 长沙全度影像科技有限公司 An image stitching method based on improved optical flow
CN107437273A * 2017-09-06 2017-12-05 深圳岚锋创视网络科技有限公司 Six-degree-of-freedom three-dimensional reconstruction method and system for virtual reality, and portable terminal
CN107610045A * 2017-09-20 2018-01-19 北京维境视讯信息技术有限公司 Luminance compensation method, device, equipment and storage medium for fisheye image stitching
CN109102537A * 2018-06-25 2018-12-28 中德人工智能研究院有限公司 A three-dimensional modeling method and system combining lidar and a dome camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Zhigang; Xia Hanzhu: "A panorama unwrapping algorithm based on fisheye images", no. 07 *
Han Wei: "Virtual Reality Technology: A Basic Course in Live-Shot VR Panoramas", 31 October 2019, Communication University of China Press, pages: 79 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129346A (en) * 2021-04-22 2021-07-16 北京房江湖科技有限公司 Depth information acquisition method and device, electronic equipment and storage medium
CN114565661A (en) * 2022-01-20 2022-05-31 华能汕头海门发电有限责任公司 Coal inventory system based on image acquisition
US20230281913A1 (en) * 2022-03-01 2023-09-07 Google Llc Radiance Fields for Three-Dimensional Reconstruction and Novel View Synthesis in Large-Scale Environments
CN115086629A (en) * 2022-06-10 2022-09-20 谭健 Sphere multi-lens real-time panoramic three-dimensional imaging system
CN115086629B (en) * 2022-06-10 2024-02-27 谭健 Real-time panoramic three-dimensional imaging system with multiple spherical lenses

Similar Documents

Publication Publication Date Title
CN110969696A (en) Method and system for three-dimensional modeling rapid space reconstruction
CN110335343B (en) Human body three-dimensional reconstruction method and device based on RGBD single-view-angle image
KR101841668B1 (en) Apparatus and method for producing 3D model
CN106097425A (en) Power equipment information retrieval based on augmented reality and methods of exhibiting and system
TWI451358B (en) Banana codec
CN108594999B (en) Control method and device for panoramic image display system
KR101181967B1 (en) 3D street view system using identification information.
CN107103626A (en) A kind of scene reconstruction method based on smart mobile phone
CN111932664A (en) Image rendering method and device, electronic equipment and storage medium
CN111292427B (en) Bone displacement information acquisition method, device, equipment and storage medium
CN112802208B (en) Three-dimensional visualization method and device in terminal building
JP2019045991A (en) Generation device, generation method and program
CN106023307A (en) Three-dimensional model rapid reconstruction method and system based on field environment
CN109788270A (en) 3D-360 degree panorama image generation method and device
CN114782646A (en) House model modeling method and device, electronic equipment and readable storage medium
CN111563961A (en) Three-dimensional modeling method and related device for transformer substation
JP2021152935A (en) Information visualization system, information visualization method, and program
CN114693782A (en) Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system
Sharif et al. 3D documenatation of the petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry
CN103873773A (en) Primary-auxiliary synergy double light path design-based omnidirectional imaging method
CN116486018A (en) Three-dimensional reconstruction method, apparatus and storage medium
CN114339029B (en) Shooting method and device and electronic equipment
CN110191284A (en) Method, apparatus, electronic equipment and the storage medium of data acquisition are carried out to house
CN114898068A (en) Three-dimensional modeling method, device, equipment and storage medium
CN114663599A (en) Human body surface reconstruction method and system based on multiple views

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination