CN111652967B - Three-dimensional reconstruction system and method based on front-back fusion imaging - Google Patents

Three-dimensional reconstruction system and method based on front-back fusion imaging

Info

Publication number
CN111652967B
CN111652967B (application CN202010413235.9A)
Authority
CN
China
Prior art keywords
module
dimensional reconstruction
fusion imaging
dimensional
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010413235.9A
Other languages
Chinese (zh)
Other versions
CN111652967A (en)
Inventor
王嘉辉
孙梓瀚
郭祥
江灏
蔡志岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202010413235.9A priority Critical patent/CN111652967B/en
Publication of CN111652967A publication Critical patent/CN111652967A/en
Application granted granted Critical
Publication of CN111652967B publication Critical patent/CN111652967B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional reconstruction system based on front-back fusion imaging, comprising a frame body, a fusion imaging module, and a three-dimensional reconstruction module. The frame body is provided with a bearing space for placing an article. The fusion imaging module shoots a foreground image and a background image of the article; it is arranged on the inner side of the frame body and connected with the frame body. The three-dimensional reconstruction module is electrically connected with the fusion imaging module and obtains a three-dimensional model of the article by combining its foreground and background images. The invention also discloses a three-dimensional reconstruction method based on front-back fusion imaging, comprising the following steps: S1: placing a calibration object in the bearing space, and adjusting the internal and external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object; S2: taking out the calibration object, placing a target object in its position, and forming a three-dimensional model of the object with the adjusted three-dimensional reconstruction system.

Description

Three-dimensional reconstruction system and method based on front-back fusion imaging
Technical Field
The invention relates to the field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction system and method based on front-back fusion imaging.
Background
In recent years, with the rapid development of computer vision technology, three-dimensional reconstruction has become an important branch of the field. It is widely applied in 3D printing, medical technology, remote sensing, and other areas, and has attracted considerable attention.
Current three-dimensional reconstruction techniques can be broadly divided into depth-camera-based, time-based, and space-based reconstruction. Depth-camera-based reconstruction, also called active reconstruction, relies on an auxiliary energy source such as laser or sound waves: acquisition equipment measures the reflected intensity of the energy source to extract depth information. Depth camera devices on the market, such as the Kinect, operate on this principle. Time-based and space-based reconstruction require only ordinary cameras, so their hardware requirements are relatively low. Time-based reconstruction acquires parallax images from the inter-frame parallax generated as a single camera moves, then performs inter-frame matching and reconstruction. This allows a single camera to reconstruct an object in three dimensions, but it requires a relative position change between the camera and the object, the motion parameters must be recorded for reconstruction, and certain shooting-motion conditions must be met. Reconstructing a larger surface demands a larger range of movement and more frames for the computation, which increases both the complexity of shooting and the amount of information to process. Space-based reconstruction instead uses multiple cameras to shoot the object from different angles, obtaining a group of pictures with parallax; combined with the positional relations of the cameras, a transformation matrix from two-dimensional image coordinates to three-dimensional coordinates is constructed to reconstruct the object's surface.
The main problem with the space-based approach is the trade-off between camera shooting angle and reconstructed surface range. If fewer cameras are used to save cost, for example a binocular vision system built from just two cameras, the surface area captured is limited and the reconstruction range of a single image group is small. If a larger range is to be reconstructed, more camera positions are needed to capture surfaces at different angles, raising equipment cost and the number of pictures to process. In summary, both current time-based and space-based reconstruction methods suffer from high operational complexity and a limited reconstructed surface range.
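The binocular parallax principle underlying these methods reduces, for a rectified stereo pair, to the relation Z = f·B/d (depth from focal length, baseline, and disparity). A minimal sketch of that relation; the focal length, baseline, and disparity figures below are hypothetical, chosen only for illustration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth relation: Z = f * B / d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 6 cm baseline,
# a feature matched with 40 px of disparity between the two views.
z = depth_from_disparity(40.0, 800.0, 0.06)
print(z)  # 1.2 (metres); closer objects show larger disparity
```

This also illustrates why a two-camera system only covers the surface both cameras see: a point must appear in both views to have a disparity at all.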
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a three-dimensional reconstruction system and method based on front-back fusion imaging. With the invention, a three-dimensional model of the target object can be obtained with a single imaging pass, without moving the object or the shooting equipment.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a three-dimensional reconstruction system based on front-back fusion imaging is used for forming a three-dimensional model of an article, and comprises a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein,
the frame body is provided with a bearing space, and the bearing space is used for placing articles;
the fusion imaging module shoots a foreground image of an article and a background image of the article, is arranged on the inner side of the frame body and is connected with the frame body;
the three-dimensional reconstruction module is electrically connected with the fusion imaging module, and obtains a three-dimensional model of the article by combining the foreground image and the background image of the article.
With this arrangement, a three-dimensional model of the target object can be obtained with a single imaging pass, without moving the object or the shooting equipment.
In a preferred embodiment, the frame is made of a light-impermeable material.
In this preferred scheme, the opaque material prevents ambient light from entering the bearing space and interfering with image capture by the fusion imaging module.
In a preferred scheme, the three-dimensional reconstruction system further comprises a light supplementing module, wherein the light supplementing module is used for supplementing light, the light supplementing module is arranged on the inner side of the frame body, and the light supplementing module is connected with the frame body.
In this preferred scheme, the light supplementing module compensates for insufficient light intensity in the bearing space.
In a preferred embodiment, the brightness of the light compensating module is adjustable.
In a preferred scheme, the light supplementing module is a light source with high color rendering index.
In this preferred scheme, a light source with a high color rendering index improves how faithfully the captured images, and hence the reconstruction, reproduce the scene.
In a preferred embodiment, the inner side of the frame is coated with a non-reflective material.
In this preferred scheme, reflections produced while the light supplementing module operates could interfere with image capture by the fusion imaging module. Laying a non-reflective material on the inner side of the frame body therefore eliminates the reflection problem caused by the light supplementing module.
In a preferred embodiment, the fusion imaging module includes a binocular shooting sub-module and two mirrors, wherein,
the binocular shooting sub-module is arranged in front of the object and is used for acquiring a foreground image of the object;
the two reflectors are arranged at the rear of the object, and the reflectors are used for acquiring the background image of the object through reflection and combining with the binocular shooting submodule.
In a preferred embodiment, the reflecting mirrors are arranged perpendicular to the horizontal plane, and the angle formed between the two mirrors is smaller than 180°.
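Geometrically, each planar mirror presents the binocular sub-module with a view equivalent to that of a virtual camera mirrored across the mirror plane, which is how the rear surface is captured without moving anything. A minimal sketch of that reflection using the Householder formula; the mirror plane and camera position below are hypothetical:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane n.x = d (n a unit normal).
    Reflecting the real camera's position gives the virtual camera
    that 'sees' the object's back surface through the mirror."""
    n = np.asarray(n, dtype=float)
    p = np.asarray(p, dtype=float)
    return p - 2.0 * (np.dot(n, p) - d) * n

# Hypothetical setup: mirror plane x = 1 (normal (1,0,0), offset 1),
# real camera at the origin.
virtual_cam = reflect_point([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 1.0)
print(virtual_cam)  # [2. 0. 0.]
```

Reflecting twice across the same plane returns the original point, which is a quick sanity check on the formula.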
The invention also discloses a three-dimensional reconstruction method based on the front-back fusion imaging based on the three-dimensional reconstruction system, which comprises the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: and taking out the calibration object, placing a target object on the position of the calibration object, and forming a three-dimensional model of the object through the three-dimensional reconstruction system after adjustment.
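Step S1 can be read against the standard pinhole model: the internal parameters form the intrinsic matrix K, the external parameters form the pose (R, t), and calibration adjusts them until the projections of known calibration-object points land on their detected pixel positions. A minimal projection sketch; K, R, t and the corner coordinates are assumed values for illustration, not the patent's calibration procedure:

```python
import numpy as np

# Assumed intrinsic matrix K: focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, K, R, t):
    """Pinhole projection x ~ K (R X + t). Calibration tunes K
    (internal parameters) and R, t (external parameters) until
    projected calibration points match their detected pixels."""
    X = np.asarray(points_3d, dtype=float)
    cam = X @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T            # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]

R = np.eye(3)                    # assumed: camera aligned with world axes
t = np.array([0.0, 0.0, 2.0])    # assumed: calibration object 2 m in front
corners = np.array([[0.0, 0.0, 0.0],
                    [0.1, 0.0, 0.0],
                    [0.0, 0.1, 0.0]])  # three corners on a 10 cm grid
print(project(corners, K, R, t))
# [[320. 240.]
#  [360. 240.]
#  [320. 280.]]
```

In practice the adjustment would be performed by a calibration routine (for example chessboard-based) that minimises the reprojection error of such corners over many views.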
In a preferred embodiment, S2 includes the following substeps:
s2.1: taking out the calibration object, placing the target object at the position of the calibration object, acquiring a foreground image and a background image of the target object by the fusion imaging module, inputting the foreground image and the background image into the three-dimensional reconstruction module for algorithm reconstruction, and acquiring three-dimensional reconstruction point cloud data of the foreground image of the target object and three-dimensional reconstruction point cloud data of the background image of the target object;
s2.2: in the reconstruction process, the input images are distortion-corrected using the adjusted internal parameters of the fusion imaging module, eliminating errors caused by lens distortion;
s2.3: transforming the coordinates of the two-dimensional image into three-dimensional point cloud coordinates through the adjusted external parameters of the fusion imaging module;
s2.4: filtering false matching points in the three-dimensional point cloud coordinate conversion process through a point cloud filtering algorithm;
s2.5: and packaging the spliced complete point cloud data to form a three-dimensional model.
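Steps S2.3 and S2.4 above can be sketched as linear (DLT) triangulation of matched pixels followed by a simple statistical outlier filter. This is an illustrative stand-in under assumed projection matrices, not the patent's specific algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched pixel pair (cf. S2.3):
    given 3x4 projection matrices P1, P2 of the two views and matched
    pixels x1, x2, solve A X = 0 for the homogeneous 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def filter_outliers(points, k_std=2.0):
    """Statistical filter (cf. S2.4): drop points whose distance to the
    cloud centroid deviates from the mean distance by more than k_std
    standard deviations -- a simple stand-in for point-cloud filtering."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    keep = np.abs(d - d.mean()) <= k_std * d.std()
    return pts[keep]

# Assumed: two rectified cameras 0.1 apart along x, identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x1 = X_true[:2] / X_true[2]                        # pixel in left view
x2 = (X_true + [-0.1, 0.0, 0.0])[:2] / X_true[2]   # pixel in right view
print(triangulate(P1, P2, x1, x2))  # recovers approximately [0.2 0.1 2.0]
```

A production pipeline would use library routines for both steps, but the structure mirrors S2.3 (2D matches to 3D points) and S2.4 (false matches filtered out of the cloud).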
In a preferred scheme, the fusion imaging module comprises a binocular shooting sub-module and two reflecting mirrors, and for a target object it forms a left camera foreground image, a right camera foreground image, a left camera background image, and a right camera background image; S2.2 further comprises the following:
after distortion correction, features of the target object in the images are extracted and matched, yielding matching points between the left and right camera foreground images, or between the left and right camera background images.
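The extraction-and-matching step can be illustrated by nearest-neighbour descriptor matching with a ratio test to discard ambiguous matches. The toy descriptors below are assumed; a real system would obtain them from a feature detector such as ORB or SIFT and use a library matcher:

```python
import numpy as np

def match_descriptors(desc_left, desc_right, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: accept a match
    only when the best candidate is clearly closer than the second
    best -- a minimal stand-in for left/right feature matching."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:  # unambiguous match only
            matches.append((i, int(best)))
    return matches

# Assumed toy descriptors; the third left feature is ambiguous
# (nearly equidistant to two right features) and gets rejected.
left = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
right = np.array([[0.0, 1.02], [1.01, 0.0], [5.0, 5.0]])
print(match_descriptors(left, right))  # [(0, 1), (1, 0)]
```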
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the invention, under the condition that an object or shooting equipment does not need to be moved, the three-dimensional model of the target object can be obtained by only one-time imaging.
Drawings
Fig. 1 is a flow chart of example 2.
Fig. 2 is a block diagram of embodiment 1.
Fig. 3 is a schematic diagram of example 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent; for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 2, a three-dimensional reconstruction system based on front-back fusion imaging is used for forming a three-dimensional model of an article, and comprises a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein,
the frame body is provided with a bearing space for placing articles;
the fusion imaging module shoots a foreground image of the object and a background image of the object, and is arranged on the inner side of the frame body and connected with the frame body;
the three-dimensional reconstruction module is electrically connected with the fusion imaging module, and obtains a three-dimensional model of the object by combining the foreground image and the background image of the object.
In embodiment 1, a three-dimensional model of the target object can be obtained with a single imaging pass, without moving the object or the photographing apparatus.
In embodiment 1, the following extensions can also be made: the frame body is made of light-proof material.
In this modified embodiment, the opaque material prevents ambient light from entering the bearing space and interfering with image capture by the fusion imaging module.
In embodiment 1 and the above modified embodiment, the following expansion can also be performed: the three-dimensional reconstruction system further comprises a light supplementing module, wherein the light supplementing module is used for supplementing light, the light supplementing module is arranged on the inner side of the frame body, and the light supplementing module is connected with the frame body.
In this modified embodiment, the light supplementing module compensates for insufficient light intensity in the bearing space.
In embodiment 1 and the above modified embodiment, the following expansion can also be performed: the brightness of the light supplementing module is adjustable.
In embodiment 1 and the above modified embodiment, the following expansion can also be performed: the light supplementing module is a light source with high color rendering index.
In this modified embodiment, a light source with a high color rendering index improves how faithfully the captured images, and hence the reconstruction, reproduce the scene.
In embodiment 1 and the above modified embodiment, the following expansion can also be performed: the inner side of the frame body is coated with a non-reflective material.
In this modified embodiment, reflections produced while the light supplementing module operates could interfere with image capture by the fusion imaging module. Laying a non-reflective material on the inner side of the frame body therefore eliminates the reflection problem caused by the light supplementing module.
In embodiment 1 and the above modified embodiment, the following expansion can also be performed: the fusion imaging module comprises a binocular shooting sub-module and two reflectors, wherein,
the binocular shooting sub-module is arranged in front of the object and is used for acquiring a foreground image of the object;
the two reflectors are arranged at the rear of the object, and the reflectors are used for acquiring the background image of the object through reflection and combining with the binocular shooting submodule.
In a preferred embodiment, the mirrors are arranged perpendicular to the horizontal plane and the angle between the two mirrors is less than 180°.
Example 2
As shown in fig. 1 and fig. 3, embodiment 2 is a three-dimensional reconstruction method based on front-back fusion imaging, built on the system of embodiment 1, comprising the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: and taking out the calibration object, placing a target object on the position of the calibration object, and forming a three-dimensional model of the object through the three-dimensional reconstruction system after adjustment.
In example 2, the following extensions can also be made: s2 comprises the following substeps:
s2.1: taking out the calibration object, placing the target object at the position of the calibration object, acquiring a foreground image and a background image of the target object by the fusion imaging module, inputting the foreground image and the background image into the three-dimensional reconstruction module for algorithm reconstruction, and acquiring three-dimensional reconstruction point cloud data of the foreground image of the target object and three-dimensional reconstruction point cloud data of the background image of the target object;
s2.2: in the reconstruction process, the input images are distortion-corrected using the adjusted internal parameters of the fusion imaging module, eliminating errors caused by lens distortion;
s2.3: transforming the coordinates of the two-dimensional image into three-dimensional point cloud coordinates through the adjusted external parameters of the fusion imaging module;
s2.4: filtering false matching points in the three-dimensional point cloud coordinate conversion process through a point cloud filtering algorithm;
s2.5: and packaging the spliced complete point cloud data to form a three-dimensional model.
In embodiment 2 and the above modified embodiment, the following expansion can also be performed: the fusion imaging module comprises a binocular shooting sub-module and two reflectors, and for the target object it forms a left camera foreground image, a right camera foreground image, a left camera background image, and a right camera background image; S2.2 further comprises the following:
after distortion correction, features of the target object in the images are extracted and matched, yielding matching points between the left and right camera foreground images, or between the left and right camera background images.
In the above embodiments, the technical features may be combined in any manner that involves no contradiction. For brevity, not all possible combinations are described; nevertheless, any combination of these technical features that involves no contradiction should be considered within the scope of this description.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent; for example, the calculation formula of the ion conductivity in the embodiment is not limited to the formula exemplified in the embodiment, and the calculation formulas of the ion conductivities of different kinds are different from each other. The above description of example embodiments is not to be taken as limiting the present patent.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention is intended to be covered by the following claims.

Claims (7)

1. A three-dimensional reconstruction system based on front-back fusion imaging is used for forming a three-dimensional model of an article and is characterized by comprising a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein,
the frame body is provided with a bearing space, and the bearing space is used for placing articles;
the fusion imaging module shoots a foreground image of an article and a background image of the article, is arranged on the inner side of the frame body and is connected with the frame body;
the three-dimensional reconstruction module is electrically connected with the fusion imaging module, and obtains a three-dimensional model of the article by combining the foreground image and the background image of the article;
the three-dimensional reconstruction method based on front-back fusion imaging applied by the three-dimensional reconstruction system comprises the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: taking out the calibration object, placing a target object on the position of the calibration object, and forming a three-dimensional model of the object through an adjusted three-dimensional reconstruction system;
the step S2 comprises the following substeps:
s2.1: taking out the calibration object, placing the target object at the position of the calibration object, acquiring a foreground image and a background image of the target object by the fusion imaging module, inputting the foreground image and the background image into the three-dimensional reconstruction module for algorithm reconstruction, and acquiring three-dimensional reconstruction point cloud data of the foreground image of the target object and three-dimensional reconstruction point cloud data of the background image of the target object;
s2.2: in the reconstruction process, the input images are distortion-corrected using the adjusted internal parameters of the fusion imaging module, eliminating errors caused by lens distortion;
s2.3: transforming the coordinates of the two-dimensional image into three-dimensional point cloud coordinates through the adjusted external parameters of the fusion imaging module;
s2.4: filtering false matching points in the three-dimensional point cloud coordinate conversion process through a point cloud filtering algorithm;
s2.5: and packaging the spliced complete point cloud data to form a three-dimensional model.
2. The three-dimensional reconstruction system according to claim 1, wherein the frame is made of an opaque material.
3. The three-dimensional reconstruction system according to claim 1 or 2, further comprising a light supplementing module, wherein the light supplementing module is used for supplementing illumination, the light supplementing module is disposed at the inner side of the frame, and the light supplementing module is connected with the frame.
4. The three-dimensional reconstruction system according to claim 3, wherein the inside of the frame is coated with a non-reflective material.
5. The three-dimensional reconstruction system according to claim 1, 2 or 4, wherein the fusion imaging module comprises a binocular shooting sub-module and two mirrors, wherein,
the binocular shooting sub-module is arranged in front of the object and is used for acquiring a foreground image of the object;
the two reflectors are arranged at the rear of the object, and the reflectors are used for acquiring the background image of the object through reflection and combining with the binocular shooting submodule.
6. The three-dimensional reconstruction system according to claim 5, wherein the mirrors are arranged perpendicular to the horizontal plane and the angle between the two mirrors is less than 180°.
7. The three-dimensional reconstruction system according to claim 1, wherein the fusion imaging module comprises a binocular shooting sub-module and two reflectors, and the fusion imaging module forms, for the target object, a left camera foreground image, a right camera foreground image, a left camera background image, and a right camera background image; S2.2 further comprises the following:
after distortion correction, features of the target object in the images are extracted and matched, yielding matching points between the left and right camera foreground images, or between the left and right camera background images.
CN202010413235.9A 2020-05-15 2020-05-15 Three-dimensional reconstruction system and method based on front-back fusion imaging Active CN111652967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010413235.9A CN111652967B (en) 2020-05-15 2020-05-15 Three-dimensional reconstruction system and method based on front-back fusion imaging


Publications (2)

Publication Number Publication Date
CN111652967A CN111652967A (en) 2020-09-11
CN111652967B 2023-07-04

Family

ID=72347970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010413235.9A Active CN111652967B (en) 2020-05-15 2020-05-15 Three-dimensional reconstruction system and method based on front-back fusion imaging

Country Status (1)

Country Link
CN (1) CN111652967B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612233A (en) * 2022-02-09 2023-08-18 比亚迪股份有限公司 Three-dimensional modeling method, electronic device, system and storage medium
CN116612263B (en) * 2023-07-20 2023-10-10 北京天图万境科技有限公司 Method and device for sensing consistency dynamic fitting of latent vision synthesis

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887071A (en) * 2019-01-12 2019-06-14 天津大学 A kind of 3D video image dendoscope system and three-dimensional rebuilding method
CN109993696A (en) * 2019-03-15 2019-07-09 广州愿托科技有限公司 The apparent panorama sketch of works based on multi-view image corrects joining method
WO2019179200A1 (en) * 2018-03-22 2019-09-26 深圳岚锋创视网络科技有限公司 Three-dimensional reconstruction method for multiocular camera device, vr camera device, and panoramic camera device


Also Published As

Publication number Publication date
CN111652967A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
Jiang et al. Learning to see moving objects in the dark
CN110349251B (en) Three-dimensional reconstruction method and device based on binocular camera
US20200219301A1 (en) Three dimensional acquisition and rendering
EP3057317B1 (en) Light-field camera
US9870602B2 (en) Method and apparatus for fusing a first image and a second image
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
KR20170005009A (en) Generation and use of a 3d radon image
WO2011010438A1 (en) Parallax detection apparatus, ranging apparatus and parallax detection method
KR20160090373A (en) Photographing method for dual-camera device and dual-camera device
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN111652967B (en) Three-dimensional reconstruction system and method based on front-back fusion imaging
US9807372B2 (en) Focused image generation single depth information from multiple images from multiple sensors
JP7489253B2 (en) Depth map generating device and program thereof, and depth map generating system
CN105430298A (en) Method for simultaneously exposing and synthesizing HDR image via stereo camera system
US20200267297A1 (en) Image processing method and apparatus
Chowdhury et al. Fixed-Lens camera setup and calibrated image registration for multifocus multiview 3D reconstruction
CN109302600B (en) Three-dimensional scene shooting device
CN101916035A (en) Stereo pick-up device and method
TWI504936B (en) Image processing device
CN110708532A (en) Universal light field unit image generation method and system
CN111562562B (en) 3D imaging module calibration method based on TOF
CN109089100B (en) Method for synthesizing binocular stereo video
CN116051916A (en) Training data acquisition method, model training method and parallax image acquisition method
CN109274954B (en) Foveola monocular stereoscopic imaging system
CN104519332B (en) Method for generating view angle translation image and portable electronic equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant