WO2020244273A1 - Dual-camera three-dimensional stereoscopic imaging system and processing method - Google Patents


Info

Publication number
WO2020244273A1
WO2020244273A1 · PCT/CN2020/079099 · CN2020079099W
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
lens
camera lens
resolution
Prior art date
Application number
PCT/CN2020/079099
Other languages
English (en)
Chinese (zh)
Inventor
李应樵
陈增源
Original Assignee
万维科研有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 万维科研有限公司 filed Critical 万维科研有限公司
Publication of WO2020244273A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity

Definitions

  • the invention belongs to the field of stereo imaging, and particularly relates to a dual-camera three-dimensional imaging system and processing method based on light field technology.
  • The system and software attached to the camera combine the two two-dimensional images into a three-dimensional image, or synthesize two two-dimensional video segments into a three-dimensional video.
  • With these two solutions, the quality of the generated three-dimensional image or video may suffer from unsynchronized two-dimensional images or videos, or from external factors such as ambient lighting conditions.
  • More advanced imaging equipment, such as the light field camera (also known as a plenoptic camera), uses a microlens array to capture the light field image of a scene in a single shot; the depth information of the scene can then be extracted by computation to create a depth map and convert the two-dimensional image into a three-dimensional one.
  • the main disadvantages of this kind of light field camera equipment are that the image resolution will drop significantly, the parallax angle is small, and it is not suitable for shooting video.
  • The latest design adds a reflecting unit to capture multi-angle images of the target object. Because the parallax angle is large, it can produce a clearer depth map and three-dimensional image after processing, and it is also suitable for shooting video; however, this attempt still fails to solve the problem of the resolution drop.
  • the purpose of the present invention is to provide a dual-camera three-dimensional imaging system and processing method for improving the resolution of three-dimensional video.
  • the imaging system has a wide range of applications.
  • In addition to accurate depth information, the present invention can also obtain high-resolution video for three-dimensional image analysis.
  • The present invention provides a dual-camera three-dimensional imaging system, characterized in that it comprises a light field imaging part for obtaining a first image and a high-resolution imaging part for obtaining a second image. The light field imaging part includes a first imaging part, a first camera lens, and a second camera or camera lens; the first camera lens and the second camera or camera lens are located at the rear and the front of the lens part, respectively, and an entrance pupil plane and matching device is placed between them. The entrance pupil plane and matching device can be adapted to the different focal lengths of the second camera or camera lens, and an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device. The captured first image is decomposed and reflected into a plurality of secondary images with different angular offsets.
  • The high-resolution imaging part further includes a second imaging part, a third camera lens, and at least one central axis adjustment device capable of adjusting the double lens formed by the first camera lens and the second camera or camera lens and the single lens of the third camera lens; the central axis adjustment device keeps the axes of the double lens and the single lens parallel. The light field imaging part and the high-resolution imaging part are configured so that the third camera lens obtains a second image consistent with the vertical direction of the front view among the plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously.
  • the distance between the light field imaging part and the high-resolution imaging part is as close as possible, and their centers are located on the same vertical plane.
  • the angular offset range of the plurality of secondary images with different angular offsets is 10-20 degrees.
  • The angular offset of the front view in the plurality of secondary images is 0 degrees.
  • The first imaging part further includes a first image sensor and a fly-eye lens that captures the first image; the fly-eye lens transmits the captured first image to the first image sensor. The second imaging part further includes a second image sensor; the second image obtained by the third camera lens is transmitted to the second image sensor.
  • the fly-eye lens is a plurality of micro lens arrays, and the radius, thickness, and array pitch of each micro lens are related to the size of the first image sensor.
  • The aperture and focal length of the first camera lens and the second camera or camera lens are adjustable; the second camera or camera lens and the third camera lens are replaceable lenses, and the aperture of the second camera or camera lens is larger than the size of the internal reflection unit.
  • The entrance pupil plane and matching device is a pupil lens whose diameter is larger than that of the internal reflection unit, allowing the incident light of the light field image to undergo reflection within the internal reflection unit.
  • Each of the secondary images has subtle differences in the scene, and the size of the internal reflection unit and the focal length of each secondary image are calculated based on equations (1) and (2), in which:
  • FOV is the field of view of the second camera or camera lens;
  • n is the refractive index of the internal reflection unit;
  • r is the number of internal reflections;
  • Z is the size of the internal reflection unit;
  • f_lens is the focal length of the second camera or camera lens;
  • f_sub is the focal length of the secondary image.
  • The present invention also provides a dual-camera three-dimensional imaging processing method, whose steps are: obtain the original depth map data of the first image through the light field camera part; correct the original depth map data; use an edge-directed rendering method to obtain a high-resolution depth map generated by interpolation; at the same time, obtain a second image using the high-resolution camera part, and use a data model to take the second image as reference data for correcting the original depth map data of the first image, until the best interpolated high-resolution depth map is obtained.
  • The three-dimensional imaging system and processing method provided by the present invention can deliver higher-resolution two-dimensional and three-dimensional video while, compared with a light field camera fitted with a high-resolution image sensor, increasing cost only marginally. In addition, because the system does not affect the function of the light field camera part, the information obtained by the light field camera itself can still be used to calculate object depth and build a depth map.
  • Figure 1 is a perspective view of the three-dimensional imaging system of the present invention.
  • Figure 2 is a structural diagram of the three-dimensional imaging system of the present invention.
  • Figure 3 is a schematic diagram of the first image 120 obtained by the three-dimensional imaging system of the present invention.
  • Figure 4 is a schematic diagram of the first image 120 after the three-dimensional imaging system of the present invention performs normalization processing on it.
  • Figure 5 is a flowchart of processing the second image 130 by the three-dimensional imaging system of the present invention.
  • Figure 6 is a flowchart of obtaining a target image by the three-dimensional imaging system of the present invention.
  • Figure 1 is a perspective view of the three-dimensional imaging system of the present invention.
  • the three-dimensional imaging system of the present invention is composed of a light field imaging part 100 that obtains a first image 120 (not shown in FIG. 1) and a high-resolution imaging part 140 that obtains a second image 130 (not shown in FIG. 1), wherein
  • The light field camera part 100 can adopt the light field camera of Chinese patent application 201711080588.6, which includes a first imaging part 110, a first camera lens 101, and a second camera or camera lens 103. The first camera lens 101 is a rear camera lens with adjustable aperture and focal length.
  • The second camera or camera lens 103 is a front camera or camera lens; both the front and rear cameras or camera lenses can adjust their focal length.
  • the entrance pupil plane and the matching device 109 may be a pupil lens, and between the pupil lens 109 and the first camera lens 101 is an internal reflection unit 102.
  • the high-resolution imaging part 140 and the light field imaging part 100 are integrated and fixed together.
  • the high-resolution imaging part 140 includes a second imaging part 116.
  • The third camera lens 117 of the high-resolution imaging part 140 is connected through the central axis adjustment device 118, which keeps its lens center axis 112a (see Figure 2) parallel to the lens center axis 112b (see Figure 2) of the first camera lens 101 and the second camera lens 103 in the light field imaging part 100.
  • FIG. 2 is a structural diagram of the dual-camera three-dimensional imaging system of the present invention.
  • The light field imaging part 100 of the three-dimensional imaging system includes a first imaging part 110 and a lens part 111. The first imaging part 110 includes a first image sensor 104 and a fly-eye lens 105; the first image sensor 104 is an image sensor with high imaging quality, while the fly-eye lens 105 is formed by a combination of a series of small lenses and captures information about an image from different angles, such as light field image information, so as to extract three-dimensional information for identifying specific objects.
  • the fly-eye lens 105 is composed of a micro lens array and is designed to not only capture a light field image, but also generate a depth map.
  • the fly-eye lens 105 serves the first image sensor 104, so it is related to the parameters of the first image sensor 104.
  • Each micro lens of the fly-eye lens 105 has a radius of 0.5 millimeters and a thickness of 0.9 micrometers, and the array pitch of the micro lenses is 60 micrometers.
  • The size of the fly-eye lens is scalable. In one embodiment, it matches an APS-C (Advanced Photo System type-C) image sensor measuring 25 mm × 17 mm; in another embodiment, it matches a full-frame image sensor measuring 37 mm × 25 mm.
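As a rough illustration of how the microlens pitch relates to sensor size, the sketch below counts how many 60-micrometre-pitch microlenses span each sensor quoted above. The function name and the truncating grid count are illustrative assumptions, not from the patent.

```python
# Rough count of microlenses spanning each sensor, using the 60 um
# array pitch quoted in the text. Sensor sizes come from the two
# embodiments (APS-C 25 x 17 mm, full-frame 37 x 25 mm).
PITCH_MM = 0.060  # 60 micrometres

def lens_grid(width_mm: float, height_mm: float, pitch_mm: float = PITCH_MM):
    """Whole number of microlenses fitting across and down a sensor."""
    return int(width_mm / pitch_mm), int(height_mm / pitch_mm)

aps_c = lens_grid(25, 17)       # (416, 283)
full_frame = lens_grid(37, 25)  # (616, 416)
```

This is only a fit count; the patent relates radius, thickness, and pitch to the sensor size without giving the mapping explicitly.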
  • the lens part 111 is detachably connected with the first imaging part 110.
  • the pupil lens 109 may be a single lens, which has a condensing effect and can compress the information received by the second camera or camera lens 103.
  • An imaging process is performed at the second camera or camera lens 103; when the second camera or camera lens 103 is replaced, the imaging angle changes.
  • the first camera lens 101 is a short-focus lens or a macro lens, which is fixed on a housing (not shown in FIG. 2).
  • the design of the first camera lens 101 determines the size of the imaging system of the present invention.
  • a secondary imaging process is performed at the first camera lens 101.
  • the entrance pupil plane and the matching device 109 are designed to correct light rays.
  • The internal reflection unit 102 decomposes and reflects the captured image into a multi-angle image consisting of independent secondary images with different angular offsets.
  • the internal reflection unit 102 is designed to provide multiple virtual images at different viewing angles.
  • the size and ratio of the internal reflection unit 102 are the determining factors of the number of reflections and the reflection image ratio, and images of different angles are produced.
  • the secondary image produced by each reflection has a subtle difference in the scene, and the target image has a slight offset.
  • The size of the internal reflection unit 102 and the focal length of each secondary image can be calculated based on equations (1) and (2), in which:
  • FOV is the field of view of the second camera or camera lens;
  • n is the refractive index of the internal reflection unit;
  • r is the number of internal reflections;
  • X, Y, and Z are the dimensions of the internal reflection unit: width, height, and length, respectively;
  • f_lens is the focal length of the second camera or camera lens;
  • f_sub is the focal length of the secondary image.
  • The size of the internal reflection unit 102 can match the size of the first image sensor 104; in one embodiment it is 24 mm (width) × 36 mm (height) × 95 mm (length), i.e., a ratio of about 2:3:8.
  • The pupil lens 109 is used to match the size of the secondary image with the size of the internal reflection unit 102 so that reflection occurs correctly inside the internal reflection unit 102. To achieve this, the diameter of the pupil lens 109 should be larger than that of the internal reflection unit 102. In one embodiment, the pupil lens 109 has a diameter of approximately 50 mm and a focal length of 50 mm. The second camera or camera lens 103 can be replaced by any camera or camera lens, as long as its aperture is larger than the size of the internal reflection unit 102.
  • the high-resolution imaging section 140 includes a second imaging section 116, a second image sensor 119, and a third camera lens 117.
  • The adjustment device 118, which adjusts the central axes of the double lens (the first camera lens 101 and the second camera or camera lens 103) and the single lens (the third camera lens 117), is located outside the high-resolution imaging part 140 and is independent of the light field imaging part 100 and the high-resolution imaging part 140; by operating the adjustment device 118, the axis 112b of the first camera lens 101 and the second camera or camera lens 103 can be kept parallel to the axis 112a of the third camera lens 117.
  • The second image sensor 119 can be a sensor with the same or different specifications as the first image sensor 104, but its resolution should be at least 1/9 of the resolution of the first image sensor 104, to achieve a better cost-performance ratio.
  • FIG. 3 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system of the present invention.
  • The internal reflection unit 102 in the light field imaging part 100 decomposes the captured first image 120, that is, the light field image or video frame, and reflects it into multiple secondary images or video frames with different angular offsets, for example 9; the 9 secondary images or video frames are acquired by the first image sensor 104 of the first imaging part 110 through the fly-eye lens.
  • The secondary image 1 in the middle of the 9 secondary images or video frames is the front view of the scene; the remaining 8 secondary images 2–9 are offset by a specific angle.
  • Each secondary image or video frame has a resolution of 1/9 or lower of that of the first image sensor 104.
  • The 9 secondary images or video frames are segmented, and each secondary image is preprocessed.
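The 3×3 segmentation described above can be sketched as follows; the even-tiling assumption and the function name are illustrative, not from the patent:

```python
import numpy as np

def split_secondary_images(first_image: np.ndarray) -> list:
    """Split the light-field frame into a 3x3 grid of secondary images.

    Assumes the nine sub-views tile the sensor evenly; the centre tile
    (index 4) would then be the front view called "secondary image 1"
    in the text.
    """
    h, w = first_image.shape[:2]
    th, tw = h // 3, w // 3
    return [first_image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(3) for c in range(3)]

# A 3840x2160 sensor frame yields nine 1280x720 secondary images,
# matching the 1/9-resolution figure given in the text.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
tiles = split_secondary_images(frame)
```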
  • Figure 4 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system according to the present invention after normalization processing. Each secondary image is normalized through equation (3).
  • each secondary image is an independent original compound eye image.
  • Image processing techniques, including but not limited to image noise removal, are used for preprocessing; synthetic aperture technology is then used for decoding, so that the light field information in the original compound-eye image can be obtained, and digital refocusing technology can be used to generate the refocused secondary image.
  • The synthetic aperture image can be digitally refocused using the following principle:
  • I′(x′, y′) ∝ ∬ L(u, v, kx′ + (1 − k)u, ky′ + (1 − k)v) du dv (7)
  • where (x′, y′) is the coordinate system of the secondary imaging surface, and L and I′ represent the energy of the primary and secondary imaging surfaces, respectively.
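A minimal discrete sketch of the refocusing principle in equation (7) is shift-and-add over the sub-views: each view is translated in proportion to its pupil position (u, v) and the factor (1 − k), then all views are averaged. The integer rounding and the wrap-around shifts of `np.roll` are simplifying assumptions; the function name is illustrative.

```python
import numpy as np

def refocus(sub_views: np.ndarray, offsets: np.ndarray, k: float) -> np.ndarray:
    """Discrete shift-and-add approximation of equation (7).

    sub_views: (N, H, W) stack of secondary images, one per (u, v) sample.
    offsets:   (N, 2) integer (du, dv) position of each view on the pupil.
    k:         refocus parameter; k = 1 leaves the views unshifted.
    """
    acc = np.zeros_like(sub_views[0], dtype=np.float64)
    for view, (du, dv) in zip(sub_views, offsets):
        # Shift each view by (1 - k)(u, v), i.e. towards the virtual plane.
        shift_y = int(round((1 - k) * dv))
        shift_x = int(round((1 - k) * du))
        acc += np.roll(view, (shift_y, shift_x), axis=(0, 1))
    return acc / len(sub_views)
```

With k = 1 the views are simply averaged, which corresponds to focusing on the original capture plane.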
  • the stereo matching algorithm is one of the commonly used binocular stereo vision matching algorithms, which provides good parallax effects and ideal calculation speed.
  • In the cost function of the stereo matching algorithm:
  • D represents the disparity map;
  • p and q are pixels in the image;
  • C(p, D_p) represents the cost value of pixel p when its disparity value is D_p;
  • N_p represents the set of pixels adjacent to pixel p, usually 8;
  • P1 and P2 are penalty coefficients: P1 applies to pixel p and its neighbouring pixels whose disparity differs by exactly 1, and P2 applies to pixel p and its neighbouring pixels whose disparity differs by more than 1;
  • T[.] is a function that returns 1 if its argument is true and 0 otherwise.
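The P1/P2 penalties with the T[.] indicator can be illustrated in code. The sketch below computes only the smoothness term of such a cost over a disparity map, visiting 4 of the 8 neighbour directions (each pixel pair once); the function name is an assumption, and a full semi-global matcher would add the data term C(p, D_p) and aggregate along scanlines.

```python
import numpy as np

def smoothness_penalty(D: np.ndarray, P1: float, P2: float) -> float:
    """Sum P1/P2 penalties over horizontal and vertical neighbour pairs.

    Implements the T[.] indicator from the text: P1 when neighbouring
    disparities differ by exactly 1, P2 when they differ by more than 1.
    """
    total = 0.0
    for axis in (0, 1):
        diff = np.abs(np.diff(D.astype(np.int64), axis=axis))
        total += P1 * np.count_nonzero(diff == 1)
        total += P2 * np.count_nonzero(diff > 1)
    return total

D = np.array([[0, 1, 3],
              [0, 1, 3]])
# Horizontal pairs per row: |0-1| = 1 (P1), |1-3| = 2 (P2); vertical diffs are 0.
print(smoothness_penalty(D, P1=1.0, P2=4.0))  # 10.0
```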
  • The final conversion between the disparity value and the depth value can use the formula d_p = f · b / D_p, in which:
  • d_p represents the depth value of a given pixel;
  • f is the normalized focal length;
  • b is the baseline distance between the two secondary images;
  • D_p represents the disparity value of the current pixel.
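The disparity-to-depth conversion is straightforward to express in code (the function name and the infinity guard for zero disparity are illustrative additions):

```python
def disparity_to_depth(disparity: float, f: float, b: float) -> float:
    """d_p = f * b / D_p: depth from disparity, per the formula above.

    f: normalized focal length (pixels), b: baseline between the two
    secondary images, disparity: D_p in pixels. Zero disparity means
    the point is at infinity, so guard against division by zero.
    """
    if disparity <= 0:
        return float("inf")
    return f * b / disparity

# Example: f = 1000 px, b = 0.05 m, D_p = 25 px gives a depth of 2.0 m.
print(disparity_to_depth(25, f=1000, b=0.05))  # 2.0
```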
  • The second image 130 obtained by the second imaging part 116 is a 2D image or video frame that completely reflects the information of the subject, so its resolution is not reduced; for the same reason, the second image 130 does not require normalization or refocusing.
  • each secondary image has only 1/9 or lower resolution of the sensor.
  • The resolution of each secondary image is 1280x720 pixels. If the scene depth map and the light field video were generated directly, their resolution would also be limited to 1280x720 pixels. Therefore, secondary images 1–9 are first used to establish a 1280x720 scene depth map; then, with reference to the 3840x2160-pixel high-resolution second image 130 obtained by the second image sensor 119, an edge-directed interpolation algorithm raises the resolution of the depth map up to 3840x2160.
  • In the formula used for edge-directed interpolation:
  • m and n index the low-resolution and high-resolution image grids before and after interpolation;
  • y[n] represents the depth map generated after interpolation;
  • S and R respectively represent the data model of the second image 130 and the operator of the edge-directed rendering step;
  • a gain factor controls the strength of each correction step;
  • k is the iteration index.
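Since the text does not spell out the operators S and R, the following sketch substitutes simple block-replication and block-averaging operators to show the iterative "interpolate then correct" structure, in the spirit of y[k+1] = y[k] + gain × correction. All names and the back-projection correction are assumptions; a real edge-directed operator would steer the interpolation along edges of the high-resolution second image.

```python
import numpy as np

def upsample_depth(depth_lr: np.ndarray, scale: int, gain: float = 0.5,
                   iters: int = 10) -> np.ndarray:
    """Iterative interpolation with a correction step.

    Nearest-neighbour replication stands in for the initial interpolation
    and block averaging for the data-model check; both are stand-ins for
    the patent's S and R operators, which are not given in the text.
    """
    up = np.kron(depth_lr, np.ones((scale, scale)))  # initial interpolation
    for _ in range(iters):
        # Back-project: downsample the estimate and compare with the input.
        down = up.reshape(depth_lr.shape[0], scale,
                          depth_lr.shape[1], scale).mean(axis=(1, 3))
        residual = depth_lr - down
        up += gain * np.kron(residual, np.ones((scale, scale)))
    return up
```

The correction loop drives the downsampled estimate back towards the measured low-resolution depth, mirroring the "correct until the best interpolated depth map is obtained" step in the method.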
  • the accuracy of the correction step of the interpolation calculation is sufficient to meet the needs of generating a 3D image.
  • The high-resolution second image 130, that is, the 2D image or video frame, is combined with the increased-resolution depth map to generate a high-resolution 2D+Z three-dimensional image or video format that is output to the display; this can greatly increase the resolution of the light field image or light field video, up to 3840x2160 pixels.
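One possible packing of the 2D+Z output is sketched below. The patent does not fix a container format, so the side-by-side layout, the 8-bit depth quantization, and the function name are all assumptions.

```python
import numpy as np

def pack_2d_plus_z(image: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Pack a colour frame and its depth map into a side-by-side 2D+Z frame.

    The depth map is normalized to 8-bit grey and replicated to three
    channels so both halves share one dtype and channel count.
    """
    assert image.shape[:2] == depth.shape, "depth must match image size"
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    d8 = (np.zeros_like(d, dtype=np.uint8) if span == 0
          else (255 * (d - d.min()) / span).astype(np.uint8))
    depth_rgb = np.repeat(d8[:, :, None], 3, axis=2)
    return np.concatenate([image, depth_rgb], axis=1)

frame = pack_2d_plus_z(np.zeros((720, 1280, 3), np.uint8),
                       np.ones((720, 1280)))
print(frame.shape)  # (720, 2560, 3)
```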
  • Fig. 5 is a flowchart of the three-dimensional imaging system of the present invention processing a target image.
  • In step 501, the original depth map data of the first image 120 is obtained through the light field imaging part 100; in step 502, the original depth map data is corrected; in step 503, an edge-directed rendering method is applied, and in step 504 a high-resolution depth map generated by interpolation is obtained; in step 505, the data model of the second image is applied, taking the second image obtained in step 506 as reference data, and the original depth map data of the first image 120 is corrected until the best interpolated high-resolution depth map is obtained.
  • Fig. 6 is a flow chart of obtaining a target image by the three-dimensional imaging system of the present invention.
  • In step 601, the first image sensor 104 acquires a first image 120 containing 9 secondary images or video frames; in step 602, the 9 secondary images or video frames are divided and each secondary image is normalized; in step 603, image noise removal is performed on each secondary image; in step 604, synthetic aperture technology is used to decode the light field information of the 9 secondary images, and digital refocusing technology then generates in-focus images; in step 605, the 9 in-focus secondary images are used to establish a lower-resolution scene depth map, combined with the second image, the high-resolution 2D image or video frame obtained by the second image sensor in step 608; in step 606, with reference to the second image, an edge-directed interpolation algorithm increases the resolution of the depth map; in step 607, the depth map with increased resolution and the second image are combined to generate a high-resolution 3D image or video.

Abstract

The present invention relates to a dual-camera three-dimensional stereoscopic imaging system and processing method. A light field imaging part comprises a first imaging part, a first camera lens, and a second camera or camera lens; the first camera lens and the second camera or camera lens are located at the rear and front of a lens part, respectively; an entrance pupil plane and a matching device are placed between them, and an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device. A high-resolution imaging part further comprises a second imaging part and a third camera lens. The light field imaging part and the high-resolution imaging part are configured such that the third camera lens obtains a second image consistent with the vertical direction of a front view among a plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously. In addition to obtaining accurate depth information, the present invention can also obtain high-resolution video for three-dimensional image analysis.
PCT/CN2020/079099 2019-06-04 2020-03-13 Dual-camera three-dimensional stereoscopic imaging system and processing method WO2020244273A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910481518.4A CN112040214A (zh) 2019-06-04 2019-06-04 Dual-camera three-dimensional stereoscopic imaging system and processing method
CN201910481518.4 2019-06-04

Publications (1)

Publication Number Publication Date
WO2020244273A1 2020-12-10

Family

ID=73576536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079099 WO2020244273A1 (fr) 2019-06-04 2020-03-13 Dual-camera three-dimensional stereoscopic imaging system and processing method

Country Status (2)

Country Link
CN (1) CN112040214A (fr)
WO (1) WO2020244273A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114404084A (zh) * 2022-01-21 2022-04-29 北京大学口腔医学院 Scanning device and scanning method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150381965A1 (en) * 2014-06-27 2015-12-31 Qualcomm Incorporated Systems and methods for depth map extraction using a hybrid algorithm
CN106651938A (zh) * 2017-01-17 2017-05-10 湖南优象科技有限公司 Depth map enhancement method fusing a high-resolution color image
CN107689050A (zh) * 2017-08-15 2018-02-13 武汉科技大学 Depth image upsampling method based on color image edge guidance
CN107991838A (zh) * 2017-11-06 2018-05-04 万维科研有限公司 Adaptive three-dimensional stereoscopic imaging system
CN108805921A (zh) * 2018-04-09 2018-11-13 深圳奥比中光科技有限公司 Image acquisition system and method
CN109074661A (zh) * 2017-12-28 2018-12-21 深圳市大疆创新科技有限公司 Image processing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595171B (zh) * 2012-02-03 2014-05-14 浙江工商大学 Dynamic light field imaging method and system using a multi-channel space-time coded aperture
CN102663712B (zh) * 2012-04-16 2014-09-17 天津大学 Depth computational imaging method based on a time-of-flight (TOF) camera
CN106780383B (zh) * 2016-12-13 2019-05-24 长春理工大学 Depth image enhancement method for a TOF camera


Also Published As

Publication number Publication date
CN112040214A (zh) 2020-12-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20817940; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20817940; Country of ref document: EP; Kind code of ref document: A1)