WO2018032841A1 - Method, device and system for three-dimensional image rendering - Google Patents

Method, device and system for three-dimensional image rendering

Info

Publication number
WO2018032841A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewpoint
color
invisible
color image
Prior art date
Application number
PCT/CN2017/085147
Other languages
English (en)
Chinese (zh)
Inventor
黄源浩
肖振中
刘龙
许星
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司
Publication of WO2018032841A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to the field of three-dimensional display technology, and in particular, to a method for drawing a three-dimensional image, and an apparatus and system thereof.
  • Three-dimensional display technology produces a stereoscopic effect by delivering a pair of simultaneously acquired binocular images to the corresponding eyes. Because this technology has brought viewers a new stereoscopic experience, demand for 3D image resources has grown in recent years.
  • One current way to obtain a three-dimensional image is to convert a two-dimensional image into a three-dimensional one by image processing. Specifically, the depth information of an existing two-dimensional image is estimated by image processing, images of other virtual viewpoints are then rendered, and the three-dimensional image is formed from the existing two-dimensional image and the rendered virtual-viewpoint images.
  • The technical problem mainly solved by the present invention is to provide a method for drawing a three-dimensional image, and a device and system therefor, which can improve the three-dimensional display effect.
  • One technical solution adopted by the present invention is a method for drawing a three-dimensional image, comprising: acquiring an invisible light image obtained by capturing a target from a first viewpoint and a first color image obtained by capturing the target from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible light image; moving the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
  • In one embodiment, the invisible light image is obtained by a projection module projecting a structured light pattern onto the target and an invisible light image collector disposed at the first viewpoint capturing the target, and the first color image is obtained by a color camera disposed at the second viewpoint capturing the target.
  • Calculating the disparity between the first viewpoint and the second viewpoint from the invisible light image comprises: calculating, according to a matching algorithm of digital image processing, the displacement between the invisible light image containing the structured light pattern and each pixel of a preset reference structured light image; and calculating the disparity between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
  • Calculating the disparity between the first viewpoint and the second viewpoint from the displacement includes: calculating the disparity d between the first viewpoint and the second viewpoint by Equation (1):
  • d = (B2 / B1) · Δu + f · B2 / Z0    (1)
  • where B1 is the distance between the invisible light image collector and the projection module;
  • B2 is the distance between the invisible light image collector and the color camera;
  • Z0 is the depth of the reference structured light image plane relative to the invisible light image collector;
  • f is the focal length of the invisible light image collector and the color camera;
  • Δu is the displacement between the invisible light image and the pixels of the preset reference structured light image.
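The conversion from the matched displacement to the viewpoint disparity can be sketched as follows. This is a minimal illustration assuming the linear relation d = (B2/B1)·Δu + f·B2/Z0 reconstructed from the symbol definitions above; the function name and all numeric values are illustrative, not taken from the patent.

```python
# Sketch: convert the structured-light displacement Δu (from the matching
# step) into the disparity d between the first and second viewpoints,
# assuming the linear relation d = (B2 / B1) * du + f * B2 / Z0.
def displacement_to_disparity(du, b1, b2, z0, f):
    """du: displacement in pixels; b1, b2, z0: same length unit (e.g. cm);
    f: focal length in pixels. Returns the disparity in pixels."""
    return (b2 / b1) * du + f * b2 / z0

# Illustrative numbers: B1 = 5 cm, B2 = 6.5 cm, Z0 = 100 cm, f = 500 px.
d = displacement_to_disparity(du=8.0, b1=5.0, b2=6.5, z0=100.0, f=500.0)
```

Because the relation is linear in Δu, the same expression can be applied to an entire displacement map at once to obtain the per-pixel disparity map.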
  • Moving the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint includes: establishing, according to the disparity d, a correspondence between the first pixel coordinates I_ir(u_ir, v_ir) of the invisible light image and the second pixel coordinates I_r(u_r, v_r) of the first color image, and assigning pixel values of the first color image accordingly.
  • In an embodiment, the method further includes: calculating a depth image of the first viewpoint from the invisible light image; and, using three-dimensional image transformation theory, calculating a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
  • Forming the three-dimensional image from the first color image and the second color image then includes: averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and forming the three-dimensional image from the first color image and the fourth color image.
  • the positional relationship between the first viewpoint and the second viewpoint is a positional relationship between the eyes of the human body; the color camera and the invisible image collector and the projection module are on the same straight line;
  • the invisible light image is an infrared image, and the invisible light image collector is an infrared camera.
  • the color camera and the invisible image collector have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
  • The present invention adopts another technical solution to provide an image processing device, which includes an input interface, a processor, and a memory; the input interface is used to obtain images acquired by an invisible light image collector and a color camera.
  • The memory is used to store a computer program. The processor executes the computer program to: acquire, through the input interface, an invisible light image obtained by the invisible light image collector of the first viewpoint capturing the target and a first color image obtained by the color camera of the second viewpoint capturing the target; calculate the parallax between the first viewpoint and the second viewpoint from the invisible light image; move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • The present invention adopts another technical solution to provide a three-dimensional image drawing system, including a projection module, an invisible light image collector, a color camera, and an image processing device connected to the invisible light image collector and the color camera.
  • The image processing device is configured to: respectively acquire an invisible light image obtained by the invisible light image collector of a first viewpoint capturing a target and a first color image obtained by the color camera of a second viewpoint capturing the target; calculate the disparity between the first viewpoint and the second viewpoint from the invisible light image; move the pixel coordinates of the first color image according to the disparity to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • In the present invention, the parallax between the first viewpoint and the second viewpoint is obtained from the acquired invisible light image of the first viewpoint, the second color image of the first viewpoint is obtained from the first color image of the second viewpoint and the parallax, and a three-dimensional image is formed from the first color image and the second color image. Since the parallax of the first and second viewpoints is obtained from the acquired image data without further image processing, the loss of image detail information is reduced, more accurate color images of the two viewpoints are obtained, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect generated from two-dimensional images is improved.
  • Moreover, this embodiment does not need to calculate the depth information of the image, which avoids errors introduced by repeated calculations and further improves the three-dimensional display effect.
  • FIG. 1 is a flow chart of an embodiment of a method for drawing a three-dimensional image according to the present invention
  • FIG. 2 is a schematic diagram of an application scenario of a method for drawing a three-dimensional image according to the present invention
  • FIG. 3 is a partial flow chart of another embodiment of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 4 is a partial flow chart of still another embodiment of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 5 is a flow chart of still another embodiment of a method for drawing a three-dimensional image of the present invention.
  • FIG. 6 is a schematic structural view of an embodiment of a three-dimensional image drawing apparatus according to the present invention.
  • FIG. 7 is a schematic structural view of an embodiment of a three-dimensional image rendering system of the present invention.
  • Figure 8 is a block diagram showing another embodiment of the three-dimensional image rendering system of the present invention.
  • FIG. 1 is a flow chart of an embodiment of a method for drawing a three-dimensional image according to the present invention.
  • the method can be performed by a three-dimensional image rendering device, including the following steps:
  • S11: Acquire an invisible light image obtained by capturing the target from the first viewpoint and a first color image obtained by capturing the target from the second viewpoint.
  • the invisible light image and the color image according to the present invention are both two-dimensional images.
  • the invisible light image is an image formed by acquiring the intensity of invisible light on the target.
  • The first viewpoint and the second viewpoint are located at different positions relative to the target, so as to obtain images of the target from two viewpoints.
  • The first viewpoint and the second viewpoint serve as the two viewpoints of the human eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is that between the two eyes of a human body. For example, if the typical interocular distance is t, the distance between the first viewpoint and the second viewpoint is set to t, specifically 6.5 cm.
  • The first viewpoint and the second viewpoint are set at the same distance from the target, or at distances whose difference does not exceed a set threshold.
  • The threshold can be set to a value of no more than 10 cm or 20 cm.
  • As shown in FIG. 2, the invisible light image is obtained by the projection module 25 projecting a structured light pattern onto the target 23 and the invisible light image collector 21 disposed at the first viewpoint capturing the target 23; the first color image is acquired by the color camera 22 disposed at the second viewpoint.
  • The invisible light image collector 21 and the color camera 22 transmit their acquired images to the three-dimensional image drawing device 24 for the subsequent three-dimensional image processing. Since the color camera and the invisible light image collector are at different positions, the spatial three-dimensional points corresponding to the same pixel coordinates in the first color image and the invisible light image are not the same.
  • The color camera 22, the invisible light image collector 21, and the projection module 25 are on the same straight line, so that the depth of the target relative to the color camera 22, the invisible light image collector 21, and the projection module 25 is the same.
  • FIG. 2 is only one embodiment; in other applications the three components need not be on the same line.
  • the projection module 25 is generally composed of a laser and a diffractive optical element.
  • The laser may be an edge-emitting laser or a vertical-cavity laser, emitting invisible light that can be collected by the invisible light image collector.
  • the diffractive optical element may be configured to have functions such as collimation, splitting, diffusion, etc. according to different structural light patterns.
  • The structured light pattern may be an irregularly distributed speckle pattern, and the speckle power level must meet the requirements for harmlessness to the human body. Therefore, the power of the laser and the configuration of the diffractive optical element must be considered together.
  • the intensity of the speckle pattern affects the speed and accuracy of the depth value calculation.
  • the speckle particle density can also be determined by the three-dimensional image rendering device 24 according to its own calculation requirements, and the determined density information is sent to the projection module 25.
  • The projection module 25 projects, for example but not limited to, the speckle pattern onto the target area at a certain diffusion angle.
  • the invisible light image collector 21 collects the invisible light image of the target.
  • the invisible light may be any invisible light.
  • The invisible light image collector 21 may be an infrared collector, such as an infrared camera, in which case the invisible light image is an infrared image; or the invisible light image collector 21 may be an ultraviolet collector, in which case the invisible light image is an ultraviolet image.
  • The color camera and the invisible light image collector can be set to acquire synchronously with the same number of frames, so that the obtained color images and invisible light images correspond one to one, simplifying subsequent calculation.
  • S12 Calculate a disparity between the first view point and the second view point from the invisible light image.
  • A matching algorithm of digital image processing, such as a digital image correlation (DIC) algorithm, is used to calculate the parallax between the image of the first viewpoint and the image of the second viewpoint, that is, the relative positional relationship between the pixel coordinates of the two images.
  • The pixel coordinates of the first color image are shifted by the disparity value d of the corresponding pixel: the pixel value (also referred to as the RGB value) at the resulting coordinates (u1 + d, v1) is the pixel value at coordinates (u1, v1) in the first color image.
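The coordinate shift just described can be sketched as follows. This is a simplified per-pixel loop (integer disparities, no occlusion handling), illustrative only and not the patent's implementation.

```python
import numpy as np

# Sketch: form the second color image by moving each pixel of the first
# color image horizontally by its disparity d, so that the value at
# (u1 + d, v1) in the output is the value at (u1, v1) in the input.
def shift_by_disparity(color, disparity):
    """color: H x W x 3 array; disparity: H x W array of integer pixels."""
    h, w = disparity.shape
    out = np.zeros_like(color)          # unfilled pixels remain holes (zeros)
    for v in range(h):
        for u in range(w):
            u2 = u + int(disparity[v, u])
            if 0 <= u2 < w:             # drop pixels shifted out of frame
                out[v, u2] = color[v, u]
    return out
```

Pixels that receive no value (holes left by the shift) would be handled by the smoothing/denoising step described later.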
  • the first color image and the second color image are respectively used as a human body binocular image to synthesize a three-dimensional image, and specifically may be a three-dimensional image for 3D display in a top-bottom format, a left-right format, or a red-blue format. Further, after the three-dimensional image is synthesized, the three-dimensional image may also be displayed or output to a connected external display device for display.
  • In this embodiment, the parallax between the first viewpoint and the second viewpoint is obtained from the acquired invisible light image of the first viewpoint, the second color image of the first viewpoint is obtained from the first color image of the second viewpoint and the parallax, and the first color image and the second color image are formed into a three-dimensional image. Since the parallax is obtained from the acquired image data without further image processing, the loss of image detail information is reduced, the color images of the two viewpoints are obtained more accurately, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect generated from two-dimensional images is improved.
  • Unlike depth-image-based rendering (DIBR), this embodiment does not need to compute a depth image before drawing the virtual viewpoint.
  • In this embodiment, the invisible light image is obtained by the projection module projecting a structured light pattern onto the target and the invisible light image collector disposed at the first viewpoint capturing the target; the first color image is obtained by the color camera disposed at the second viewpoint capturing the target.
  • the foregoing S12 includes the following sub-steps:
  • S121 Calculate a displacement between the invisible light image including the structured light pattern and each pixel of the preset reference structured light image according to a matching algorithm of the digital image processing.
  • the matching algorithm of the digital image processing is a digital image correlation algorithm.
  • The reference structured light image is obtained in advance by projecting the structured light pattern onto a plane at a set distance with the fixed projection module and capturing that plane with the fixed invisible light image collector.
  • Here "fixed" means that the image collector and the projection module are not moved between this calibration and the subsequent acquisition of invisible light images.
  • a digital image correlation algorithm is used to obtain a displacement value ⁇ u of each corresponding pixel between the invisible light image and the reference structured light pattern such as the reference speckle image.
  • The measurement accuracy of the digital image correlation algorithm can reach sub-pixel level, such as 1/8 pixel; that is, the value of Δu will be a multiple of 1/8, in units of pixels.
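The displacement computation can be sketched as a horizontal block match against the reference speckle image. Real DIC implementations refine the integer result to sub-pixel accuracy (e.g. 1/8 pixel) by interpolating the correlation surface, which is omitted here; the function name, window size, and search range are illustrative assumptions.

```python
import numpy as np

# Sketch: for a patch around pixel (u, v) of the captured IR image, search
# horizontally in the reference speckle image for the best match by sum of
# squared differences (SSD). Returns the integer displacement Δu in pixels.
def match_displacement(ir, ref, u, v, half=4, search=16):
    patch = ir[v - half:v + half + 1, u - half:u + half + 1].astype(float)
    best, best_du = None, 0
    for du in range(-search, search + 1):
        uu = u + du
        if uu - half < 0 or uu + half + 1 > ref.shape[1]:
            continue                    # candidate window out of bounds
        cand = ref[v - half:v + half + 1, uu - half:uu + half + 1].astype(float)
        ssd = float(((patch - cand) ** 2).sum())
        if best is None or ssd < best:
            best, best_du = ssd, du
    return best_du
```

Sub-pixel refinement (e.g. parabolic fitting of the SSD minimum) would be added on top of this integer search in a practical system.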
  • the displacement between the invisible image and each pixel of the reference structured light image has a linear relationship with the parallax. Therefore, the disparity between the first viewpoint and the second viewpoint can be calculated by the displacement and its linear relationship.
  • Specifically, the parallax d between the first viewpoint and the second viewpoint is calculated by Equation (1):
  • d = (B2 / B1) · Δu + f · B2 / Z0    (1)
  • where B1 is the distance between the invisible light image collector and the projection module;
  • B2 is the distance between the invisible light image collector and the color camera;
  • Z0 is the depth of the reference structured light image plane relative to the invisible light image collector;
  • f is the focal length of the invisible light image collector and the color camera;
  • Δu is the displacement between the invisible light image and the pixels of the preset reference structured light image.
  • The plane of the reference structured light image is the plane onto which the reference structured light pattern was projected, and Z0 denotes the distance between that plane and the image collector, obtained from the distance information recorded when the reference structured light image was captured in advance.
  • the unit of f is a pixel, and the value of f can be obtained by calibration in advance.
  • If the calculated value of the parallax d is not an integer, it may be rounded to the nearest integer or truncated.
  • In an embodiment, the above S13 includes the following sub-steps:
  • S131 Establish a correspondence between a first pixel coordinate of the invisible light image and a second pixel coordinate of the first color image according to a parallax.
  • S132: Assign a pixel value (also referred to as an RGB value) of the first color image to the invisible light image according to the correspondence to generate the second color image.
  • For example, if the pixel coordinates (1, 1) of the invisible light image correspond to the pixel coordinates (2, 1) of the first color image, the pixel value at coordinates (1, 1) of the invisible light image is set to the pixel value (r, g, b) at coordinates (2, 1) of the first color image.
  • S133 Perform smoothing and denoising processing on the second color image.
  • the sub-step performs denoising and smoothing on the obtained second color image.
  • In other embodiments, the foregoing step S13 may include only sub-steps S131 and S132.
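Sub-step S133 (smoothing and denoising) is not specified in detail in the text; a minimal hole-filling pass over the warped image might look like the following sketch, where pixels that received no value during the coordinate shift are filled from a horizontal neighbour. A real implementation would use proper inpainting or median filtering; the function name and fill strategy are illustrative assumptions.

```python
import numpy as np

# Sketch: fill holes (all-zero pixels) left by the coordinate shift,
# copying from the nearest valid horizontal neighbour.
def fill_holes(img):
    """img: H x W x 3 array; zero pixels are treated as holes."""
    out = img.copy()
    h, w = img.shape[:2]
    for v in range(h):
        for u in range(w):
            if not out[v, u].any():                 # hole: all channels zero
                if u > 0 and out[v, u - 1].any():
                    out[v, u] = out[v, u - 1]       # prefer the left neighbour
                elif u + 1 < w and img[v, u + 1].any():
                    out[v, u] = img[v, u + 1]       # fall back to the right
    return out
```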
  • S15: Calculate the depth image of the first viewpoint from the invisible light image (for example, an infrared image); the specific calculation may use an existing correlation algorithm.
  • S16 Calculate a third color image of the target at the first viewpoint according to the depth image of the first viewpoint and the first color image by using a three-dimensional image transformation theory.
  • Any three-dimensional point in space and its two-dimensional projection on an image acquisition plane can be related by perspective transformation theory, so the pixel coordinates of the first-viewpoint image and the second-viewpoint image can be associated. According to this correspondence and the pixel values of the first color image of the second viewpoint, pixel values are then assigned to the image pixel coordinates of the first viewpoint.
  • the S16 includes the following substeps:
  • Specifically, the transformation can be written as Equation (2): Z_R · p_R = M_g · (R · M_D⁻¹ · Z_D · p_D + T), where Z_D is the depth information in the first depth image, indicating the depth of the target from the depth camera; Z_R is the depth of the target from the color camera; p_R is the homogeneous pixel coordinate in the image coordinate system of the color camera; p_D is the homogeneous pixel coordinate in the image coordinate system of the depth camera; M_g is the intrinsic matrix of the color camera; M_D is the intrinsic matrix of the depth camera; R is the rotation matrix in the extrinsic matrix of the depth camera relative to the color camera; and T is the translation matrix in the extrinsic matrix of the depth camera relative to the color camera.
  • The intrinsic matrices and the extrinsic matrix of the camera and the collector may be preset: the intrinsic matrices can be calculated from the configuration parameters of the camera and the collector, and the extrinsic matrix is determined by the positional relationship between the invisible light image collector and the color camera.
  • The intrinsic matrix is formed from the pixel focal length of the image capture lens and the center coordinates of the image acquisition target surface. Since the positional relationship between the first viewpoint and the second viewpoint is set to that of the human eyes, there is no relative rotation between them, only a translation by the set value t. Therefore the rotation matrix R of the color camera relative to the invisible light image collector is the identity matrix, and the translation matrix T = [t, 0, 0]^T.
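Under these assumptions (R the identity, T = [t, 0, 0]^T), the per-pixel transformation of Equation (2), as reconstructed from the symbol definitions, can be sketched as follows; the function name and matrices are illustrative.

```python
import numpy as np

# Sketch of Equation (2): Z_R * p_R = M_g @ (R @ inv(M_D) @ (Z_D * p_D) + T).
def warp_pixel(p_d, z_d, m_g, m_d, r, t):
    """p_d: homogeneous pixel coordinate (3,) in the depth image; z_d: its
    depth. Returns (p_r, z_r): the homogeneous pixel coordinate in the
    color image and the depth relative to the color camera."""
    cam = r @ np.linalg.inv(m_d) @ (z_d * p_d) + t   # 3-D point, color frame
    q = m_g @ cam                                    # project into color image
    return q / q[2], q[2]
```

With identity rotation and a purely horizontal translation, z_r equals z_d, consistent with the observation that Z_R and Z_D are equal when both devices are at the same distance from the target.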
  • the set value t can be adjusted according to the distance between the invisible light image collector and the color camera and the target.
  • In an embodiment, the method further includes: acquiring the distance between the target and the invisible light image collector and the color camera; when that distance is determined to be greater than a first distance value, increasing the set value t; and when that distance is determined to be less than a second distance value, decreasing the set value t.
  • The first distance value is greater than or equal to the second distance value.
  • For example, suppose the distance between the target and the invisible light image collector is 100 cm and the distance between the target and the color camera is also 100 cm; if this is less than the second distance value, the set value is reduced by one step value, or recalculated from the current distance between the target and the invisible light image collector and the color camera.
  • If the distance between the target and the invisible light image collector and the color camera is 300 cm, since 300 cm is larger than the second distance value of 200 cm and smaller than the first distance value of 500 cm, the set value is not adjusted.
  • Substituting the depth information Z_D of the invisible light image of the first viewpoint into Equation (2) yields, on its left side, the depth information of the second viewpoint, that is, the depth information Z_R of the first color image, together with the homogeneous pixel coordinates in the image coordinate system of the first color image. Since the invisible light image collector and the color camera are at the same distance from the target, the obtained Z_R and Z_D are equal.
  • the foregoing S14 includes the following steps:
  • S141 Average or weight average the pixel values of the corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first view.
  • For example, the pixel values at pixel coordinates (Ur, Vr) in the second color image and the third color image are (r1, g1, b1) and (r2, g2, b2), respectively; the pixel value at (Ur, Vr) in the fourth color image of the first viewpoint is then set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2), or to a weighted combination.
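The fusion of S141 can be sketched as a per-pixel (optionally weighted) average; a minimal version, assuming 8-bit images and with an illustrative function name:

```python
import numpy as np

# Sketch: fuse the second and third color images into the fourth color
# image by a per-pixel weighted average (w2 = w3 = 0.5 gives the plain mean).
def fuse(img2, img3, w2=0.5, w3=0.5):
    mix = w2 * img2.astype(float) + w3 * img3.astype(float)
    return np.clip(np.rint(mix), 0, 255).astype(np.uint8)
```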
  • S142 Form a three-dimensional image from the first color image and the fourth color image.
  • the first color image and the fourth color image are respectively used as a human binocular image to synthesize a three-dimensional image.
  • In this embodiment, the image acquisition target surfaces of the invisible light image collector and the color camera may be set equal in size, with the same resolution and the same focal length.
  • In other embodiments, the color camera and the invisible light image collector may have different image acquisition target surface sizes, resolutions, and focal lengths; for example, the color camera may have a larger target surface size and resolution than the invisible light image collector.
  • Here, equal target surface size, resolution, and focal length mean that these parameters of the invisible light image collector and the color camera are the same within a tolerance range.
  • The images include photos or videos. For video, the acquisition frequencies of the invisible light image collector and the color camera are synchronized; if they are not synchronized, video images of a consistent frequency are obtained by image interpolation.
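For unsynchronized streams, the simplest interpolation consistent with the description is a linear blend between the two captured frames that bracket the desired timestamp; this sketch assumes 8-bit frames and is illustrative only (practical systems may use motion-compensated interpolation instead).

```python
import numpy as np

# Sketch: approximate a frame at fractional time t (0 <= t <= 1) between
# two captured frames f0 and f1 by linear interpolation.
def interpolate_frame(f0, f1, t):
    mix = (1.0 - t) * f0.astype(float) + t * f1.astype(float)
    return np.clip(np.rint(mix), 0, 255).astype(np.uint8)
```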
  • FIG. 6 is a schematic structural diagram of an embodiment of a three-dimensional image drawing apparatus according to the present invention.
  • The drawing device 60 includes an acquiring module 61, a calculating module 62, a forming module 63, and an obtaining module 64, wherein:
  • the acquiring module 61 is configured to respectively acquire an invisible light image obtained by acquiring the target by the first viewpoint and a first color image obtained by collecting the target by the second viewpoint;
  • the calculating module 62 is configured to calculate a disparity between the first view point and the second view point from the invisible light image
  • the obtaining module 64 is configured to move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first view point;
  • the forming module 63 is configured to form a three-dimensional image from the first color image and the second color image.
  • The invisible light image is obtained by a projection module projecting a structured light pattern onto the target and an invisible light image collector disposed at the first viewpoint capturing the target; the first color image is acquired by the color camera disposed at the second viewpoint.
  • The calculating module 62 is specifically configured to calculate, according to a matching algorithm of digital image processing, the displacement between the invisible light image containing the structured light pattern and each pixel of the preset reference structured light image, and to calculate the disparity between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
  • The calculating module 62 calculates the disparity between the first viewpoint and the second viewpoint from the displacement by calculating the parallax d between the first viewpoint and the second viewpoint using Equation (1) above.
  • The obtaining module 64 is specifically configured to establish, according to the disparity d, a correspondence between the first pixel coordinates I_ir(u_ir, v_ir) of the invisible light image and the second pixel coordinates I_r(u_r, v_r) of the first color image, and to assign pixel values of the first color image accordingly to generate the second color image.
  • The calculating module 62 is further configured to calculate a depth image of the first viewpoint from the invisible light image, and to calculate, using three-dimensional image transformation theory, a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
  • The forming module 63 is configured to average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; a three-dimensional image is formed from the first color image and the fourth color image.
  • the positional relationship between the first viewpoint and the second viewpoint is a positional relationship between the eyes of the human body; the color camera and the invisible image collector and the projection module are on the same straight line;
  • The invisible light image is an infrared image, and the invisible light image collector is an infrared camera.
  • the color camera and the invisible image collector have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
  • The invisible light image and the first color image are photos or videos. When they are videos, the acquisition frequencies of the invisible light image collector and the color camera are synchronized; if the acquisition frequencies are not synchronized, video images of the same frequency are obtained by image interpolation.
  • FIG. 7 is a schematic structural diagram of an embodiment of a three-dimensional image rendering system according to the present invention.
  • the system 70 includes a projection module 74, an invisible image collector 71, a color camera 72, and an image processing device 73 connected to the invisible image collector 71 and the color camera 72.
  • the image processing device 73 includes an input interface 731, a processor 732, and a memory 733. Further, the image processing device 73 can also be connected to the projection module 74.
  • the input interface 731 is used to obtain images acquired by the invisible image collector 71 and the color camera 72.
  • the memory 733 is used to store a computer program and provide it to the processor 732; it can also store data used by the processor 732, such as the intrinsic parameter matrices and extrinsic parameter matrices of the invisible light image collector 71 and the color camera 72, and the images obtained through the input interface 731.
  • the processor 732 is used to: calculate a parallax between the first viewpoint and the second viewpoint by using the invisible light image acquired from the first viewpoint; move the pixel coordinates of the first color image acquired from the second viewpoint according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image to implement three-dimensional display.
  • alternatively, when the image processing device 73 is not used to display the three-dimensional image, the three-dimensional image rendering system 70 further includes a display device 75 connected to the image processing device 73 for receiving the three-dimensional image output by the image processing device 73 and displaying it.
  • the processor 732 is specifically configured to calculate, by a matching algorithm of digital image processing, a displacement between each pixel of the invisible light image containing the structured light pattern and the preset reference structured light image, and to calculate from that displacement the parallax between the first viewpoint and the second viewpoint, wherein the displacement has a linear relationship with the parallax.
  • the processor 732 performs the calculation of the parallax between the first viewpoint and the second viewpoint from the displacement by calculating the parallax d between the first viewpoint and the second viewpoint using Formula 1 above.
  • the processor 732, in moving the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint, establishes, according to the disparity d, a correspondence between the first pixel coordinate of the invisible light image and the second pixel coordinate of the first color image.
  • the processor 732 is further configured to calculate a depth image of the first viewpoint by using the invisible light image, and to calculate a third color image of the first viewpoint from the depth image of the first viewpoint and the first color image by using three-dimensional image transformation theory.
  • the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body; the color camera 72, the invisible light image collector 71, and the projection module 74 lie on the same straight line; the invisible light image is an infrared image, and the invisible light image collector 71 is an infrared camera.
  • the color camera 72 and the invisible light image collector 71 have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
  • the invisible light image and the first color image are photos or videos; when they are videos, the acquisition frequencies of the invisible light image collector and the color camera are synchronized, or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
  • the image processing device 73 can serve as the three-dimensional image drawing device described above, for executing the method described in the above embodiments.
  • the method disclosed in the above embodiments of the present invention may also be applied to the processor 732 or implemented by the processor 732.
  • Processor 732 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method may be completed by hardware integrated logic circuits in the processor 732 or by instructions in the form of software.
  • the processor 732 described above may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • the steps of the method disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory 733; the processor 732 reads the information in the memory and, in combination with its hardware, completes the steps of the above method.
  • in summary, the parallax between the first viewpoint and the second viewpoint is obtained by using the acquired invisible light image of the first viewpoint; a second color image of the first viewpoint is obtained by using the first color image of the second viewpoint and the parallax; and a three-dimensional image is then formed from the first color image and the second color image. Since the parallax between the first viewpoint and the second viewpoint is obtained directly from the acquired image data rather than through additional image processing, the loss of image detail information is reduced, so the color images of the two viewpoints are obtained accurately, which reduces the distortion of the synthesized three-dimensional image and improves the three-dimensional display effect generated from two-dimensional images.
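The processing pipeline enumerated above (converting structured-light displacement to parallax, shifting color pixels by the parallax, and averaging two synthesized views) can be sketched roughly as follows. This is an illustrative sketch only: the linear coefficients `k` and `b` stand in for the patent's Formula 1, which is not reproduced in this text, and the function names and array shapes are hypothetical.

```python
import numpy as np

def disparity_from_displacement(displacement, k=1.0, b=0.0):
    """Parallax is stated to be linear in the structured-light displacement;
    k and b are placeholder coefficients, not the patent's Formula 1."""
    return k * displacement + b

def shift_color_image(color, disparity):
    """Synthesize the first-viewpoint color image by shifting each pixel of
    the second-viewpoint color image horizontally by its disparity."""
    h, w = disparity.shape
    out = np.zeros_like(color)
    for v in range(h):
        for u in range(w):
            u_new = int(round(u - disparity[v, u]))  # horizontal shift only
            if 0 <= u_new < w:
                out[v, u_new] = color[v, u]
    return out

def fuse_views(img_a, img_b, w_a=0.5):
    """Average (or weighted-average) corresponding pixels of two synthesized
    views, as the forming module does with the second and third color images."""
    fused = w_a * img_a.astype(np.float64) + (1.0 - w_a) * img_b.astype(np.float64)
    return fused.astype(np.uint8)
```

A real implementation would also have to fill the holes left by occluded pixels after the shift; the patent's averaging of the second and third color images serves a comparable smoothing purpose.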

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method, a device, and a system for drawing a three-dimensional image. The method comprises the steps of: separately obtaining an invisible light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint; calculating a parallax between the first viewpoint and the second viewpoint by using the invisible light image; moving pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forming a three-dimensional image by using the first color image and the second color image. The method can improve a three-dimensional display effect.
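The final step of the abstract, forming a three-dimensional image from the two color images, is left format-agnostic by the source. A minimal sketch assuming the common side-by-side or top-bottom stereo packing conventions (an assumption for illustration, not a format the patent specifies) could look like:

```python
import numpy as np

def form_stereo_pair(left, right, mode="side_by_side"):
    """Pack two viewpoint images into one stereo frame.
    'side_by_side' and 'top_bottom' are assumed packing conventions."""
    if mode == "side_by_side":
        return np.concatenate([left, right], axis=1)  # widen: left | right
    if mode == "top_bottom":
        return np.concatenate([left, right], axis=0)  # stack: left over right
    raise ValueError(f"unknown mode: {mode}")
```

A 3D display (or display device 75 in the system embodiment) would then route each half of the packed frame to the corresponding eye.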
PCT/CN2017/085147 2016-08-19 2017-05-19 Method, device and system for drawing a three-dimensional image WO2018032841A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610698004.0 2016-08-19
CN201610698004.0A CN106170086B (zh) 2016-08-19 Method for drawing a three-dimensional image, and device and system thereof

Publications (1)

Publication Number Publication Date
WO2018032841A1 true WO2018032841A1 (fr) 2018-02-22

Family

ID=57375861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085147 WO2018032841A1 (fr) 2016-08-19 2017-05-19 Method, device and system for drawing a three-dimensional image

Country Status (2)

Country Link
CN (1) CN106170086B (fr)
WO (1) WO2018032841A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170086B (zh) * 2016-08-19 2019-03-15 深圳奥比中光科技有限公司 Method for drawing a three-dimensional image, and device and system thereof
CN106875435B (zh) * 2016-12-14 2021-04-30 奥比中光科技集团股份有限公司 Method and system for obtaining a depth image
CN107105217B (zh) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth computing processor and 3D image device
CN108460368B (zh) * 2018-03-30 2021-07-09 百度在线网络技术(北京)有限公司 Three-dimensional image synthesis method and device, and computer-readable storage medium
CN113436129B (zh) * 2021-08-24 2021-11-16 南京微纳科技研究院有限公司 Image fusion system, method, apparatus, device, and storage medium
CN114119680B (zh) * 2021-09-09 2022-09-20 合肥的卢深视科技有限公司 Image acquisition method and apparatus, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060114320A1 (en) * 2004-11-30 2006-06-01 Honda Motor Co. Ltd. Position detecting apparatus and method of correcting data therein
CN102999939A (zh) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device
CN104185006A (zh) * 2013-05-24 2014-12-03 索尼公司 Imaging device and imaging method
CN104918035A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining a three-dimensional image of a target
CN106170086A (zh) * 2016-08-19 2016-11-30 深圳奥比中光科技有限公司 Method for drawing a three-dimensional image, and device and system thereof
CN106604020A (zh) * 2016-11-24 2017-04-26 深圳奥比中光科技有限公司 Special-purpose processor for 3D display
CN106791763A (zh) * 2016-11-24 2017-05-31 深圳奥比中光科技有限公司 Special-purpose processor for 3D display and 3D interaction

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502372B1 (ko) * 2008-11-26 2015-03-16 삼성전자주식회사 Image acquisition apparatus and method
CN101662695B (zh) * 2009-09-24 2011-06-15 清华大学 Method and device for obtaining a virtual view
US9406132B2 (en) * 2010-07-16 2016-08-02 Qualcomm Incorporated Vision-based quality metric for three dimensional video
CN102289841B (zh) * 2011-08-11 2013-01-16 四川虹微技术有限公司 Method for adjusting the depth perceived by viewers of a stereoscopic image
WO2014002849A1 (fr) * 2012-06-29 2014-01-03 富士フイルム株式会社 Three-dimensional measurement method, apparatus and systems, and image processing device
KR101904718B1 (ko) * 2012-08-27 2018-10-05 삼성전자주식회사 Apparatus and method for capturing color images and depth images
US10517483B2 (en) * 2012-12-05 2019-12-31 Accuvein, Inc. System for detecting fluorescence and projecting a representative image
CN103796004B (zh) * 2014-02-13 2015-09-30 西安交通大学 Binocular depth perception method using active structured light
CN103824318B (zh) * 2014-02-13 2016-11-23 西安交通大学 Depth perception method for a multi-camera array
CN105791662A (zh) * 2014-12-22 2016-07-20 联想(北京)有限公司 Electronic device and control method
CN105120257B (zh) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 Vertical depth perception device based on structured light coding


Also Published As

Publication number Publication date
CN106170086A (zh) 2016-11-30
CN106170086B (zh) 2019-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17840817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17840817

Country of ref document: EP

Kind code of ref document: A1