WO2018032841A1 - Method, device and system for drawing three-dimensional image - Google Patents

Method, device and system for drawing three-dimensional image

Info

Publication number
WO2018032841A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewpoint
color
invisible
color image
Prior art date
Application number
PCT/CN2017/085147
Other languages
French (fr)
Chinese (zh)
Inventor
黄源浩 (Huang Yuanhao)
肖振中 (Xiao Zhenzhong)
刘龙 (Liu Long)
许星 (Xu Xing)
Original Assignee
深圳奥比中光科技有限公司 (Shenzhen Orbbec Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司 (Shenzhen Orbbec Co., Ltd.)
Publication of WO2018032841A1 publication Critical patent/WO2018032841A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • The present invention relates to the field of three-dimensional display technology, and in particular to a method for drawing a three-dimensional image, and an apparatus and system therefor.
  • Three-dimensional display technology produces a stereoscopic effect by delivering simultaneously acquired binocular images to the corresponding eyes. Because this technology offers a new stereoscopic viewing experience, demand for 3D image resources has grown in recent years.
  • One current method of obtaining a three-dimensional image is to convert a two-dimensional image into a three-dimensional image by image processing. Specifically, the depth information of an existing two-dimensional image is calculated with image processing techniques, images of other virtual viewpoints are then drawn, and the three-dimensional image is formed from the existing two-dimensional image and the virtual viewpoint images.
  • The technical problem mainly solved by the present invention is to provide a method for drawing a three-dimensional image, and a device and system therefor, capable of improving the three-dimensional display effect.
  • One technical solution adopted by the present invention is a method for drawing a three-dimensional image, comprising: acquiring an invisible light image obtained by capturing a target from a first viewpoint and a first color image obtained by capturing the target from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible light image; moving the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
  • The invisible light image is obtained by projecting a structured light pattern onto the target with a projection module and capturing the target with an invisible light image collector disposed at the first viewpoint; the first color image is obtained by capturing the target with a color camera disposed at the second viewpoint.
  • Calculating the parallax between the first viewpoint and the second viewpoint from the invisible light image comprises: calculating, according to a matching algorithm of digital image processing, the displacement between the invisible light image containing the structured light pattern and each pixel of a preset reference structured light image; and calculating the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
  • The parallax between the first viewpoint and the second viewpoint is calculated from the displacement as follows: the parallax d between the first viewpoint and the second viewpoint is calculated by Formula (1) below,

    d = (B2 / B1) · Δu + (B2 · f) / Z0    (1)

  where the sign convention for Δu is such that Δu is zero for a target on the reference plane, and:
  • B1 is the distance between the invisible light image collector and the projection module;
  • B2 is the distance between the invisible light image collector and the color camera;
  • Z0 is the depth of the plane of the reference structured light image relative to the invisible light image collector;
  • f is the focal length (in pixels) of the invisible light image collector and the color camera;
  • Δu is the displacement between the invisible light image and the pixels of the preset reference structured light image.
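The linear relationship between the displacement Δu and the parallax d can be sketched as follows. The function name and the sample baseline, depth, and focal-length values in the usage note are illustrative only, and the Δu sign convention is assumed as stated above:

```python
import numpy as np

def displacement_to_disparity(delta_u, B1, B2, Z0, f):
    """Convert a speckle displacement delta_u (pixels) into the parallax d
    (pixels) between the two viewpoints, assuming the linear relation
        d = (B2 / B1) * delta_u + B2 * f / Z0
    B1, B2, Z0 share one length unit; f is in pixels."""
    return (B2 / B1) * np.asarray(delta_u, dtype=np.float64) + B2 * f / Z0
```

For example, with B1 = 5 cm, B2 = 6.5 cm, Z0 = 100 cm and f = 500 pixels, a target on the reference plane (Δu = 0) yields d = B2·f/Z0 = 32.5 pixels.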
  • Moving the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint comprises: establishing, according to the parallax d, a correspondence between the first pixel coordinates I_ir(u_ir, v_ir) of the invisible light image and the second pixel coordinates I_r(u_r, v_r) of the first color image, and assigning the pixel values of the first color image to the invisible light image according to the correspondence to generate the second color image.
  • The method further includes: calculating a depth image of the first viewpoint from the invisible light image; and calculating, by three-dimensional image transformation theory, a third color image of the target at the first viewpoint according to the depth image of the first viewpoint and the first color image.
  • Forming the three-dimensional image from the first color image and the second color image includes: averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and forming a three-dimensional image from the first color image and the fourth color image.
  • The positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body; the color camera, the invisible light image collector, and the projection module lie on the same straight line.
  • The invisible light image is an infrared image, and the invisible light image collector is an infrared camera.
  • The color camera and the invisible light image collector have the same image acquisition target surface size, resolution, and focal length, and their optical axes are parallel to each other.
  • Another technical solution adopted by the present invention provides an image processing device, which includes an input interface, a processor, and a memory; the input interface is used to obtain the images acquired by an invisible light image collector and a color camera.
  • The memory is used to store a computer program; executing the computer program, the processor acquires, through the input interface, the invisible light image obtained by the invisible light image collector of the first viewpoint capturing the target and the first color image obtained by the color camera of the second viewpoint capturing the target; calculates the parallax between the first viewpoint and the second viewpoint from the invisible light image; moves the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forms a three-dimensional image from the first color image and the second color image.
  • Another technical solution adopted by the present invention provides a three-dimensional image drawing system, including a projection module, an invisible light image collector, a color camera, and an image processing device connected to the invisible light image collector and the color camera.
  • The image processing device is configured to: acquire the invisible light image obtained by the invisible light image collector of the first viewpoint capturing a target and the first color image obtained by the color camera of the second viewpoint capturing the target; calculate the parallax between the first viewpoint and the second viewpoint from the invisible light image; move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • In the above solutions, the present invention obtains the parallax between the first viewpoint and the second viewpoint from the acquired invisible light image of the first viewpoint, obtains the second color image of the first viewpoint from the first color image of the second viewpoint and the parallax, and then forms a three-dimensional image from the first color image and the second color image. Since the parallax between the first viewpoint and the second viewpoint is obtained directly from the acquired image data, without intermediate image processing, the loss of image detail information is reduced, a more accurate color image of the two viewpoints is obtained, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect of generation from a two-dimensional image is improved.
  • Moreover, this embodiment does not need to calculate the depth information of the image, avoiding errors introduced by repeated calculations and further improving the three-dimensional display effect.
  • FIG. 1 is a flow chart of an embodiment of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 2 is a schematic diagram of an application scenario of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 3 is a partial flow chart of another embodiment of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 4 is a partial flow chart of still another embodiment of a method for drawing a three-dimensional image according to the present invention.
  • FIG. 5 is a flow chart of still another embodiment of a method for drawing a three-dimensional image of the present invention.
  • FIG. 6 is a schematic structural view of an embodiment of a three-dimensional image drawing apparatus according to the present invention.
  • FIG. 7 is a schematic structural view of an embodiment of a three-dimensional image rendering system of the present invention.
  • FIG. 8 is a block diagram of another embodiment of the three-dimensional image rendering system of the present invention.
  • Referring to FIG. 1, FIG. 1 is a flow chart of an embodiment of a method for drawing a three-dimensional image according to the present invention. The method can be performed by a three-dimensional image drawing device and includes the following steps:
  • S11: Acquire an invisible light image obtained by capturing the target from the first viewpoint and a first color image obtained by capturing the target from the second viewpoint.
  • the invisible light image and the color image according to the present invention are both two-dimensional images.
  • the invisible light image is an image formed by acquiring the intensity of invisible light on the target.
  • The first viewpoint and the second viewpoint are located at different positions relative to the target, so as to obtain images of the target at two viewpoints.
  • The first viewpoint and the second viewpoint serve as the two viewpoints of the human eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body. For example, if the distance between the eyes of a typical human body is t, the distance between the first viewpoint and the second viewpoint is set to t, specifically 6.5 cm.
  • The first viewpoint and the second viewpoint are set at the same distance from the target, or their distance difference does not exceed a set threshold.
  • The threshold can be set to a value of no more than 10 cm or 20 cm.
  • As shown in FIG. 2, the invisible light image is obtained by projecting a structured light pattern onto the target 23 with the projection module 25 and capturing the target 23 with the invisible light image collector 21 disposed at the first viewpoint, and the first color image is obtained by capturing the target 23 with the color camera 22 disposed at the second viewpoint.
  • The invisible light image collector 21 and the color camera 22 transmit their acquired images to the three-dimensional image drawing device 24 to perform the following three-dimensional image generation. Since the positions of the color camera and the invisible light image collector differ, the spatial three-dimensional points corresponding to the same pixel coordinates in the first color image and the invisible light image are not the same.
  • The color camera 22, the invisible light image collector 21, and the projection module 25 are on the same straight line, so that the three are at the same depth from the target.
  • FIG. 2 is only one embodiment; in other applications, the three need not be on the same straight line.
  • the projection module 25 is generally composed of a laser and a diffractive optical element.
  • The laser may be an edge-emitting laser or a vertical-cavity laser and emits invisible light that can be collected by the invisible light image collector.
  • The diffractive optical element may be configured to provide functions such as collimation, beam splitting, and diffusion, according to the desired structured light pattern.
  • The structured light pattern may be an irregularly distributed speckle pattern, and the speckle power level needs to meet the requirement of being harmless to the human body; therefore, the laser power and the configuration of the diffractive optical element must be considered together.
  • The density of the speckle pattern affects the speed and accuracy of the depth value calculation.
  • The speckle particle density can also be determined by the three-dimensional image drawing device 24 according to its own calculation requirements, and the determined density information is sent to the projection module 25.
  • The projection module 25 projects, but is not limited to projecting, the speckle particle pattern to the target area at a certain diffusion angle.
  • the invisible light image collector 21 collects the invisible light image of the target.
  • the invisible light may be any invisible light.
  • The invisible light image collector 21 may be an infrared collector, such as an infrared camera, in which case the invisible light image is an infrared image; or the invisible light image collector 21 may be an ultraviolet collector, in which case the invisible light image is an ultraviolet image.
  • The color camera and the invisible light image collector can be set to acquire synchronously with the same number of acquired frames, so that the obtained color images and invisible light images correspond one to one, which facilitates subsequent calculation.
  • S12: Calculate the parallax between the first viewpoint and the second viewpoint from the invisible light image.
  • A matching algorithm of digital image processing, such as a digital image correlation (DIC) algorithm, is used to calculate the parallax between the image of the first viewpoint and the image of the second viewpoint, that is, the relative positional relationship between the pixel coordinates of the first-viewpoint image and those of the second-viewpoint image.
  • S13: Move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint. Specifically, the pixel coordinates of the first color image are shifted by the parallax value d corresponding to each pixel, so that the pixel value (also referred to as the RGB value) at the obtained pixel coordinates (u1 + d, v1) is the pixel value at the pixel coordinates (u1, v1) of the first color image.
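This pixel-shifting step can be sketched as a simple forward warp; the function name is illustrative, disoccluded pixels are left at zero here (the smoothing sub-step below addresses such holes):

```python
import numpy as np

def shift_color_image(first_color, disparity):
    """Forward-warp the first color image by the per-pixel parallax to
    synthesize the second color image: the RGB value at (u1, v1) is written
    to (u1 + d, v1). Unfilled positions remain zero (holes).

    first_color: (H, W, 3) array; disparity: (H, W) integer array."""
    h, w, _ = first_color.shape
    out = np.zeros_like(first_color)
    for v in range(h):
        for u in range(w):
            u2 = u + int(disparity[v, u])
            if 0 <= u2 < w:           # drop pixels shifted outside the frame
                out[v, u2] = first_color[v, u]
    return out
```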
  • S14: Form a three-dimensional image from the first color image and the second color image. The first color image and the second color image are respectively used as the human binocular images to synthesize a three-dimensional image, which may specifically be a three-dimensional image for 3D display in a top-bottom, left-right, or red-blue format. Further, after the three-dimensional image is synthesized, it may be displayed locally or output to a connected external display device for display.
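The packing into the mentioned 3D display formats can be sketched as follows; the function and mode names are illustrative, and the red-blue branch is a simple anaglyph approximation rather than a full color-matrix anaglyph:

```python
import numpy as np

def compose_stereo(left, right, mode="side_by_side"):
    """Pack a binocular pair into one frame for 3D display.

    left/right: (H, W, 3) uint8 arrays of equal shape.
    'side_by_side' / 'top_bottom' concatenate the two views; 'red_blue'
    builds a basic anaglyph: red channel from the left view, green and
    blue channels from the right view."""
    if mode == "side_by_side":
        return np.concatenate([left, right], axis=1)
    if mode == "top_bottom":
        return np.concatenate([left, right], axis=0)
    if mode == "red_blue":
        out = right.copy()
        out[..., 0] = left[..., 0]   # red channel taken from the left eye
        return out
    raise ValueError(f"unknown mode: {mode}")
```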
  • In this embodiment, the parallax between the first viewpoint and the second viewpoint is obtained from the acquired invisible light image of the first viewpoint, the second color image of the first viewpoint is obtained from the first color image of the second viewpoint and the parallax, and the first color image and the second color image are formed into a three-dimensional image. Since the parallax between the first viewpoint and the second viewpoint is obtained directly from the acquired image data without intermediate image processing, the loss of image detail information is reduced, a more accurate color image of the two viewpoints is obtained, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect generated from a two-dimensional image is improved.
  • This also avoids the intermediate depth computation required by conventional depth-image-based rendering (DIBR).
  • In another embodiment, the invisible light image is obtained by projecting a structured light pattern onto the target with the projection module and capturing the target with an invisible light image collector disposed at the first viewpoint, and the first color image is obtained by capturing the target with a color camera disposed at the second viewpoint.
  • the foregoing S12 includes the following sub-steps:
  • S121: Calculate the displacement between the invisible light image containing the structured light pattern and each pixel of the preset reference structured light image according to a matching algorithm of digital image processing.
  • the matching algorithm of the digital image processing is a digital image correlation algorithm.
  • The reference structured light image is obtained in advance by projecting the structured light pattern onto a plane at a set distance with the set projection module and capturing that plane with the set invisible light image collector.
  • Here, "set" means that the image collector and the projection module are not moved, after being set, when the invisible light image is subsequently acquired.
  • a digital image correlation algorithm is used to obtain a displacement value ⁇ u of each corresponding pixel between the invisible light image and the reference structured light pattern such as the reference speckle image.
  • the measurement accuracy of the digital image correlation algorithm can reach sub-pixel level, such as 1/8 pixel, that is, the value of ⁇ u will be a multiple of 1/8, and the unit is pixel.
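A simplified sketch of the displacement estimation: it uses integer block matching with a sum-of-absolute-differences cost instead of a full digital image correlation, and omits the sub-pixel (e.g. 1/8-pixel) refinement mentioned above. All names, window sizes, and the search range are illustrative:

```python
import numpy as np

def match_displacement(captured, reference, v, u, half=3, search=10):
    """Estimate the horizontal displacement delta_u of the speckle block
    centred at (v, u) between the captured invisible-light image and the
    reference structured-light image, by minimising the SAD cost along the
    epipolar row within +/- `search` pixels."""
    block = captured[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)
    best_du, best_cost = 0, np.inf
    for du in range(-search, search + 1):
        uu = u + du
        if uu - half < 0 or uu + half + 1 > reference.shape[1]:
            continue                  # candidate window falls off the image
        cand = reference[v - half:v + half + 1,
                         uu - half:uu + half + 1].astype(np.float64)
        cost = np.abs(block - cand).sum()
        if cost < best_cost:
            best_cost, best_du = cost, du
    return best_du
```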
  • S122: Calculate the parallax between the first viewpoint and the second viewpoint from the displacement. The displacement between the invisible light image and each pixel of the reference structured light image has a linear relationship with the parallax; therefore, the parallax can be calculated from the displacement and this linear relationship.
  • Specifically, the parallax d between the first viewpoint and the second viewpoint is calculated by Formula (1) below,

    d = (B2 / B1) · Δu + (B2 · f) / Z0    (1)

  where:
  • B1 is the distance between the invisible light image collector and the projection module;
  • B2 is the distance between the invisible light image collector and the color camera;
  • Z0 is the depth of the plane of the reference structured light image relative to the invisible light image collector;
  • f is the focal length of the invisible light image collector and the color camera;
  • Δu is the displacement between the invisible light image and the pixels of the preset reference structured light image.
  • The plane of the reference structured light image is the plane onto which the structured light pattern was projected when the reference image was captured, and Z0 denotes the distance between this plane and the image collector, which is recorded when the reference structured light image is captured in advance.
  • The unit of f is the pixel, and the value of f can be obtained by calibration in advance.
  • If the calculated value of the parallax d is not an integer, it may be rounded.
  • The foregoing S13 includes the following sub-steps:
  • S131: Establish a correspondence between the first pixel coordinates of the invisible light image and the second pixel coordinates of the first color image according to the parallax.
  • S132: Assign the pixel values (also referred to as RGB values) of the first color image to the invisible light image according to the correspondence to generate the second color image.
  • For example, the pixel coordinates (1, 1) of the invisible light image correspond to the pixel coordinates (2, 1) of the first color image; the pixel value at pixel coordinates (1, 1) of the invisible light image is then set to the pixel value (r, g, b) at pixel coordinates (2, 1) of the first color image.
  • S133: Perform smoothing and denoising processing on the second color image.
  • This sub-step denoises and smooths the second color image obtained above.
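One simple realisation of this smoothing/denoising sub-step is a per-channel median filter, which also fills small single-pixel holes left by the forward warp; this is illustrative only, and an actual implementation might use a different (e.g. edge-preserving) filter:

```python
import numpy as np

def median_smooth(img):
    """Apply a 3x3 median filter per channel to an (H, W, C) image."""
    h, w, c = img.shape
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Gather the 9 shifted copies of the image and take the per-pixel median.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)
```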
  • In other embodiments, the foregoing step S13 may include only the foregoing sub-steps S131 and S132.
  • S15: Calculate a depth image of the first viewpoint from the invisible light image, for example from the infrared image; the specific calculation may adopt an existing correlation algorithm.
  • S16: Calculate a third color image of the target at the first viewpoint according to the depth image of the first viewpoint and the first color image by using three-dimensional image transformation theory.
  • Any three-dimensional point in space and its two-dimensional projection on the image acquisition plane can be related through perspective transformation theory; therefore, the pixel coordinates of the first-viewpoint image and of the second-viewpoint image can be associated through this theory, and the pixel values of the corresponding pixel coordinates in the first color image of the second viewpoint are assigned to the image pixel coordinates of the first viewpoint according to the correspondence.
  • The S16 includes the following sub-steps: for each pixel of the depth image, compute the mapping of Formula (2) below,

    Z_R · p_r = M_g · (R · (Z_D · M_D^(-1) · p_d) + T)    (2)

  where Z_D is the depth information in the first depth image, indicating the depth of the target from the depth camera; Z_R is the depth of the target from the color camera; p_r denotes the homogeneous pixel coordinate in the image coordinate system of the color camera; p_d denotes the homogeneous pixel coordinate in the image coordinate system of the depth camera; M_g is the intrinsic matrix of the color camera; M_D is the intrinsic matrix of the depth camera; R is the rotation matrix in the extrinsic matrix of the depth camera relative to the color camera; and T is the translation matrix in the extrinsic matrix of the depth camera relative to the color camera.
  • The intrinsic matrices and the extrinsic matrix of the camera and the collector may be preset: the intrinsic matrices can be calculated from the configuration parameters of the camera and the collector, and the extrinsic matrix can be determined from the positional relationship between the invisible light image collector and the color camera.
  • The intrinsic matrix is formed by the pixel focal length of the image capture lens and the center coordinates of the image acquisition target surface. Since the positional relationship between the first viewpoint and the second viewpoint is set to the positional relationship of the human eyes, there is no relative rotation between the two eyes, only a separation of the set value t; therefore, the rotation matrix R of the color camera relative to the invisible light image collector is an identity matrix, and the translation matrix is T = [t, 0, 0]^T.
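The three-dimensional transformation described above (back-project a depth-camera pixel into 3-D space, apply R and T, then re-project through the color-camera intrinsics) can be sketched as follows; the symbols p_d and p_r and the sample intrinsic values in the test are reconstructed/illustrative labels, not values fixed by the disclosure:

```python
import numpy as np

def warp_depth_to_color(p_d, Z_D, M_D, M_g, R, T):
    """Map a homogeneous depth-image pixel p_d with depth Z_D into the color
    camera's image plane:  Z_R * p_r = M_g @ (R @ (Z_D * inv(M_D) @ p_d) + T).
    Returns (p_r, Z_R)."""
    X_d = Z_D * np.linalg.inv(M_D) @ p_d   # back-project to 3-D, depth frame
    rhs = M_g @ (R @ X_d + T)              # rotate/translate, re-project
    Z_R = rhs[2]
    return rhs / Z_R, Z_R
```

For instance, a pixel at the principal point of the depth camera maps to a point shifted horizontally by f·t/Z_D pixels in the color image when R is the identity and T = [t, 0, 0]^T.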
  • Further, the set value t can be adjusted according to the distance between the target and the invisible light image collector and the color camera.
  • Specifically, the method further includes: acquiring the distance between the target and the invisible light image collector and the color camera; when the distance is determined to be greater than a first distance value, increasing the set value t; and when the distance is determined to be less than a second distance value, decreasing the set value t.
  • The first distance value is greater than or equal to the second distance value.
  • For example, if the distance between the target and the invisible light image collector is 100 cm, and the distance between the target and the color camera is also 100 cm, which is smaller than the second distance value of 200 cm, the set value is reduced by one step value, or is recalculated from the current distance between the target and the invisible light image collector and the color camera. If the distance between the target and the invisible light image collector and the color camera is 300 cm, since 300 cm is greater than the second distance value of 200 cm and smaller than the first distance value of 500 cm, the set value is not adjusted.
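The threshold logic above can be sketched as follows; the default thresholds of 500 cm and 200 cm come from the example, while the step value of 0.5 cm is an assumed placeholder:

```python
def adjust_set_value(t, distance, first_distance=500.0, second_distance=200.0,
                     step=0.5):
    """Adjust the eye-separation set value t (cm) from the target distance (cm):
    increase t beyond the first distance value, decrease it below the second,
    and leave it unchanged in between."""
    if distance > first_distance:
        return t + step
    if distance < second_distance:
        return max(t - step, 0.0)   # never let the separation go negative
    return t
```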
  • Substituting the depth information Z_D of the invisible light image of the first viewpoint into the above Formula (2) yields, on its left side, the depth information of the second viewpoint, that is, the depth information Z_R of the first color image, together with the homogeneous pixel coordinates in the image coordinate system of the first color image. When the invisible light image collector and the color camera are at the same distance from the target, the obtained Z_R and Z_D are equal.
  • In this embodiment, the foregoing S14 includes the following sub-steps:
  • S141: Average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint.
  • For example, suppose the pixel values at pixel coordinates (Ur, Vr) in the second color image and the third color image are (r1, g1, b1) and (r2, g2, b2), respectively; the pixel value at pixel coordinates (Ur, Vr) in the fourth color image of the first viewpoint is then set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2) for the plain average, or to a weighted combination of the two values.
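The (weighted) averaging of sub-step S141 can be sketched as follows; the equal default weights reproduce the plain average of the example, and the function name is illustrative:

```python
import numpy as np

def fuse_color_images(second, third, w2=0.5, w3=0.5):
    """Fuse the second and third color images into the fourth color image by
    weighted-averaging corresponding pixel values, normalised by the weight
    sum and rounded back to uint8."""
    acc = w2 * second.astype(np.float64) + w3 * third.astype(np.float64)
    return np.clip(np.rint(acc / (w2 + w3)), 0, 255).astype(np.uint8)
```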
  • S142 Form a three-dimensional image from the first color image and the fourth color image.
  • the first color image and the fourth color image are respectively used as a human binocular image to synthesize a three-dimensional image.
  • In the above embodiments, the image acquisition target surfaces of the invisible light image collector and the color camera may be set to be equal in size, with the same resolution and the same focal length.
  • In other embodiments, the color camera and the invisible light image collector may have different image acquisition target surface sizes, resolutions, and focal lengths; for example, the color camera may have a larger target surface size and resolution than the invisible light image collector.
  • Here, "the image acquisition target surfaces of the invisible light image collector and the color camera are equal in size, with the same resolution and the same focal length" means that the image acquisition target surface sizes, resolutions, and focal lengths of the two devices are the same within a tolerance range.
  • The above image includes a photo or a video. When the images are videos, the acquisition frequencies of the invisible light image collector and the color camera are synchronized; or, if the acquisition frequencies of the invisible light image collector and the color camera are not synchronized, video images of a consistent frequency are obtained by image interpolation.
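One way to obtain video images of a consistent frequency by interpolation, as described above, is linear blending between the two temporally nearest frames; the function name and timestamp handling are illustrative assumptions:

```python
import numpy as np

def resample_video(frames, timestamps, target_times):
    """Resample a video stream to a common set of timestamps by linear
    interpolation between the two nearest frames.

    frames: list of equally shaped arrays; timestamps: sorted 1-D array
    (seconds), one per frame; target_times: times to sample at."""
    out = []
    for t in target_times:
        i = int(np.searchsorted(timestamps, t, side="right")) - 1
        i = max(0, min(i, len(frames) - 2))          # clamp to a valid pair
        t0, t1 = timestamps[i], timestamps[i + 1]
        a = 0.0 if t1 == t0 else float(np.clip((t - t0) / (t1 - t0), 0.0, 1.0))
        f = (1 - a) * frames[i].astype(np.float64) \
            + a * frames[i + 1].astype(np.float64)
        out.append(f.astype(frames[0].dtype))
    return out
```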
  • FIG. 6 is a schematic structural diagram of an embodiment of a three-dimensional image drawing apparatus according to the present invention.
  • The drawing device 60 includes an acquiring module 61, a calculating module 62, a forming module 63, and an obtaining module 64. Among them:
  • the acquiring module 61 is configured to acquire an invisible light image obtained by capturing the target from the first viewpoint and a first color image obtained by capturing the target from the second viewpoint;
  • the calculating module 62 is configured to calculate a disparity between the first view point and the second view point from the invisible light image
  • the obtaining module 64 is configured to move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first view point;
  • the forming module 63 is configured to form a three-dimensional image from the first color image and the second color image.
  • The invisible light image is obtained by projecting a structured light pattern onto the target with a projection module and capturing the target with an invisible light image collector disposed at the first viewpoint; the first color image is obtained by capturing the target with the color camera disposed at the second viewpoint.
  • The calculating module 62 is specifically configured to calculate, according to a matching algorithm of digital image processing, the displacement between the invisible light image containing the structured light pattern and each pixel of the preset reference structured light image, and to calculate the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
  • The calculating module 62 calculates the parallax between the first viewpoint and the second viewpoint from the displacement by calculating the parallax d between the first viewpoint and the second viewpoint with the above Formula (1).
  • The obtaining module 64 is specifically configured to establish, according to the parallax d, a correspondence between the first pixel coordinates I_ir(u_ir, v_ir) of the invisible light image and the second pixel coordinates I_r(u_r, v_r) of the first color image, and to assign the pixel values of the first color image to the invisible light image according to the correspondence to generate the second color image.
  • The calculating module 62 is further configured to calculate a depth image of the first viewpoint from the invisible light image, and to calculate, by three-dimensional image transformation theory, a third color image of the target at the first viewpoint according to the depth image of the first viewpoint and the first color image.
  • The forming module 63 is configured to average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint, and to form a three-dimensional image from the first color image and the fourth color image.
  • the positional relationship between the first viewpoint and the second viewpoint is a positional relationship between the eyes of the human body; the color camera and the invisible image collector and the projection module are on the same straight line;
  • The invisible light image is an infrared image, and the invisible light image collector is an infrared camera.
  • the color camera and the invisible image collector have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
  • The invisible light image and the first color image are photos or videos. When the invisible light image and the first color image are videos, the acquisition frequencies of the invisible light image collector and the color camera are synchronized; or, if the acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
  • FIG. 7 is a schematic structural diagram of an embodiment of a three-dimensional image rendering system according to the present invention.
  • the system 70 includes a projection module 74, an invisible image collector 71, a color camera 72, and an image processing device 73 connected to the invisible image collector 71 and the color camera 72.
  • the image processing device 73 includes an input interface 731, a processor 732, and a memory 733. Further, the image processing device 73 can also be connected to the projection module 74.
  • the input interface 731 is used to obtain images acquired by the invisible image collector 71 and the color camera 72.
  • the memory 733 is used to store a computer program and provide it to the processor 732, and can store data used by the processor 732 for processing, such as the intrinsic and extrinsic parameter matrices of the invisible light image collector 71 and the color camera 72 and the images obtained through the input interface 731.
  • the processor 732 is configured to: acquire, through the input interface 731, the invisible light image obtained by capturing the target from the first viewpoint and the first color image obtained by capturing the target from the second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible light image; move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
  • the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image to implement three-dimensional display.
  • when the image processing device 73 is not used to display the three-dimensional image, the three-dimensional image rendering system 70 further includes a display device 75 connected to the image processing device 73 for receiving the three-dimensional image output by the image processing device 73 and displaying it.
  • the processor 732 is specifically configured to calculate, according to a matching algorithm of digital image processing, the displacement between each pixel of the invisible light image containing the structured light pattern and the corresponding pixel of the preset reference structured light image;
  • and to calculate the parallax between the first viewpoint and the second viewpoint from the displacement, where the displacement has a linear relationship with the parallax.
  • the processor 732 calculates the parallax between the first viewpoint and the second viewpoint from the displacement by calculating the parallax d between the first viewpoint and the second viewpoint using Equation 1 below.
  • the processor 732 moves the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint by establishing, according to the disparity d, the correspondence between the first pixel coordinate Iir(uir, vir) of the invisible light image and the second pixel coordinate Ir(ur, vr) of the first color image as Iir(uir, vir) = Ir(ur + d, vr).
  • the processor 732 is further configured to calculate a depth image of the first viewpoint from the invisible light image, and to calculate, using three-dimensional image transformation theory, a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
  • the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body; the color camera 72, the invisible light image collector 71, and the projection module 74 are on the same straight line; the invisible light image is an infrared image, and the invisible light image collector 71 is an infrared camera.
  • the color camera 72 and the invisible light image collector 71 have image acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
  • the invisible light image and the first color image are photos or videos; when they are videos, the acquisition frequencies of the invisible light image collector and the color camera are synchronized, or, if their acquisition frequencies are not synchronized, video images of a consistent frequency are obtained by image interpolation.
  • the image processing device 73 can be used as the above-described three-dimensional image drawing device for executing the method described in the above embodiments.
  • the method disclosed in the above embodiments of the present invention may also be applied to the processor 732 or implemented by the processor 732.
  • Processor 732 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by hardware integrated logic circuits in the processor 732 or by instructions in the form of software.
  • the processor 732 described above may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor, or it may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 733, and the processor 732 reads the information in the corresponding memory and completes the steps of the above method in combination with the hardware thereof.
  • the parallax between the first viewpoint and the second viewpoint is obtained from the captured invisible light image of the first viewpoint; the second color image of the first viewpoint is obtained from the first color image of the second viewpoint and the parallax; and a three-dimensional image is then formed from the first color image and the second color image. Because the parallax between the first viewpoint and the second viewpoint is obtained from the captured image data without intermediate image processing, the loss of image detail information is reduced and the color images of the two viewpoints are obtained more accurately, which reduces the distortion of the synthesized three-dimensional image and improves the three-dimensional display effect generated from two-dimensional images.
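For illustration only (not part of the patent text), the frequency-synchronization bullets above call for image interpolation when the two streams are captured at different frame rates. A minimal sketch, assuming simple linear blending between the two temporally nearest frames (the patent does not prescribe a particular interpolation method):

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t_a, t_b, t):
    """Estimate a frame at time t by linearly blending the two
    temporally nearest frames, captured at times t_a and t_b
    (t_a <= t <= t_b)."""
    if t_b == t_a:
        return frame_a.copy()
    alpha = (t - t_a) / (t_b - t_a)
    return ((1.0 - alpha) * frame_a + alpha * frame_b).astype(frame_a.dtype)
```

Resampling one stream this way at the other stream's timestamps yields video images of a consistent frequency, so each color frame has a matching invisible-light frame.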

Abstract

Disclosed are a method, device, and system for drawing a three-dimensional image. The method comprises: separately obtaining an invisible light image obtained by capturing a target from a first viewpoint and a first color image obtained by capturing the target from a second viewpoint; calculating a parallax between the first viewpoint and the second viewpoint by using the invisible light image; moving pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forming a three-dimensional image by using the first color image and the second color image. The method can improve the three-dimensional display effect.

Description

Method for drawing a three-dimensional image, and device and system thereof

[Technical Field]
The present invention relates to the field of three-dimensional display technology, and in particular to a method for drawing a three-dimensional image and a device and system thereof.
[Background Art]
Because the two human eyes are at different positions, they perceive an object at a certain distance with a visual difference, and it is this parallax that gives people a three-dimensional sensory effect. Based on this principle, three-dimensional display technology produces a three-dimensional effect by having each eye receive the corresponding one of a pair of simultaneously acquired images. Since this technology has brought people a new stereoscopic viewing experience, the demand for three-dimensional image resources has grown in recent years.
One current method of obtaining a three-dimensional image is to convert a two-dimensional image into a three-dimensional image by image processing: the scene depth information of an existing two-dimensional image is calculated by image processing techniques, a virtual image of another viewpoint is then rendered, and the three-dimensional image is formed from the existing two-dimensional image and the virtual other-viewpoint image.
Because the depth information of the existing two-dimensional image used to render the other-viewpoint image is obtained by calculation, this process loses image detail information and degrades the three-dimensional display effect.
[Summary of the Invention]
The technical problem mainly solved by the present invention is to provide a method, device, and system for drawing a three-dimensional image that can improve the three-dimensional display effect.
To solve the above technical problem, one technical solution adopted by the present invention is to provide a method for drawing a three-dimensional image, comprising: separately acquiring an invisible light image obtained by capturing a target from a first viewpoint and a first color image obtained by capturing the target from a second viewpoint; calculating a parallax between the first viewpoint and the second viewpoint from the invisible light image; moving pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
The invisible light image is obtained by an invisible light image collector disposed at the first viewpoint capturing the target while a projection module projects a structured light pattern onto the target, and the first color image is obtained by a color camera disposed at the second viewpoint capturing the target.
Calculating the parallax between the first viewpoint and the second viewpoint from the invisible light image comprises: calculating, according to a matching algorithm of digital image processing, the displacement between each pixel of the invisible light image containing the structured light pattern and the corresponding pixel of a preset reference structured light image; and calculating the parallax between the first viewpoint and the second viewpoint from the displacement, the displacement having a linear relationship with the parallax.
Calculating the parallax between the first viewpoint and the second viewpoint from the displacement comprises calculating the parallax d between the first viewpoint and the second viewpoint by the following Equation 1,
(Equation 1; rendered as image PCTCN2017085147-appb-000001 in the original)
where B1 is the distance between the invisible light image collector and the projection module, B2 is the distance between the invisible light image collector and the color camera, Z0 is the depth of the plane of the reference structured light image relative to the invisible light image collector, f is the image-plane focal length of the invisible light image collector and the color camera, and Δu is the displacement between the invisible light image and the corresponding pixels of the preset reference structured light image.
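Equation 1 itself appears only as an image in the source, so its exact form cannot be quoted here; the text does state that the disparity d is linear in the displacement Δu and lists B1, B2, Z0, and f as its parameters. Under the usual structured-light triangulation model (our own reconstruction, not the patent's printed equation), Δu = f·B1·(1/Z − 1/Z0) and d = f·B2/Z, which gives d = (B2/B1)·Δu + f·B2/Z0:

```python
def disparity_from_displacement(delta_u, b1, b2, f, z0):
    """Map the speckle displacement delta_u (pixels) to the viewpoint
    disparity d (pixels); linear in delta_u, as the text requires.
    b1: projector-to-collector baseline, b2: collector-to-color-camera
    baseline, f: focal length in pixels, z0: reference-plane depth.
    The closed form is a triangulation-based reconstruction, since
    Equation 1 is only available as an image in the source."""
    return (b2 / b1) * delta_u + (b2 * f) / z0
```

Note the slope B2/B1 and the constant offset f·B2/Z0: a zero displacement (point on the reference plane) still produces the fixed disparity of the reference plane itself.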
Moving the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint comprises: establishing, according to the parallax d, the correspondence between a first pixel coordinate Iir(uir, vir) of the invisible light image and a second pixel coordinate Ir(ur, vr) of the first color image as Iir(uir, vir) = Ir(ur + d, vr); setting the pixel value at the first pixel coordinate of the invisible light image to the pixel value at the second pixel coordinate of the first color image that corresponds to the first pixel coordinate, to form the second color image of the target at the first viewpoint; and smoothing and denoising the second color image.
The method further comprises: calculating a depth image of the first viewpoint from the invisible light image; and calculating, using three-dimensional image transformation theory, a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
Forming the three-dimensional image from the first color image and the second color image comprises: averaging, or weighted-averaging, the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and forming the three-dimensional image from the first color image and the fourth color image.
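The averaging step can be sketched directly. A minimal illustration (the weights below are our own example; the patent only requires an average or weighted average of corresponding pixels):

```python
import numpy as np

def fuse_views(second, third, w2=0.5, w3=0.5):
    """Fourth color image: per-pixel weighted average of the second
    color image (parallax-shifted) and the third color image
    (rendered from the depth image). w2 = w3 = 0.5 reduces to the
    plain average."""
    assert second.shape == third.shape and abs(w2 + w3 - 1.0) < 1e-9
    return (w2 * second.astype(np.float64)
            + w3 * third.astype(np.float64)).astype(second.dtype)
```

Unequal weights would let an implementation favor whichever synthesis (parallax shift or depth-based rendering) it trusts more for the scene at hand.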
The positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body; the color camera, the invisible light image collector, and the projection module are on the same straight line; the invisible light image is an infrared image, and the invisible light image collector is an infrared camera.
The color camera and the invisible light image collector have image acquisition target surfaces of equal size, the same resolution and focal length, and mutually parallel optical axes.
To solve the above technical problem, another technical solution adopted by the present invention is to provide an image processing device comprising an input interface, a processor, and a memory. The input interface is used to obtain images captured by an invisible light image collector and a color camera; the memory is used to store a computer program; and the processor executes the computer program to: acquire, through the input interface, an invisible light image obtained by the invisible light image collector capturing a target from a first viewpoint and a first color image obtained by the color camera capturing the target from a second viewpoint; calculate a parallax between the first viewpoint and the second viewpoint from the invisible light image; move pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a three-dimensional image rendering system comprising a projection module, an invisible light image collector, a color camera, and an image processing device connected to the invisible light image collector and the color camera. The image processing device is used to: separately acquire an invisible light image obtained by the invisible light image collector capturing a target from a first viewpoint and a first color image obtained by the color camera capturing the target from a second viewpoint; calculate a parallax between the first viewpoint and the second viewpoint from the invisible light image; move pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
The present invention obtains the parallax between the first viewpoint and the second viewpoint from the captured invisible light image of the first viewpoint, obtains the second color image of the first viewpoint from the first color image of the second viewpoint and the parallax, and then forms a three-dimensional image from the first color image and the second color image. Because the parallax between the first viewpoint and the second viewpoint is obtained from the captured image data without intermediate image processing, the loss of image detail information is reduced and the color images of the two viewpoints are obtained more accurately, which reduces the distortion of the synthesized three-dimensional image and improves the three-dimensional display effect generated from two-dimensional images. Moreover, compared with existing DIBR technology, this solution does not need to calculate the depth information of the image, avoiding errors introduced by repeated calculations and further improving the three-dimensional display effect.
[Description of the Drawings]
FIG. 1 is a flowchart of an embodiment of a method for drawing a three-dimensional image according to the present invention;
FIG. 2 is a schematic diagram of an application scenario of the method for drawing a three-dimensional image according to the present invention;
FIG. 3 is a partial flowchart of another embodiment of the method for drawing a three-dimensional image according to the present invention;
FIG. 4 is a partial flowchart of still another embodiment of the method for drawing a three-dimensional image according to the present invention;
FIG. 5 is a flowchart of yet another embodiment of the method for drawing a three-dimensional image according to the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a three-dimensional image drawing apparatus according to the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of a three-dimensional image rendering system according to the present invention;
FIG. 8 is a schematic structural diagram of another embodiment of the three-dimensional image rendering system according to the present invention.
[Detailed Description]
For a better understanding of the technical solutions of the present invention, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms used in the embodiments of the present invention are for the purpose of describing particular embodiments only and are not intended to limit the invention. The singular forms "a", "the", and "said" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Please refer to FIG. 1, which is a flowchart of an embodiment of the method for drawing a three-dimensional image according to the present invention. In this embodiment, the method may be performed by a three-dimensional image drawing device and includes the following steps:
S11: Separately acquire an invisible light image obtained by capturing a target from a first viewpoint and a first color image obtained by capturing the target from a second viewpoint.
It should be noted that the invisible light image and the color image described in the present invention are both two-dimensional images. The invisible light image is an image formed by capturing the intensity of invisible light on the target.
The first viewpoint and the second viewpoint are located at different positions relative to the target so as to obtain images of the target at two viewpoints. Generally, since the three-dimensional impression is formed by superimposing the different images seen by the two eyes, the first viewpoint and the second viewpoint serve as the two viewpoints of the human eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body. For example, if the distance between the eyes of a typical human body is t, the distance between the first viewpoint and the second viewpoint is set to t, for example 6.5 cm. Moreover, to ensure that the image depths of the first viewpoint and the second viewpoint are the same or similar, the first viewpoint and the second viewpoint are set at the same distance from the target, or at distances differing by no more than a set threshold; in specific applications, the threshold may be set to a value of no more than 10 cm or 20 cm.
In a specific application, as shown in FIG. 2, the invisible light image is obtained by the invisible light image collector 21 disposed at the first viewpoint capturing the target 23 while the projection module 25 projects a structured light pattern onto the target 23, and the first color image is obtained by the color camera 22 disposed at the second viewpoint capturing the target 23. The invisible light image collector 21 and the color camera transmit their captured images to the three-dimensional image drawing device 24 for the acquisition of the three-dimensional image described below. Because the color camera and the invisible light image collector are at different positions, the same pixel coordinates in the first color image and in the invisible light image do not correspond to the same spatial three-dimensional point. In FIG. 2, the color camera 22, the invisible light image collector 21, and the projection module 25 are on the same straight line, so that they are at the same depth relative to the target. Of course, FIG. 2 is only one embodiment; in other applications, the three need not be on the same straight line.
Specifically, the projection module 25 generally consists of a laser and a diffractive optical element. The laser may be an edge-emitting laser or a vertical-cavity surface-emitting laser, and emits invisible light that can be captured by the invisible light image collector. The diffractive optical element may be configured with collimation, beam-splitting, diffusion, or other functions as required by different structured light patterns. The structured light pattern may be an irregularly distributed speckle pattern; the energy level at the speckle centers must meet the requirement of being harmless to the human body, so the laser power and the configuration of the diffractive optical element must be considered together.
The density of the speckle pattern affects the speed and accuracy of the depth value calculation: the more speckle particles, the slower the calculation but the higher the accuracy. Therefore, the projection module 25 can select a suitable speckle particle density according to the approximate depth of the target area of the captured image, maintaining high calculation accuracy while ensuring calculation speed. Of course, the speckle particle density may also be determined by the three-dimensional image drawing device 24 according to its own calculation requirements, with the determined density information sent to the projection module 25.
The projection module 25 projects the speckle particle pattern onto the target area, for example but not limited to, at a certain diffusion angle.
After the projection module 25 projects the structured light pattern onto the target, the invisible light image collector 21 captures the invisible light image of the target. Specifically, the invisible light may be any invisible light; for example, the invisible light image collector 21 may be an infrared collector such as an infrared camera, in which case the invisible light image is an infrared image, or an ultraviolet collector such as an ultraviolet camera, in which case the invisible light image is an ultraviolet image.
To achieve good capture results and avoid unnecessary subsequent calculations, the color camera and the invisible light image collector can be set to capture synchronously with the same number of frames, so that the resulting color images and invisible light images have a guaranteed one-to-one correspondence, which facilitates subsequent calculations.
S12: Calculate the parallax between the first viewpoint and the second viewpoint from the invisible light image.
For example, a matching algorithm of digital image processing, such as a digital image correlation (DIC) algorithm, is used to calculate the parallax between the image of the first viewpoint and the image of the second viewpoint, that is, the relative positional relationship between the pixel coordinates of the image of the first viewpoint and those of the image of the second viewpoint.
S13: Move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint.
For example, the pixel coordinates of the first color image are each moved by the image disparity value d corresponding to that pixel, where the pixel value (also called the RGB value) at the moved pixel coordinate (u1 + d, v1) is the pixel value at the pixel coordinate (u1, v1) of the first color image.
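This shift can be sketched directly. A minimal illustration (integer per-pixel disparities; a real implementation would also need hole filling and the smoothing and denoising mentioned elsewhere in the text, which are omitted here):

```python
import numpy as np

def shift_by_disparity(color, disparity):
    """Forward-warp the second-viewpoint color image: the pixel at
    (u, v) is moved to (u + d, v), where d is that pixel's disparity
    value. Unfilled destination positions remain 0 (holes)."""
    h, w = color.shape[:2]
    out = np.zeros_like(color)
    for v in range(h):
        for u in range(w):
            dest = u + int(round(disparity[v, u]))
            if 0 <= dest < w:
                out[v, dest] = color[v, u]
    return out
```

Pixels shifted outside the image are simply dropped, which is why the patent's method follows this step with smoothing and denoising of the resulting second color image.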
S14: Form a three-dimensional image from the first color image and the second color image.
For example, the first color image and the second color image are used as the two eye images of a human body to synthesize a three-dimensional image, which may specifically be a three-dimensional image for 3D display in a top-bottom, left-right, or red-blue format. Further, after the three-dimensional image is synthesized, it may be displayed, or output to a connected external display device for display.
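For the two packed formats named above, the composition is a simple concatenation of the two viewpoint images. A minimal sketch (function name is ours; the red-blue anaglyph case is omitted):

```python
import numpy as np

def pack_stereo(left, right, layout="left-right"):
    """Combine the two viewpoint images into a single 3D frame in the
    left-right or top-bottom format."""
    if layout == "left-right":
        return np.hstack([left, right])
    if layout == "top-bottom":
        return np.vstack([left, right])
    raise ValueError("unknown layout: " + layout)
```

A 3D-capable display then splits the packed frame back into the two eye images.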
In this embodiment, the parallax between the first viewpoint and the second viewpoint is obtained from the captured invisible light image of the first viewpoint, the second color image of the first viewpoint is obtained from the first color image of the second viewpoint and the parallax, and a three-dimensional image is then formed from the first color image and the second color image. Because the parallax between the first viewpoint and the second viewpoint is obtained from the captured image data without intermediate image processing, the loss of image detail information is reduced and the color images of the two viewpoints are obtained more accurately, which reduces the distortion of the synthesized three-dimensional image and improves the three-dimensional display effect generated from two-dimensional images. Moreover, compared with existing depth-image-based rendering (DIBR) technology, this embodiment does not need to calculate the depth information of the image, avoiding errors introduced by repeated calculations and further improving the three-dimensional display effect.
Please refer to FIG. 3. In another embodiment, the invisible light image is obtained by the invisible light image collector disposed at the first viewpoint capturing the target while the projection module projects a structured light pattern onto the target, and the first color image is obtained by the color camera disposed at the second viewpoint capturing the target. This embodiment differs from the above embodiment in that S12 includes the following sub-steps:
S121: Calculate, according to a matching algorithm of digital image processing, the displacement between each pixel of the invisible light image containing the structured light pattern and the corresponding pixel of a preset reference structured light image.
The matching algorithm of digital image processing is, for example, a digital image correlation algorithm. The reference structured light pattern is obtained in advance by using the already-installed projection module to project the reference structured light pattern onto a plane at a set distance, and using the already-installed invisible light image collector to capture the reference structured light pattern on that plane. "Already installed" should be understood to mean that, once installed, the image collector and the projection module are not moved during the subsequent capture of the invisible light image.
For example, a digital image correlation algorithm is used to obtain the displacement value Δu of each corresponding pixel pair between the invisible-light image and the reference structured-light image, such as a reference speckle image. Current digital image correlation algorithms can reach sub-pixel accuracy, for example 1/8 pixel, meaning that Δu takes values in multiples of 1/8, in units of pixels.
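As an illustration of how such a correlation search might be implemented, the following sketch matches a window of the invisible-light image against the reference image along the horizontal direction. It is a minimal integer-pixel version written for this description (the function name and search range are our own, not from the patent); a production digital image correlation algorithm would additionally refine the correlation peak to sub-pixel precision, e.g. 1/8 pixel, by interpolation.

```python
import numpy as np

def match_displacement(ir_img, ref_img, y, x, win=7, search=32):
    """Estimate the horizontal displacement (in whole pixels) of the
    window centered at (y, x) between the invisible-light image and the
    reference structured-light image, by maximizing the zero-mean
    cross-correlation score over candidate shifts."""
    h = win // 2
    patch = ir_img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    patch -= patch.mean()
    best_score, best_du = -np.inf, 0
    for du in range(-search, search + 1):
        cand = ref_img[y - h:y + h + 1, x + du - h:x + du + h + 1].astype(float)
        cand -= cand.mean()
        score = (patch * cand).sum()
        if score > best_score:
            best_score, best_du = score, du
    return best_du
```

Repeating this for every pixel yields the per-pixel displacement field Δu used in S122.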
S122: Calculate the parallax between the first viewpoint and the second viewpoint from the displacement.
The displacement between each pixel of the invisible-light image and the reference structured-light image has a linear relationship with the parallax. The parallax between the first viewpoint and the second viewpoint can therefore be computed from the displacement using this linear relationship.
For example, the parallax d between the first viewpoint and the second viewpoint is calculated by Formula 11 below:

d = (B2/B1)·Δu + (f·B2)/Z0    (Formula 11)

where B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length shared by the invisible-light image collector and the color camera; and Δu is the displacement between the invisible-light image and the preset reference structured-light image at each pixel. The plane of the reference structured-light image is the plane onto which the reference pattern was previously projected, and Z0 represents the distance from that plane to the image collector, known from the distance information recorded when the reference image was captured. In this embodiment, f is expressed in pixels, and its value is obtained by calibration in advance.
When the calculated parallax d is not an integer, it may be rounded to the nearest integer or truncated.
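The linear mapping from displacement to disparity described above can be sketched as follows. The coefficient form below follows the standard structured-light geometry implied by the definitions of B1, B2, Z0 and f; treat it as a reconstruction, and note that all numeric values in the test are illustrative rather than taken from the patent.

```python
def disparity_from_displacement(du, B1, B2, Z0, f):
    """Map the displacement du (pixels) against the reference
    structured-light image to the disparity d (pixels) between the
    first and second viewpoints, via d = (B2/B1)*du + f*B2/Z0.
    B1, B2 and Z0 share one length unit; f is in pixels."""
    d = (B2 / B1) * du + (f * B2) / Z0
    # As noted in the text, a non-integer d may be rounded.
    return round(d)
```

With du = 0 (target on the reference plane) the disparity reduces to the constant term f·B2/Z0, which is the ordinary stereo disparity of a point at depth Z0.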
Referring to FIG. 4, in still another embodiment, the difference from the above embodiment is that the foregoing S13 includes the following sub-steps:
S131: Establish, according to the parallax, a correspondence between the first pixel coordinates of the invisible-light image and the second pixel coordinates of the first color image.
For example, according to the parallax d, the correspondence between the first pixel coordinate Iir(uir, vir) of the invisible-light image and the second pixel coordinate Ir(ur, vr) of the first color image is established as: Iir(uir, vir) = Ir(ur + d, vr).
S132: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the corresponding second pixel coordinate of the first color image, to form the second color image of the target at the first viewpoint.
例如,根据对应关系,将第一彩色图像的像素值(也可称为RGB值)赋值于不可见光图像,以生成第二彩色图像。以图像的其中一个像素坐标举例,若d为1,则不可见光图像的像素坐标(1,1)与第一彩色图像的像素坐标(2,1)对应。然后,将不可见光图像的像素坐标(1,1)的像素值设置为第一彩色图像中像素坐标(2,1)的像素值(r,g,b)。For example, a pixel value (also referred to as an RGB value) of the first color image is assigned to the invisible light image according to the correspondence relationship to generate a second color image. Taking one of the pixel coordinates of the image as an example, if d is 1, the pixel coordinates (1, 1) of the invisible light image correspond to the pixel coordinates (2, 1) of the first color image. Then, the pixel value of the pixel coordinate (1, 1) of the invisible light image is set as the pixel value (r, g, b) of the pixel coordinate (2, 1) in the first color image.
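Sub-steps S131 and S132 amount to a per-pixel gather: each pixel of the second color image takes its value from the first color image at a horizontally shifted coordinate. A minimal sketch follows; the array layout and the border clamping are our own choices, since the patent does not specify how out-of-range coordinates are handled.

```python
import numpy as np

def synthesize_second_view(first_color, disparity):
    """For each pixel (v, u) on the invisible-light image grid, copy the
    value from (v, u + d) of the first color image, producing the second
    color image at the first viewpoint.  `disparity` may be a scalar or
    a per-pixel integer array of shape (h, w)."""
    h, w, _ = first_color.shape
    d = np.broadcast_to(np.asarray(disparity), (h, w))
    u = np.arange(w)[None, :] + d        # u_r = u_ir + d
    u = np.clip(u, 0, w - 1)             # clamp at the image border
    v = np.broadcast_to(np.arange(h)[:, None], (h, w))
    return first_color[v, u]
```

Passing a per-pixel disparity array supports the case where d varies across the image, as it does for a non-planar target.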
S133:对所述第二彩色图像进行平滑、去噪处理。S133: Perform smoothing and denoising processing on the second color image.
Because the displacement values Δu often contain bad points, holes and similar defects appear in the resulting color image, and these defects would be amplified by further processing in later steps, seriously degrading the three-dimensional display. To avoid the influence of such bad points or bad regions on the three-dimensional display, this sub-step denoises and smooths the obtained second color image.
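One simple realization of the smoothing and denoising of S133 is a small median filter, which suppresses the isolated bad pixels (holes) that faulty Δu values produce while preserving edges. This is only one plausible choice; the patent does not name a specific filter.

```python
import numpy as np

def median_smooth(img, k=3):
    """Apply a k x k median filter per channel to a color image of
    shape (h, w, c), using edge padding so the output size matches."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    # Stack the k*k shifted views and take the per-pixel median.
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)
```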
Of course, in other embodiments, step S13 may include only sub-steps S131 and S132.
请参阅图5,在又再一实施例中,在上述S11之后,还包括以下步骤:Referring to FIG. 5, in still another embodiment, after the foregoing S11, the following steps are further included:
S15:利用所述不可见光图像计算得到所述第一视点的深度图像。S15: Calculate a depth image of the first viewpoint by using the invisible light image.
For example, the depth image of the first viewpoint is calculated from the infrared image; any existing algorithm for this purpose may be used.
S16:利用三维图像变换理论,根据所述第一视点的深度图像和所述第一彩色图像计算得到所述目标在第一视点的第三彩色图像。S16: Calculate a third color image of the target at the first viewpoint according to the depth image of the first viewpoint and the first color image by using a three-dimensional image transformation theory.
According to the theory of 3D image warping, any three-dimensional point in space corresponds, through perspective transformation, to a two-dimensional point on an image acquisition plane. This theory therefore relates the pixel coordinates of the images at the first viewpoint and the second viewpoint, and, according to this correspondence and the pixel values of the first color image at the second viewpoint, the pixel at each image coordinate of the first viewpoint is assigned the value of the corresponding pixel of the first color image at the second viewpoint.
例如,该S16包括以下子步骤:For example, the S16 includes the following substeps:
a: Calculate the correspondence between the first pixel coordinates (uD, vD) of the depth image of the first viewpoint and the second pixel coordinates (uR, vR) of the first color image using Formula 12 below:

ZR·[uR, vR, 1]^T = Mg·R·MD^(-1)·ZD·[uD, vD, 1]^T + Mg·T    (Formula 12)

where ZD is the depth information in the first depth image, i.e. the depth of the target relative to the depth camera; ZR is the depth of the target relative to the color camera; [uR, vR, 1]^T is the homogeneous pixel coordinate in the image coordinate system of the color camera; [uD, vD, 1]^T is the homogeneous pixel coordinate in the image coordinate system of the depth camera; Mg is the intrinsic matrix of the color camera and MD is the intrinsic matrix of the depth camera; R is the rotation matrix and T the translation matrix of the extrinsic parameters of the depth camera relative to the color camera.
The intrinsic and extrinsic matrices of the camera and the collector may be preset. Specifically, the intrinsic matrices can be computed from the configuration parameters of the camera and the collector, and the extrinsic matrix is determined by the positional relationship between the invisible-light image collector and the color camera. In a specific embodiment, the intrinsic matrix is composed of the pixel focal length of the image acquisition lens and the coordinates of the center of the image acquisition target surface. Since the positional relationship between the first viewpoint and the second viewpoint is set to that of the two human eyes, there is no relative rotation between them but only a separation equal to a set value t; the rotation matrix R of the color camera relative to the invisible-light image collector is therefore the identity matrix, and the translation matrix is T = [t, 0, 0]^T.
Further, the set value t can be adjusted according to the distances from the target to the invisible-light image collector and the color camera. In still another embodiment, the following steps are included before S11: obtain the distances from the target to the invisible-light image collector and to the color camera; when both distances are judged to be greater than a first distance value, increase the set value t; when both distances are judged to be smaller than a second distance value, decrease the set value t.
The first distance value is greater than or equal to the second distance value. For example, when the distance from the target to the invisible-light image collector is 100 cm and the distance from the target to the color camera is also 100 cm, both being smaller than the second distance value of 200 cm, the set value t is decreased by one step, or decreased by an amount computed from the current distances between the target and the invisible-light image collector and the color camera. When both distances are 300 cm, which is greater than the second distance value of 200 cm and smaller than the first distance value of 500 cm, the set value is left unchanged.
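The adjustment rule for the set value t can be sketched as below. The thresholds mirror the 500 cm and 200 cm of the example above, but the step size is an illustrative stand-in, since the patent leaves it implementation-defined.

```python
def adjust_baseline(t, dist_ir, dist_rgb, first_val=5.0, second_val=2.0, step=0.005):
    """Widen the virtual eye separation t when the target is farther
    than the first distance value from both the invisible-light image
    collector and the color camera, narrow it when the target is closer
    than the second distance value to both, and otherwise leave t
    unchanged.  Distances are in meters; requires first_val >= second_val."""
    if dist_ir > first_val and dist_rgb > first_val:
        return t + step
    if dist_ir < second_val and dist_rgb < second_val:
        return t - step
    return t
```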
b:将所述不可见光图像的第一像素坐标的像素值设置为所述第一彩色图像中与所述第一像素坐标具有对应关系的第二像素坐标的像素值,以形成所述目标在第一视点的第三彩色图像。b: setting a pixel value of the first pixel coordinate of the invisible light image to a pixel value of a second pixel coordinate corresponding to the first pixel coordinate in the first color image to form the target The third color image of the first viewpoint.
For example, after substituting the depth information ZD of the invisible-light image of the first viewpoint into Formula 12, the depth information of the second viewpoint on the left-hand side, i.e. the depth information ZR of the first color image, and the homogeneous pixel coordinates [uR, vR, 1]^T in the image coordinate system of the first color image can be obtained. In this embodiment, the invisible-light image collector and the color camera are at the same distance from the target, so the obtained ZR and ZD are equal. From the homogeneous pixel coordinates [uR, vR, 1]^T, the second pixel coordinates (uR, vR) of the first color image in one-to-one correspondence with the first pixel coordinates (uD, vD) of the invisible-light image are obtained; for example, the correspondence is (uR, vR) = (uD + d, vD). Then, according to this correspondence, the pixel values of the first color image are assigned to the invisible-light image to generate the third color image.
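The projection of a depth-camera pixel into the color camera's image plane used in step b can be sketched with the warping relation ZR·[uR, vR, 1]^T = Mg·R·MD^(-1)·ZD·[uD, vD, 1]^T + Mg·T, reconstructed here from the variables defined in the text; the intrinsic values in the test are illustrative.

```python
import numpy as np

def warp_depth_to_color(uD, vD, ZD, Mg, MD, R, T):
    """Project a depth-camera pixel (uD, vD) with depth ZD into the
    color camera's image plane.  Returns (uR, vR, ZR)."""
    pD = np.array([uD, vD, 1.0])
    rhs = Mg @ R @ np.linalg.inv(MD) @ (ZD * pD) + Mg @ T
    ZR = rhs[2]
    return rhs[0] / ZR, rhs[1] / ZR, ZR
```

With R the identity matrix and T = [t, 0, 0]^T, as in the human-eye configuration above, this reduces to ZR = ZD and uR = uD + f·t/ZD, consistent with the statement that the two depths are equal.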
在该又再一实施例中,上述S14包括以下步骤:In still another embodiment, the foregoing S14 includes the following steps:
S141:将所述第二彩色图像和所述第三彩色图像中的对应像素的像素值进行平均或者加权平均,得到所述第一视点的第四彩色图像。 S141: Average or weight average the pixel values of the corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first view.
Taking one pixel coordinate of the color images as an example, if the pixel values at coordinate (ur, vr) in the second color image and the third color image are (r1, g1, b1) and (r2, g2, b2) respectively, the pixel value at (ur, vr) in the fourth color image of the first viewpoint is set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2).
S142:由所述第一彩色图像和所述第四彩色图像形成三维图像。S142: Form a three-dimensional image from the first color image and the fourth color image.
例如,将第一彩色图像和第四彩色图像分别作为人体双眼图像,以合成三维图像。For example, the first color image and the fourth color image are respectively used as a human binocular image to synthesize a three-dimensional image.
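S141 can be sketched as a per-pixel, optionally weighted, average of the two images; w = 0.5 reproduces the plain average described above, and the helper name is our own.

```python
import numpy as np

def fuse_views(second_img, third_img, w=0.5):
    """Form the fourth color image as a weighted average of the second
    and third color images, computed in float to avoid uint8 overflow."""
    fused = w * second_img.astype(float) + (1.0 - w) * third_img.astype(float)
    return fused.astype(second_img.dtype)
```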
It can be understood that in the above embodiments, the invisible-light image collector and the color camera may be configured with image acquisition target surfaces of equal size, the same resolution, and the same focal length. Alternatively, at least one of target surface size, resolution, and focal length may differ between them; for example, the color camera may have a larger target surface and higher resolution than the invisible-light image collector. In that case, after S13, the method further includes: interpolating and segmenting the first color image and/or the second color image so that the two images cover the same target region and have the same size and resolution. Since assembly of the color camera and the invisible-light image collector inevitably involves error, "equal target surface size, same resolution, and same focal length" should be understood as equal within an allowable tolerance.
Moreover, the above images include photos and video. When the images are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized; if they are not synchronized, video images of a consistent frame rate are obtained by image interpolation.
Please refer to FIG. 6, which is a schematic structural diagram of an embodiment of a three-dimensional image drawing apparatus of the present invention. In this embodiment, the drawing apparatus 60 includes an acquisition module 61, a calculation module 62, a formation module 63, and a derivation module 64. Specifically:
获取模块61用于分别获取以第一视点对目标进行采集得到的不可见光图像和以第二视点对所述目标进行采集得到的第一彩色图像;The acquiring module 61 is configured to respectively acquire an invisible light image obtained by acquiring the target by the first viewpoint and a first color image obtained by collecting the target by the second viewpoint;
计算模块62用于由所述不可见光图像计算所述第一视点和所述第二视点之间的视差; The calculating module 62 is configured to calculate a disparity between the first view point and the second view point from the invisible light image;
得到模块64用于按照所述视差移动所述第一彩色图像的像素坐标,得到第一视点的第二彩色图像;The obtaining module 64 is configured to move the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first view point;
形成模块63用于由所述第一彩色图像和所述第二彩色图像形成三维图像。The forming module 63 is configured to form a three-dimensional image from the first color image and the second color image.
Optionally, the invisible-light image is acquired by an invisible-light image collector disposed at the first viewpoint while the projection module projects a structured light pattern onto the target, and the first color image is acquired by a color camera disposed at the second viewpoint.
可选地,计算模块62具体用于根据数字图像处理的匹配算法,计算出包含所述结构光图案的所述不可见光图像与预设的参考结构光图像的各像素之间的位移;由所述位移计算得到第一视点和所述第二视点之间的视差,其中,所述位移与所述视差具有线性关系。Optionally, the calculating module 62 is specifically configured to calculate, according to a matching algorithm of the digital image processing, a displacement between the invisible light image including the structured light pattern and each pixel of the preset reference structured light image; The displacement calculation calculates a disparity between the first viewpoint and the second viewpoint, wherein the displacement has a linear relationship with the parallax.
进一步可选地,计算模块62执行所述由所述位移计算得到第一视点和所述第二视点之间的视差,包括:利用上述公式11计算得到第一视点和所述第二视点之间的视差d。Further optionally, the calculating module 62 performs the disparity calculation between the first viewpoint and the second viewpoint by the displacement calculation, including: calculating, between the first viewpoint and the second viewpoint, by using the above formula 11 Parallax d.
可选地,得到模块64具体用于根据视差d,建立所述不可见光图像的第一像素坐标Iir(uir,vir)与所述第一彩色图像的第二像素坐标Ir(ur,vr)之间的对应关系为:Iir(uir,vir)=Ir(ur+d,vr);将所述不可见光图像的第一像素坐标的像素值设置为所述第一彩色图像中与所述第一像素坐标具有对应关系的第二像素坐标的像素值,以形成所述目标在第一视点的第二彩色图像;对所述第二彩色图像进行平滑、去噪处理。Optionally, the obtaining module 64 is specifically configured to establish, according to the disparity d, a first pixel coordinate I ir (u ir , v ir ) of the invisible light image and a second pixel coordinate I r (u of the first color image) The correspondence between r , v r ) is: I ir (u ir , v ir )=I r (u r +d, v r ); setting the pixel value of the first pixel coordinate of the invisible light image to a pixel value of a second pixel coordinate corresponding to the first pixel coordinate in the first color image to form a second color image of the target at a first viewpoint; smoothing the second color image Denoising processing.
可选地,计算模块62还用于利用所述不可见光图像计算得到所述第一视点的深度图像;利用三维图像变换理论,根据所述第一视点的深度图像和所述第一彩色图像计算得到所述目标在第一视点的第三彩色图像;该形成模块63具体用于将所述第二彩色图像和所述第三彩色图像中的对应像素的像素值进行平均或者加权平均,得到所述第一视点的第四彩色图像;由所述第一彩色图像和所述第四彩色图像形成三维图像。Optionally, the calculating module 62 is further configured to calculate, by using the invisible light image, a depth image of the first view; and use a three-dimensional image transform theory to calculate, according to the depth image of the first view and the first color image. Obtaining a third color image of the target at a first viewpoint; the forming module 63 is configured to average or weight average the pixel values of the corresponding pixels in the second color image and the third color image to obtain a a fourth color image of the first viewpoint; a three-dimensional image is formed by the first color image and the fourth color image.
Optionally, the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two human eyes; the color camera, the invisible-light image collector, and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera.
可选地,所述彩色相机和所述不可见光图像采集器的图像采集靶面大小相等、分辨率及焦距相同,光轴相互平行。Optionally, the color camera and the invisible image collector have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
Optionally, the invisible-light image and the first color image are photos or video. When they are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized; if they are not synchronized, video images of a consistent frame rate are obtained by image interpolation.
其中,该绘制装置的上述模块分别用于执行上述方法实施例中的相应步骤,具体执行过程如上方法实施例说明,在此不作赘述。The above-mentioned modules of the drawing device are respectively used to perform the corresponding steps in the foregoing method embodiments, and the specific execution process is described in the foregoing method embodiment, and details are not described herein.
请参阅图7,图7是本发明三维图像绘制系统一实施例方式的结构示意图。本实施例中,该系统70包括投影模组74、不可见光图像采集器71、彩色相机72、与所述不可见光图像采集器71和彩色相机72连接的图像处理设备73。该图像处理设备73包括输入接口731、处理器732、存储器733。进一步地,该图像处理设备73也可与投影模组74连接。Please refer to FIG. 7. FIG. 7 is a schematic structural diagram of an embodiment of a three-dimensional image rendering system according to the present invention. In this embodiment, the system 70 includes a projection module 74, an invisible image collector 71, a color camera 72, and an image processing device 73 connected to the invisible image collector 71 and the color camera 72. The image processing device 73 includes an input interface 731, a processor 732, and a memory 733. Further, the image processing device 73 can also be connected to the projection module 74.
该输入接口731用于获得不可见光图像采集器71和彩色相机72采集得到的图像。The input interface 731 is used to obtain images acquired by the invisible image collector 71 and the color camera 72.
存储器733用于存储计算机程序,并向处理器732提供所述计算机程序,且可存储处理器732处理时所采用的数据如不可见光图像采集器71和彩色相机72的内参矩阵和外参矩阵等,以及输入接口731获得的图像。The memory 733 is used to store a computer program and provide the computer program to the processor 732, and can store data used by the processor 732 for processing such as the internal parameter matrix and the external parameter matrix of the invisible light image collector 71 and the color camera 72. And the image obtained by the input interface 731.
处理器732用于:The processor 732 is used to:
通过输入接口731分别获取以第一视点的不可见光图像采集器71对目标进行采集得到的不可见光图像和以第二视点的彩色相机72对所述目标进行采集得到的第一彩色图像;Obtaining, by the input interface 731, the invisible light image obtained by collecting the target by the invisible light image collector 71 of the first viewpoint and the first color image obtained by collecting the target by the color camera 72 of the second viewpoint;
由所述不可见光图像计算所述第一视点和所述第二视点之间的视差;Calculating a parallax between the first viewpoint and the second viewpoint from the invisible image;
按照所述视差移动所述第一彩色图像的像素坐标,得到第一视点的第二彩色图像;Moving the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint;
由所述第一彩色图像和所述第二彩色图像形成三维图像。A three-dimensional image is formed from the first color image and the second color image.
本实施例中,图像处理设备73还可包括显示屏734,该显示屏734用于显示该三维图像,以实现三维显示。当然,在另一实施例中,图像处理设备73不用于显示该三维图像,如图8所示,该三维图像绘制系统70还 包括与图像处理设备73连接的显示设备75,显示设备75用于接收图像处理设备73输出的三维图像,并显示该三维图像。In this embodiment, the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image to implement three-dimensional display. Of course, in another embodiment, the image processing device 73 is not used to display the three-dimensional image. As shown in FIG. 8, the three-dimensional image rendering system 70 further A display device 75 connected to the image processing device 73 for receiving a three-dimensional image output by the image processing device 73 and displaying the three-dimensional image is included.
可选地,处理器732具体用于根据数字图像处理的匹配算法,计算出包含所述结构光图案的所述不可见光图像与预设的参考结构光图像的各像素之间的位移;由所述位移计算得到第一视点和所述第二视点之间的视差,其中,所述位移与所述视差具有线性关系。Optionally, the processor 732 is specifically configured to calculate, according to a matching algorithm of the digital image processing, a displacement between the invisible light image including the structured light pattern and each pixel of the preset reference structured light image; The displacement calculation calculates a disparity between the first viewpoint and the second viewpoint, wherein the displacement has a linear relationship with the parallax.
Further optionally, when calculating the parallax from the displacement, the processor 732 calculates the parallax d between the first viewpoint and the second viewpoint using Formula 11 above.
可选地,处理器732执行所述按照所述视差移动所述第一彩色图像的像素坐标,得到第一视点的第二彩色图像,包括:根据视差d,建立所述不可见光图像的第一像素坐标Iir(uir,vir)与所述第一彩色图像的第二像素坐标Ir(ur,vr)之间的对应关系为:Iir(uir,vir)=Ir(ur+d,vr);将所述不可见光图像的第一像素坐标的像素值设置为所述第一彩色图像中与所述第一像素坐标具有对应关系的第二像素坐标的像素值,以形成所述目标在第一视点的第二彩色图像;对所述第二彩色图像进行平滑、去噪处理。Optionally, the processor 732 performs the moving the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first view, including: establishing a first image of the invisible image according to the disparity d The correspondence relationship between the pixel coordinates I ir (u ir , v ir ) and the second pixel coordinates I r (u r , v r ) of the first color image is: I ir (u ir , v ir )=I r (u r +d, v r ); setting a pixel value of the first pixel coordinate of the invisible light image to a second pixel coordinate of the first color image having a corresponding relationship with the first pixel coordinate a pixel value to form a second color image of the target at a first viewpoint; and smoothing, denoising the second color image.
可选地,处理器732还用于利用所述不可见光图像计算得到所述第一视点的深度图像;利用三维图像变换理论,根据所述第一视点的深度图像和所述第一彩色图像计算得到所述目标在第一视点的第三彩色图像;处理器732执行所述由所述第一彩色图像和所述第二彩色图像形成三维图像,包括:将所述第二彩色图像和所述第三彩色图像中的对应像素的像素值进行平均或者加权平均,得到所述第一视点的第四彩色图像;由所述第一彩色图像和所述第四彩色图像形成三维图像。Optionally, the processor 732 is further configured to calculate, by using the invisible light image, a depth image of the first view; and use a three-dimensional image transform theory to calculate, according to the depth image of the first view and the first color image. Obtaining a third color image of the target at a first viewpoint; the processor 732 performing the forming of the three-dimensional image by the first color image and the second color image, comprising: the second color image and the The pixel values of the corresponding pixels in the third color image are averaged or weighted averaged to obtain a fourth color image of the first viewpoint; and the three-dimensional image is formed by the first color image and the fourth color image.
可选地,所述第一视点与第二视点之间的位置关系为人体双眼之间的位置关系;所述彩色相机72和所述不可见光图像采集器71以及所述投影模组74处于同一直线上;所述不可见光图像为红外图像,所述不可见光图像采集器71为红外相机。Optionally, the positional relationship between the first viewpoint and the second viewpoint is a positional relationship between the eyes of the human body; the color camera 72 and the invisible image collector 71 and the projection module 74 are in the same On the straight line; the invisible light image is an infrared image, and the invisible light image collector 71 is an infrared camera.
可选地,所述彩色相机72和所述不可见光图像采集器71的图像采集靶面大小相等、分辨率及焦距相同,光轴相互平行。Optionally, the color camera 72 and the invisible light image collector 71 have the same image acquisition target surface size, the same resolution and focal length, and the optical axes are parallel to each other.
Optionally, the invisible-light image and the first color image are photos or video. When they are video, the acquisition frequencies of the invisible-light image collector and the color camera are synchronized; if they are not synchronized, video images of a consistent frame rate are obtained by image interpolation.
The image processing device 73 can serve as the above-described three-dimensional image drawing apparatus for executing the methods of the above embodiments. For example, the methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 732. The processor 732 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 732 or by instructions in the form of software. The processor 732 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 733; the processor 732 reads the information in the corresponding memory and completes the steps of the above methods in combination with its hardware.
In the above solution, the parallax between the first viewpoint and the second viewpoint is derived from the invisible-light image acquired at the first viewpoint, the second color image of the first viewpoint is obtained from the first color image of the second viewpoint together with that parallax, and a three-dimensional image is then formed from the first color image and the second color image. Because the parallax between the two viewpoints is obtained directly from the acquired image data, without intermediate image processing, less image detail information is lost and the color images of the two viewpoints are obtained more accurately, which reduces the distortion of the synthesized three-dimensional image and improves the three-dimensional display effect generated from two-dimensional images. Moreover, compared with the existing DIBR technique, there is no need to compute the depth information of the image, which avoids the errors introduced by repeated calculations and further improves the three-dimensional display effect.
The above descriptions are merely embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (20)

  1. A method of drawing a three-dimensional image, comprising:
    separately acquiring an invisible-light image obtained by capturing a target from a first viewpoint, and a first color image obtained by capturing the target from a second viewpoint;
    calculating a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shifting pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and
    forming a three-dimensional image from the first color image and the second color image.
  2. The method according to claim 1, wherein the invisible-light image is obtained by capturing the target with an invisible-light image collector disposed at the first viewpoint while a projection module projects a structured-light pattern onto the target, and the first color image is obtained by capturing the target with a color camera disposed at the second viewpoint.
  3. The method according to claim 2, wherein calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image comprises:
    calculating, according to a matching algorithm of digital image processing, a displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image; and
    calculating the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
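The displacement computation in this claim can be illustrated with a generic SAD (sum of absolute differences) block matcher, one possible "matching algorithm of digital image processing"; the block size and search range below are illustrative choices, not values from this disclosure:

```python
import numpy as np

def displacement_map(ir, ref, block=3, max_shift=4):
    """Per-pixel horizontal displacement between a captured structured-light
    image `ir` and the reference image `ref`, found by SAD block matching
    along the row (the epipolar direction for a horizontal baseline)."""
    h, w = ir.shape
    pad = block // 2
    irp = np.pad(ir.astype(np.float64), pad, mode='edge')
    refp = np.pad(ref.astype(np.float64), pad, mode='edge')
    du = np.zeros((h, w))
    for v in range(h):
        for u in range(w):
            patch = irp[v:v + block, u:u + block]
            best, best_s = np.inf, 0
            for s in range(-max_shift, max_shift + 1):
                if 0 <= u + s <= w - 1:
                    cand = refp[v:v + block, u + s:u + s + block]
                    sad = np.abs(patch - cand).sum()
                    if sad < best:
                        best, best_s = sad, s
            du[v, u] = best_s
    return du
```

Practical systems would use subpixel interpolation and a larger search window; this sketch only shows the per-pixel displacement idea.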
  4. The method according to claim 3, wherein calculating the parallax between the first viewpoint and the second viewpoint from the displacement comprises:
    calculating the parallax d between the first viewpoint and the second viewpoint using the following Equation 1:
    d = (B2/B1)·Δu + (B2·f)/Z0,      (Equation 1)
    wherein B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth value of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between each pixel of the invisible-light image and the preset reference structured-light image.
  5. The method according to claim 1, wherein shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint comprises:
    establishing, according to the parallax d, a correspondence between a first pixel coordinate Iir(uir, vir) of the invisible-light image and a second pixel coordinate Ir(ur, vr) of the first color image as:
    Iir(uir, vir) = Ir(ur + d, vr);
    setting the pixel value at the first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image that corresponds to the first pixel coordinate, to form the second color image of the target at the first viewpoint; and
    smoothing and denoising the second color image.
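A minimal single-channel sketch of the coordinate mapping and smoothing steps of this claim follows; the 3x3 box blur stands in for any smoothing/denoising filter, since no specific filter is prescribed here:

```python
import numpy as np

def warp_and_smooth(color_right, d):
    """Form the second color image of the first viewpoint by sampling the
    first color image at horizontally shifted coordinates,
    I_ir(u, v) = I_r(u + d, v), then smooth the result.

    color_right -- single-channel color/intensity image at the second viewpoint
    d           -- per-pixel parallax map of the same shape
    """
    h, w = color_right.shape
    out = np.zeros_like(color_right, dtype=np.float64)
    for v in range(h):
        for u in range(w):
            src = int(round(u + d[v, u]))
            if 0 <= src < w:      # pixels mapped outside the frame stay empty
                out[v, u] = color_right[v, src]
    # naive 3x3 box blur as the smoothing / denoising step
    padded = np.pad(out, 1, mode='edge')
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return smooth
```

Holes left by occlusion (pixels with no valid source) would additionally need inpainting in practice; the blur here only illustrates the final smoothing pass.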
  6. The method according to claim 2, further comprising:
    calculating a depth image of the first viewpoint using the invisible-light image; and
    calculating a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image using three-dimensional image transformation theory;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
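The fusion step of this claim, averaging the parallax-derived second color image with the depth-based third color image, might look like the sketch below; the 50/50 weights are illustrative, as the claim allows either plain or weighted averaging:

```python
import numpy as np

def fuse_views(second, third, w2=0.5, w3=0.5):
    """Fourth color image as the (weighted) average of the second and third
    color images of the first viewpoint. w2 + w3 must equal 1."""
    assert abs(w2 + w3 - 1.0) < 1e-9
    fused = w2 * second.astype(np.float64) + w3 * third.astype(np.float64)
    return fused.round().astype(second.dtype)
```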
  7. The method according to claim 2, wherein the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body; the color camera, the invisible-light image collector, and the projection module are on the same straight line; the invisible-light image is an infrared image; and the invisible-light image collector is an infrared camera.
  8. The method according to claim 1, wherein the image acquisition target surfaces of the color camera and the invisible-light image collector are equal in size and identical in resolution and focal length, and their optical axes are parallel to each other.
  9. An image processing device, comprising an input interface, a processor, and a memory;
    the input interface being configured to obtain images acquired by an invisible-light image collector and a color camera;
    the memory being configured to store a computer program;
    the processor executing the computer program to:
    separately acquire, through the input interface, an invisible-light image obtained by capturing a target with the invisible-light image collector at a first viewpoint, and a first color image obtained by capturing the target with the color camera at a second viewpoint;
    calculate a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shift pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and
    form a three-dimensional image from the first color image and the second color image.
  10. The image processing device according to claim 9, wherein the invisible-light image is obtained by capturing the target with the invisible-light image collector disposed at the first viewpoint while a projection module projects a structured-light pattern onto the target, and the first color image is obtained by capturing the target with the color camera disposed at the second viewpoint.
  11. The image processing device according to claim 10, wherein the processor is specifically configured to:
    calculate, according to a matching algorithm of digital image processing, a displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image; and
    calculate the parallax between the first viewpoint and the second viewpoint from the displacement, wherein the displacement has a linear relationship with the parallax.
  12. The image processing device according to claim 11, wherein, in calculating the parallax between the first viewpoint and the second viewpoint from the displacement, the processor:
    calculates the parallax d between the first viewpoint and the second viewpoint using the following Equation 1:
    d = (B2/B1)·Δu + (B2·f)/Z0,      (Equation 1)
    wherein B1 is the distance between the invisible-light image collector and the projection module; B2 is the distance between the invisible-light image collector and the color camera; Z0 is the depth value of the plane of the reference structured-light image relative to the invisible-light image collector; f is the image-plane focal length of the invisible-light image collector and the color camera; and Δu is the displacement between each pixel of the invisible-light image and the preset reference structured-light image.
  13. The image processing device according to claim 9, wherein, in shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint, the processor:
    establishes, according to the parallax d, a correspondence between a first pixel coordinate Iir(uir, vir) of the invisible-light image and a second pixel coordinate Ir(ur, vr) of the first color image as:
    Iir(uir, vir) = Ir(ur + d, vr);
    sets the pixel value at the first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image that corresponds to the first pixel coordinate, to form the second color image of the target at the first viewpoint; and
    smooths and denoises the second color image.
  14. The image processing device according to claim 10, wherein the processor is further configured to:
    calculate a depth image of the first viewpoint using the invisible-light image; and
    calculate a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image using three-dimensional image transformation theory;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
  15. The image processing device according to claim 10, further comprising a display screen for displaying the three-dimensional image.
  16. A three-dimensional image drawing system, comprising a projection module, an invisible-light image collector, a color camera, and an image processing device connected to the invisible-light image collector and the color camera;
    the image processing device being configured to:
    separately acquire an invisible-light image obtained by capturing a target with the invisible-light image collector at a first viewpoint, and a first color image obtained by capturing the target with the color camera at a second viewpoint;
    calculate a parallax between the first viewpoint and the second viewpoint from the invisible-light image;
    shift pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and
    form a three-dimensional image from the first color image and the second color image.
  17. The three-dimensional image drawing system according to claim 16, wherein the invisible-light image is obtained by capturing the target with the invisible-light image collector disposed at the first viewpoint while the projection module projects a structured-light pattern onto the target, and the first color image is obtained by capturing the target with the color camera disposed at the second viewpoint.
  18. The three-dimensional image drawing system according to claim 17, wherein the image processing device is further configured to:
    calculate a depth image of the first viewpoint using the invisible-light image; and
    calculate a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image using three-dimensional image transformation theory;
    wherein forming the three-dimensional image from the first color image and the second color image comprises:
    averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain a fourth color image of the first viewpoint; and
    forming the three-dimensional image from the first color image and the fourth color image.
  19. The three-dimensional image drawing system according to claim 17, wherein the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two eyes of a human body, the color camera, the invisible-light image collector, and the projection module are on the same straight line, the invisible-light image is an infrared image, and the invisible-light image collector is an infrared camera; and/or
    the image acquisition target surfaces of the color camera and the invisible-light image collector are equal in size and identical in resolution and focal length, and their optical axes are parallel to each other.
  20. The three-dimensional image drawing system according to claim 16, further comprising a display device connected to the image processing device, the display device being configured to display the three-dimensional image output by the image processing device.
PCT/CN2017/085147 2016-08-19 2017-05-19 Method, device and system for drawing three-dimensional image WO2018032841A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610698004.0 2016-08-19
CN201610698004.0A CN106170086B (en) 2016-08-19 2016-08-19 Method and device thereof, the system of drawing three-dimensional image

Publications (1)

Publication Number Publication Date
WO2018032841A1 true WO2018032841A1 (en) 2018-02-22

Family

ID=57375861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085147 WO2018032841A1 (en) 2016-08-19 2017-05-19 Method, device and system for drawing three-dimensional image

Country Status (2)

Country Link
CN (1) CN106170086B (en)
WO (1) WO2018032841A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170086B (en) * 2016-08-19 2019-03-15 深圳奥比中光科技有限公司 Method and device thereof, the system of drawing three-dimensional image
CN106875435B (en) * 2016-12-14 2021-04-30 奥比中光科技集团股份有限公司 Method and system for obtaining depth image
CN107105217B (en) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth calculation processor and 3D rendering equipment
CN108460368B (en) * 2018-03-30 2021-07-09 百度在线网络技术(北京)有限公司 Three-dimensional image synthesis method and device and computer-readable storage medium
CN113436129B (en) * 2021-08-24 2021-11-16 南京微纳科技研究院有限公司 Image fusion system, method, device, equipment and storage medium
CN114119680B (en) * 2021-09-09 2022-09-20 合肥的卢深视科技有限公司 Image acquisition method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060114320A1 (en) * 2004-11-30 2006-06-01 Honda Motor Co. Ltd. Position detecting apparatus and method of correcting data therein
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment
CN104185006A (en) * 2013-05-24 2014-12-03 索尼公司 Imaging apparatus and imaging method
CN104918035A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining three-dimensional image of target
CN106170086A (en) * 2016-08-19 2016-11-30 深圳奥比中光科技有限公司 The method of drawing three-dimensional image and device, system
CN106604020A (en) * 2016-11-24 2017-04-26 深圳奥比中光科技有限公司 Special processor used for 3D display
CN106791763A (en) * 2016-11-24 2017-05-31 深圳奥比中光科技有限公司 A kind of application specific processor shown for 3D with 3D interactions

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502372B1 (en) * 2008-11-26 2015-03-16 삼성전자주식회사 Apparatus and method for obtaining an image
CN101662695B (en) * 2009-09-24 2011-06-15 清华大学 Method and device for acquiring virtual viewport
US9406132B2 (en) * 2010-07-16 2016-08-02 Qualcomm Incorporated Vision-based quality metric for three dimensional video
CN102289841B (en) * 2011-08-11 2013-01-16 四川虹微技术有限公司 Method for regulating audience perception depth of three-dimensional image
WO2014002849A1 (en) * 2012-06-29 2014-01-03 富士フイルム株式会社 Three-dimensional measurement method, apparatus, and system, and image processing device
KR101904718B1 (en) * 2012-08-27 2018-10-05 삼성전자주식회사 Apparatus and method for capturing color images and depth images
US10376148B2 (en) * 2012-12-05 2019-08-13 Accuvein, Inc. System and method for laser imaging and ablation of cancer cells using fluorescence
CN103796004B (en) * 2014-02-13 2015-09-30 西安交通大学 A kind of binocular depth cognitive method of initiating structure light
CN103824318B (en) * 2014-02-13 2016-11-23 西安交通大学 A kind of depth perception method of multi-cam array
CN105791662A (en) * 2014-12-22 2016-07-20 联想(北京)有限公司 Electronic device and control method
CN105120257B (en) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 A kind of vertical depth sensing device based on structure light coding

Also Published As

Publication number Publication date
CN106170086A (en) 2016-11-30
CN106170086B (en) 2019-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17840817; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17840817; Country of ref document: EP; Kind code of ref document: A1)