CN113141496A - 3D shooting method and device and 3D display terminal


Info

Publication number
CN113141496A
Authority
CN
China
Prior art keywords
depth
field
information
color
cameras
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010072955.3A
Other languages
Chinese (zh)
Inventor
刁鸿浩
黄玲溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vision Technology Venture Capital Pte Ltd
Beijing Ivisual 3D Technology Co Ltd
Original Assignee
Vision Technology Venture Capital Pte Ltd
Beijing Ivisual 3D Technology Co Ltd
Application filed by Vision Technology Venture Capital Pte Ltd and Beijing Ivisual 3D Technology Co Ltd
Priority to CN202010072955.3A
Priority to PCT/CN2021/071701 (published as WO2021147753A1)
Priority to TW110101859A (published as TW202130168A)
Publication of CN113141496A
Legal status: Pending

Classifications

    • H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals → H04N13/106 Processing image signals → H04N13/128 Adjusting depth or disparity
    • H04N13/10 → H04N13/106 → H04N13/15 Processing image signals for colour aspects of image signals
    • H04N13/20 Image signal generators → H04N13/204 Image signal generators using stereoscopic image cameras → H04N13/239 using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/20 → H04N13/204 → H04N13/243 using three or more 2D image sensors
    • H04N13/20 → H04N13/257 Colour aspects
    • H04N13/20 → H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N13/20 → H04N13/296 Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to the field of 3D technology and discloses a 3D shooting method applicable to a depth of field camera module including at least two depth of field cameras and a color camera module including at least two color cameras. The method includes: coordinating at least two depth of field cameras in the depth of field camera module to acquire first depth of field information of a photographed object, and acquiring, through at least two color cameras in the color camera module, a color image of the photographed object that can be adjusted according to the first depth of field information. With this 3D shooting method, at least two depth of field cameras in the depth of field camera module can be coordinated to adjust the depth of field of the color image acquired by the color camera module, which can effectively improve the depth of field accuracy of the color image. The application also discloses a 3D shooting device and a 3D display terminal.

Description

3D shooting method and device and 3D display terminal
Technical Field
The present application relates to the field of 3D technologies, and for example, to a 3D shooting method and apparatus, and a 3D display terminal.
Background
At present, some terminals are provided with two different types of cameras to acquire depth information of a photographed object for 3D display.
In the course of implementing the embodiments of the present disclosure, it was found that the related art has at least the following problem:
the accuracy of the depth of field information acquired by only two cameras is low.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of the embodiments; rather, it is a prelude to the more detailed description presented later.
The embodiment of the disclosure provides a 3D shooting method and device and a 3D display terminal, and aims to solve the technical problem that the accuracy of depth of field information acquired by only two cameras is low.
The 3D shooting method provided by the embodiment of the disclosure is suitable for a depth of field camera module comprising at least two depth of field cameras and a color camera module comprising at least two color cameras, and comprises the following steps:
coordinating at least two depth of field cameras in the depth of field camera module to acquire first depth of field information of a photographed object, and acquiring, through at least two color cameras in the color camera module, a color image of the photographed object that can be adjusted according to the first depth of field information.
In some embodiments, the second depth of field information in the color image may also be adjusted according to the first depth of field information.
In some embodiments, the adjusted color image may also be displayed in 3D.
In some embodiments, adjusting the second depth of field information in the color image according to the first depth of field information may include:
adjusting the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference, so that the depth of field of the corresponding pixels included in the second depth of field information approaches the depth of field of the pixels included in the first depth of field information.
In some embodiments, adjusting the depth of field of the corresponding pixel included in the second depth of field information with reference to the depth of field of the pixel included in the first depth of field information may include:
proportionally adjusting the depth of field of the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
directly adjusting the depth of field of the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the depth of field of the pixels other than the corresponding pixel included in the second depth of field information may be adjusted based on the depth of field of the pixel included in the first depth of field information.
In some embodiments, adjusting the depth of field of the pixels other than the corresponding pixel included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference may include:
in a preset area, proportionally adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
in the preset area, directly adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, coordinating at least two depth of field cameras in the depth of field camera module to acquire the first depth of field information may include:
selecting one depth of field camera in the depth of field camera module to acquire depth of field information of the photographed object, and taking the acquired depth of field information as the first depth of field information; or
selecting at least two depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object, and selecting the depth of field information acquired by one of the at least two depth of field cameras as the first depth of field information; or
selecting all the depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object, and selecting the depth of field information acquired by one of all the depth of field cameras as the first depth of field information.
In some embodiments, selecting one of the at least two depth of field cameras may include: selecting the depth of field camera in the best working state among the at least two depth of field cameras; or selecting the depth of field camera among the at least two with the highest accuracy in acquiring depth of field information.
In some embodiments, selecting one of all the depth of field cameras may include: selecting the depth of field camera in the best working state among all the depth of field cameras; or selecting the depth of field camera among all of them with the highest accuracy in acquiring depth of field information.
In some embodiments, acquiring color images of a photographic subject by at least two color cameras may include:
acquiring a first color image through a first color camera and acquiring a second color image through a second color camera;
synthesizing the first color image and the second color image into a color composite image containing the second depth of field information, according to the spacing between the first color camera and the second color camera and their shooting angles.
In some embodiments, the color composite image may include a left image half and a right image half;
the left image half may be a color image, and the right image half may be a depth image.
The 3D shooting device provided by the embodiments of the present disclosure includes a processor and a memory storing program instructions; the processor is configured to perform the above 3D shooting method when executing the program instructions.
Another 3D shooting device provided by the embodiments of the present disclosure includes:
a depth of field camera module, including at least two depth of field cameras and configured to acquire first depth of field information of a photographed object by coordinating the at least two depth of field cameras;
a color camera module, including at least two color cameras and configured to acquire a color image of the photographed object that can be adjusted according to the first depth of field information.
In some embodiments, the device may further include: an image processor configured to adjust the second depth of field information in the color image according to the first depth of field information.
In some embodiments, the image processor may be further configured to: 3D display the adjusted color image.
In some embodiments, the image processor may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference, so that the depth of field of the corresponding pixels included in the second depth of field information approaches the depth of field of the pixels included in the first depth of field information.
In some embodiments, the image processor may be configured to:
proportionally adjust the depth of field of the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
directly adjust the depth of field of the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the image processor may be further configured to: adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information, with the depth of field of the pixels included in the first depth of field information as a reference.
In some embodiments, the image processor may be configured to:
in a preset area, proportionally adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
in the preset area, directly adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the depth of field camera module may be configured to:
select one depth of field camera in the depth of field camera module to acquire depth of field information of the photographed object, and take the acquired depth of field information as the first depth of field information; or
select at least two depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object, and select the depth of field information acquired by one of the at least two depth of field cameras as the first depth of field information; or
select all the depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object, and select the depth of field information acquired by one of all the depth of field cameras as the first depth of field information.
In some embodiments, the depth of field camera module may be configured to:
in the case of selecting one of the at least two depth of field cameras, select the depth of field camera in the best working state among the at least two, or select the one among the at least two with the highest accuracy in acquiring depth of field information;
or, alternatively,
in the case of selecting one of all the depth of field cameras, select the depth of field camera in the best working state among all of them, or select the one among all of them with the highest accuracy in acquiring depth of field information.
In some embodiments, at least one depth of field camera of the depth of field camera module may be a structured light camera or a time of flight (TOF) camera.
In some embodiments, at least one depth of field camera of the depth of field camera module may be a TOF camera, which may be located between two color cameras of the color camera module.
In some embodiments, the color camera module may include:
a first color camera configured to acquire a first color image;
a second color camera configured to acquire a second color image;
optionally, the device may further include an image processor configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth of field information, according to the spacing between the first color camera and the second color camera and their shooting angles.
In some embodiments, at least two color cameras in the color camera module may employ optical lenses and sensor chips with identical performance indexes.
The 3D display terminal provided by the embodiment of the disclosure comprises the 3D shooting device.
The 3D shooting method, the device and the 3D display terminal provided by the embodiment of the disclosure can realize the following technical effects:
at least two depth of field cameras in the depth of field camera module can be coordinated to adjust the depth of field of the color image acquired by the color camera module, which can effectively improve the depth of field accuracy of the color image.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting; elements having the same reference numerals in the drawings denote like elements, and wherein:
fig. 1 is a flowchart of a 3D shooting method provided by an embodiment of the present disclosure;
fig. 2A, fig. 2B, and fig. 2C are flowcharts of further 3D shooting methods provided by embodiments of the present disclosure;
fig. 3 is a flowchart of yet another 3D shooting method provided by an embodiment of the present disclosure;
fig. 4 is a structural diagram of a 3D shooting device provided by an embodiment of the present disclosure;
fig. 5 is a structural diagram of another 3D shooting device provided by an embodiment of the present disclosure;
fig. 6 is a structural diagram of yet another 3D shooting device provided by an embodiment of the present disclosure;
fig. 7 is a structural diagram of yet another 3D shooting device provided by an embodiment of the present disclosure;
fig. 8 is a structural diagram of yet another 3D shooting device provided by an embodiment of the present disclosure;
fig. 9 is a device structure diagram of a 3D display terminal provided in an embodiment of the present disclosure.
Reference numerals:
300: a 3D shooting device; 310: a processor; 320: a memory; 330: a communication interface; 340: a bus; 410: a depth of field camera module; 411: a first depth of field camera; 412: a second depth of field camera; 420: a color camera module; 421: a first color camera; 422: a second color camera; 430: an image processor; 500: a 3D display terminal.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
Referring to fig. 1, the present disclosure provides a 3D shooting method applicable to a depth of field camera module including at least two depth of field cameras and a color camera module including at least two color cameras. The method includes:
step 110: coordinating at least two depth of field cameras in the depth of field camera module to acquire first depth of field information of the photographed object;
step 120: acquiring, through at least two color cameras in the color camera module, a color image of the photographed object that can be adjusted according to the first depth of field information.
In some embodiments, the 3D shooting method may further include: adjusting the second depth of field information in the color image according to the first depth of field information.
In some embodiments, the adjusted color image may also be displayed in 3D. Various feasible 3D display modes exist and are not detailed here, as long as 3D display of the depth-adjusted color image can be achieved smoothly.
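To make the flow concrete, the following minimal sketch in Python strings steps 110 and 120 together with the optional adjustment step. The module classes and every name in them are hypothetical stand-ins invented for illustration; the patent does not define a programming interface, and real modules would drive structured light/TOF and color camera hardware.

```python
import numpy as np

# Hypothetical stand-ins for the depth of field camera module and the color
# camera module; all names and values here are illustrative assumptions.

class DepthCameraModule:
    def coordinate_and_acquire(self) -> np.ndarray:
        # Step 110: first depth of field information, presented as a
        # low-resolution depth image (in metres) from a selected camera.
        return np.full((60, 80), 1.5)

class ColorCameraModule:
    def acquire(self) -> tuple[np.ndarray, np.ndarray]:
        # Step 120: a high-resolution color image plus its second depth of
        # field information, as if synthesized from two color cameras.
        rgb = np.zeros((480, 640, 3), np.uint8)
        return rgb, np.full((480, 640), 1.42)

def shoot_3d(depth_module: DepthCameraModule, color_module: ColorCameraModule):
    first_dof = depth_module.coordinate_and_acquire()   # step 110
    color_image, second_dof = color_module.acquire()    # step 120
    # second_dof can now be adjusted according to first_dof (see below)
    # and the adjusted color image displayed in 3D.
    return first_dof, color_image, second_dof
```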
In some embodiments, adjusting the second depth of field information in the color image according to the first depth of field information may include:
adjusting the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference, so that the depth of field of the corresponding pixels approaches the depth of field of the pixels included in the first depth of field information, thereby reducing the difference between the two.
By comparison, the color images obtained by the at least two color cameras have high resolution but low depth of field accuracy, while the first depth of field information (which can be presented in the form of a depth image) obtained by the depth of field cameras has low resolution but high depth of field accuracy. Adjusting the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference therefore brings the former closer to the latter, reduces the difference between them, and effectively improves the accuracy of the depth of field of the corresponding pixels included in the second depth of field information.
In some embodiments, before the adjustment, the sizes of the depth image and the color image may first be unified; then, feature capture and matching are performed on the depth image and the color image based on the fields of view (FOV) of the depth of field camera and the color camera, so that each pixel in the depth image is matched, pixel by pixel, to a corresponding pixel in the color image. The depth of field of a pixel in the depth image can then be compared with the depth of field of the corresponding pixel in the color image, and the depth of field adjusted according to the comparison result.
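As one concrete reading of this step, the sketch below unifies the sizes by nearest-neighbour upscaling and treats one color pixel per depth pixel as the "corresponding pixel". It assumes the fields of view of the two cameras already coincide, so the FOV-based feature capture and matching is abstracted away; the function and variable names are our own.

```python
import numpy as np

def unify_and_correspond(depth_img: np.ndarray, color_hw: tuple[int, int]):
    """Upscale the low-resolution depth image to the color image size
    (nearest neighbour) and mark the corresponding pixels: color pixels
    that received a directly measured depth of field value."""
    h, w = color_hw
    dh, dw = depth_img.shape
    rows = np.arange(h) * dh // h
    cols = np.arange(w) * dw // w
    ref = depth_img[np.ix_(rows, cols)]   # reference depth per color pixel

    # Treat one color pixel per depth pixel as a corresponding pixel.
    corr = np.zeros((h, w), bool)
    ys = (np.arange(dh) * h + h // 2) // dh
    xs = (np.arange(dw) * w + w // 2) // dw
    corr[np.ix_(ys, xs)] = True
    return ref, corr
```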
In some embodiments, adjusting the depth of field of the corresponding pixel included in the second depth of field information with reference to the depth of field of the pixel included in the first depth of field information may include:
proportionally adjusting the depth of field of the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
directly adjusting the depth of field of the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the depth of field of the corresponding pixel included in the second depth of field information may be adjusted toward the depth of field of the pixel included in the first depth of field information by a proportion of the difference between the two. For example, if the difference is 5 cm, the depth of field of the corresponding pixel may be adjusted toward the reference by 10%, 20%, 30%, 50%, 80%, or another proportion of the 5 cm difference, as determined by the actual situation or a preset policy, i.e., by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on.
In some embodiments, the depth of field of the corresponding pixel included in the second depth of field information may also be adjusted directly to the depth of field of the pixel included in the first depth of field information; in the example above, the corresponding pixel would be adjusted by the full 5 cm difference.
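Both variants reduce to moving the depth of field of each corresponding pixel toward the reference by some fraction of the difference, with a fraction of 1 reproducing the direct adjustment. A minimal sketch under that reading, with function and parameter names of our own choosing:

```python
import numpy as np

def adjust_corresponding(second_dof: np.ndarray, ref: np.ndarray,
                         corr: np.ndarray, ratio: float = 1.0) -> np.ndarray:
    """Move each corresponding pixel's depth of field toward the reference
    depth from the first depth of field information. ratio=0.3 closes 30%
    of the gap (proportional adjustment); ratio=1.0 sets the value directly
    to the reference (direct adjustment)."""
    out = second_dof.astype(float).copy()
    out[corr] += ratio * (ref[corr] - out[corr])
    return out

# Example matching the text: a 5 cm difference adjusted by 30% moves 1.5 cm.
second = np.array([[1.45]])                 # second depth of field, metres
first = np.array([[1.50]])                  # reference depth, metres
adjusted = adjust_corresponding(second, first, np.array([[True]]), ratio=0.3)
assert abs(adjusted[0, 0] - 1.465) < 1e-9   # moved 1.5 cm of the 5 cm gap
```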
When the depth of field is adjusted, because the resolution of the first depth of field information acquired by the depth of field cameras is low, the pixels of the depth image may correspond to only some of the pixels in the color composite image, so the depth of field of some or all of the pixels other than the corresponding pixels included in the second depth of field information may not be effectively adjusted. In this case, in some embodiments, the 3D shooting method may further include: adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth of field information, with the depth of field of the pixels included in the first depth of field information as a reference, so that these pixels are also effectively adjusted and the accuracy of their depth of field is effectively improved.
In some embodiments, adjusting the depth of field of the pixels other than the corresponding pixel included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference may include:
in a preset area, proportionally adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
in the preset area, directly adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the preset area may be set according to the actual situation or a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth of field information and the non-corresponding pixels around it (i.e., pixels in the second depth of field information that do not correspond to any pixel in the first depth of field information); for example, the preset area may be a circular area centered on the single corresponding pixel, with a radius of, for example, half the distance to an adjacent corresponding pixel. Optionally, different preset areas may be kept non-overlapping to avoid possible pixel adjustment conflicts.
Optionally, the preset area may also include at least two corresponding pixels in the second depth of field information and the non-corresponding pixels around them; for example, when the depth of field adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circular area centered at the midpoint between the two corresponding pixels, with a radius of, for example, half the distance between them or a larger value. Alternatively, different preset areas may overlap, as long as pixel adjustment conflicts can be avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or a preset policy; for example, the preset area may be scaled up or down proportionally, and its shape may be an ellipse, a polygon, or the like.
In some embodiments, when the depth of field adjustment is performed in the preset area, the depth of field of the corresponding pixel included in the second depth of field information may be adjusted toward the depth of field of the pixel included in the first depth of field information by a proportion of the difference between the two. For example, if the difference is 5 cm, the adjustment may be by 10%, 20%, 30%, 50%, 80%, or another proportion of the 5 cm difference, as determined by the actual situation or a preset policy, i.e., by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on.
In some embodiments, when the depth of field adjustment is performed in the preset area, the depth of field of the corresponding pixel included in the second depth of field information may also be adjusted directly to the depth of field of the pixel included in the first depth of field information, i.e., by the full difference.
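Under the same reading, the preset-area variant lets each corresponding pixel propagate its reference depth to the non-corresponding pixels inside a circular area around it. The sketch below assumes circular preset areas of a fixed radius; the names are again ours.

```python
import numpy as np

def adjust_preset_areas(second_dof: np.ndarray, ref: np.ndarray,
                        corr: np.ndarray, radius: int,
                        ratio: float = 1.0) -> np.ndarray:
    """Inside a circular preset area around each corresponding pixel, move
    the depth of field of the pixels in the area toward that pixel's
    reference depth, proportionally (ratio < 1) or directly (ratio = 1)."""
    out = second_dof.astype(float).copy()
    h, w = out.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for cy, cx in zip(*np.nonzero(corr)):
        area = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        out[area] += ratio * (ref[cy, cx] - out[area])
    return out
```

Because the areas are processed one after another, overlapping preset areas would adjust the same pixel more than once; choosing non-overlapping areas, as suggested above, avoids such pixel adjustment conflicts.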
Referring to fig. 2A, in some embodiments, coordinating at least two depth of field cameras in the depth of field camera module to acquire the first depth of field information may include:
step 201: selecting one depth of field camera in the depth of field camera module to acquire depth of field information of the photographed object;
step 202: taking the acquired depth of field information of the photographed object as the first depth of field information.
Referring to fig. 2B, in some embodiments, coordinating at least two depth of field cameras in the depth of field camera module to acquire the first depth of field information may include:
step 211: selecting at least two depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object;
step 212: selecting the depth of field information of the photographed object acquired by one of the at least two depth of field cameras as the first depth of field information.
Referring to fig. 2C, in some embodiments, coordinating at least two depth of field cameras in the depth of field camera module to acquire the first depth of field information may include:
step 221: selecting all the depth of field cameras in the depth of field camera module to respectively acquire depth of field information of the photographed object;
step 222: selecting the depth of field information of the photographed object acquired by one of all the depth of field cameras as the first depth of field information.
In some embodiments, selecting one of the at least two depth of field cameras may include: selecting the depth of field camera in the best working state among the at least two depth of field cameras; or selecting the depth of field camera among the at least two with the highest accuracy in acquiring depth of field information.
In some embodiments, selecting one of all the depth of field cameras may include: selecting the depth of field camera in the best working state among all the depth of field cameras; or selecting the depth of field camera among all of them with the highest accuracy in acquiring depth of field information.
In some embodiments, whether selecting between two depth of field cameras or among three or more, the optimal depth of field camera may be selected based on its working state, accuracy, and the like. Optionally, the working state of a depth of field camera may include its working temperature, working load, and the like; the accuracy of a depth of field camera may include its factory-set accuracy, or the difference between its actual accuracy and its factory-set accuracy (a smaller difference indicating higher accuracy), and the like.
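A selection rule along these lines might look as follows. The status fields and the temperature threshold are illustrative assumptions: the text names working temperature, working load, and the deviation of actual accuracy from factory-set accuracy, but does not specify how they are combined.

```python
from dataclasses import dataclass

@dataclass
class DepthCameraStatus:
    camera_id: int
    working_temperature: float   # working state: degrees Celsius
    working_load: float          # working state: 0.0 (idle) to 1.0 (saturated)
    accuracy_deviation: float    # |actual - factory-set| accuracy; smaller is better

def select_depth_camera(statuses: list[DepthCameraStatus],
                        max_temp: float = 70.0) -> DepthCameraStatus:
    """Pick the depth of field camera whose output becomes the first depth
    of field information: prefer cameras within their temperature range,
    then the lowest load, then the smallest accuracy deviation."""
    healthy = [s for s in statuses if s.working_temperature <= max_temp]
    return min(healthy or statuses,
               key=lambda s: (s.working_load, s.accuracy_deviation))
```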
In some embodiments, at least one depth of field camera in the depth of field camera module may be a structured light camera or a time-of-flight (TOF) camera capable of acquiring first depth of field information of the photographed object that includes the depth of field of pixels. Optionally, the acquired first depth of field information may be presented in the form of a depth image.
Referring to fig. 3, in some embodiments, acquiring color images of a photographic subject by at least two color cameras may include:
step 231: acquiring a first color image through a first color camera and acquiring a second color image through a second color camera;
step 232: synthesizing the first color image and the second color image into a color composite image containing the second depth of field information, according to the spacing between the first color camera and the second color camera and their shooting angles.
In some embodiments, the first color camera and the second color camera may be the same color camera; alternatively, they may be different color cameras. In the latter case, in order to synthesize the color composite image smoothly, the first color image and the second color image may be aligned, corrected, and otherwise processed.
In some embodiments, a color composite image of the photographed object may also be acquired by the at least two color cameras in feasible ways other than the one shown in fig. 3. Optionally, the color composite image may be acquired based on parameters other than the spacing and the shooting angle. Optionally, more than two color cameras, for example three or more, may be used, as long as the color composite image can be successfully synthesized.
In some embodiments, the color composite image may include a left image half and a right image half; the left image half may be a color image, and the right image half may be a depth image.
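The patent does not disclose the synthesis algorithm itself, so the sketch below uses textbook rectified two-view stereo as a stand-in: disparity from a crude SSD block matcher, depth recovered as focal length times baseline (the camera spacing) over disparity, and a side-by-side composite with the color image in the left half and the depth image in the right half, matching the layout just described. The inputs are assumed rectified (shooting angles already compensated), and focal_px, baseline_m, and all function names are assumptions.

```python
import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 32, block: int = 5) -> np.ndarray:
    """Crude sum-of-squared-differences block matching between two
    rectified grayscale views; slow, for illustration only."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.sum((patch - right[y - r:y + r + 1,
                                           x - d - r:x - d + r + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

def side_by_side_composite(left_rgb: np.ndarray, right_rgb: np.ndarray,
                           focal_px: float, baseline_m: float) -> np.ndarray:
    """Left image half: the first color image; right image half: the depth
    image recovered as depth = focal_px * baseline_m / disparity."""
    d = disparity_map(left_rgb.mean(axis=2), right_rgb.mean(axis=2))
    depth = np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-6), 0.0)
    vis = (255 * depth / max(depth.max(), 1e-6)).astype(np.uint8)
    return np.hstack([left_rgb, np.repeat(vis[:, :, None], 3, axis=2)])
```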
The embodiments of the present disclosure provide a 3D shooting device 300 including a processor and a memory storing program instructions; the processor is configured to perform the above 3D shooting method when executing the program instructions.
In some embodiments, the 3D shooting device 300 shown in fig. 4 includes:
a processor 310 and a memory 320, and may further include a communication interface 330 and a bus 340. The processor 310, the communication interface 330, and the memory 320 may communicate with each other through the bus 340. The communication interface 330 may be used for information transfer. The processor 310 may call logic instructions in the memory 320 to perform the 3D shooting method of the above embodiments.
In addition, when sold or used as an independent product, the logic instructions in the memory 320 may be implemented in the form of a software functional unit and stored in a computer-readable storage medium.
The memory 320 is a computer-readable storage medium that can be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 310 executes functional applications and performs data processing, i.e., implements the 3D shooting method in the above method embodiments, by running the program instructions/modules stored in the memory 320.
The memory 320 may include a program storage area and a data storage area; the program storage area may store an operating system and application programs required for at least one function, while the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 320 may include high-speed random access memory and may also include non-volatile memory.
Referring to fig. 5, an embodiment of the present disclosure provides a 3D shooting device 300 including:
a depth of field camera module 410, including at least two depth of field cameras and configured to acquire first depth of field information of a photographed object by coordinating the at least two depth of field cameras;
a color camera module 420, including at least two color cameras and configured to acquire a color image of the photographed object that can be adjusted according to the first depth of field information.
The at least two color cameras may employ optical lenses and sensor chips with identical performance indexes.
In some embodiments, the depth of field camera module 410 may communicate with the color camera module 420 to send and receive captured or processed images.
Referring to fig. 6, in some embodiments, the 3D shooting device 300 may further include an image processor 430 configured to adjust the second depth of field information in the color image according to the first depth of field information.
In some embodiments, the image processor 430 may be further configured to 3D display the adjusted color image. Various feasible 3D display modes exist and are not detailed here, as long as the image processor 430 can achieve 3D display of the depth-adjusted color image.
In some embodiments, the image processor 430 may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference, so that the depth of field of the corresponding pixels approaches the depth of field of the pixels included in the first depth of field information, thereby reducing the difference between the two.
By comparison, the color images obtained by the at least two color cameras have high resolution but low depth of field accuracy, while the first depth of field information (which can be presented in the form of a depth image) obtained by the depth of field cameras has low resolution but high depth of field accuracy. The image processor 430 can therefore adjust the depth of field of the corresponding pixels included in the second depth of field information with the depth of field of the pixels included in the first depth of field information as a reference, bringing the former closer to the latter, reducing the difference between them, and effectively improving the accuracy of the depth of field of the corresponding pixels included in the second depth of field information.
In some embodiments, the image processor 430 may unify the sizes of the depth image and the color image before the adjustment; then perform feature capture and matching on the depth image and the color image based on the FOV of the depth of field camera and the color camera, so that each pixel in the depth image is matched, pixel by pixel, to a corresponding pixel in the color image. The depth of field of a pixel in the depth image can then be compared with the depth of field of the corresponding pixel in the color image, and the depth of field adjusted according to the comparison result.
In some embodiments, the image processor 430 may be configured to:
proportionally adjust the depth of field of the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
directly adjust the depth of field of the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the image processor 430 may adjust the depth of field of the corresponding pixel included in the second depth of field information toward the depth of field of the pixel included in the first depth of field information by a proportion of the difference between the two. For example, if the difference is 5 cm, the image processor 430 may adjust the depth of field of the corresponding pixel toward the reference by 10%, 20%, 30%, 50%, 80%, or another proportion of the 5 cm difference, as determined by the actual situation or a preset policy, i.e., by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on.
In some embodiments, the image processor 430 may also adjust the depth of field of the corresponding pixel included in the second depth of field information directly to the depth of field of the pixel included in the first depth of field information, i.e., by the full 5 cm difference in the example above.
When the depth of field is adjusted, because the resolution of the first depth of field information acquired by the depth of field cameras is low, the pixels of the depth image may correspond to only some of the pixels in the color image, so the depth of field of some or all of the pixels other than the corresponding pixels included in the second depth of field information may not be effectively adjusted. In this case, in some embodiments, the image processor 430 may be further configured to: adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information, with the depth of field of the pixels included in the first depth of field information as a reference, so that these pixels are also effectively adjusted and the accuracy of their depth of field is effectively improved.
In some embodiments, the image processor 430 may be configured to:
in a preset area, proportionally adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information toward the depth of field of the pixels included in the first depth of field information; or, alternatively,
in the preset area, directly adjust the depth of field of the pixels other than the corresponding pixels included in the second depth of field information to the depth of field of the pixels included in the first depth of field information.
In some embodiments, the preset area may be set according to the actual situation or a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth of field information and the non-corresponding pixels around it (i.e., pixels in the second depth of field information that do not correspond to any pixel in the first depth of field information); for example, the preset area may be a circular area centered on the single corresponding pixel, with a radius of, for example, half the distance to an adjacent corresponding pixel. Optionally, different preset areas may be kept non-overlapping to avoid possible pixel adjustment conflicts.
Optionally, the preset area may also include at least two corresponding pixels in the second depth of field information and the non-corresponding pixels around them; for example, when the depth of field adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circular area centered at the midpoint between the two corresponding pixels, with a radius of, for example, half the distance between them or a larger value. Alternatively, different preset areas may overlap, as long as pixel adjustment conflicts can be avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or a preset policy; for example, the preset area may be scaled up or down proportionally, and its shape may be an ellipse, a polygon, or the like.
In some embodiments, when the depth of field adjustment is performed in the preset area, the image processor 430 may adjust the depth of field of the corresponding pixel included in the second depth of field information toward the depth of field of the pixel included in the first depth of field information by a proportion of the difference between the two; for a 5 cm difference, for example, by 10%, 20%, 30%, 50%, 80%, or another proportion, i.e., by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on, as determined by the actual situation or a preset policy.
In some embodiments, when the depth of field adjustment is performed in the preset area, the image processor 430 may also adjust the depth of field of the corresponding pixel included in the second depth of field information directly to the depth of field of the pixel included in the first depth of field information, i.e., by the full difference.
In some embodiments, the depth of view camera module 410 may be configured to:
select one depth of field camera in the depth of field camera module 410 to acquire depth of field information of the photographed object, and take the acquired depth of field information as the first depth of field information; or
select at least two depth of field cameras in the depth of field camera module 410 to respectively acquire depth of field information of the photographed object, and select the depth of field information acquired by one of the at least two depth of field cameras as the first depth of field information; or
select all the depth of field cameras in the depth of field camera module 410 to respectively acquire depth of field information of the photographed object, and select the depth of field information acquired by one of all the depth of field cameras as the first depth of field information.
In some embodiments, the depth of view camera module 410 may be configured to:
in the case of selecting one of the at least two depth of field cameras, select the depth of field camera in the best working state among the at least two, or select the one among the at least two with the highest accuracy in acquiring depth of field information;
or, alternatively,
in the case of selecting one of all the depth of field cameras, select the depth of field camera in the best working state among all of them, or select the one among all of them with the highest accuracy in acquiring depth of field information.
Referring to fig. 7, in some embodiments, the depth of view camera module 410 may include:
a first depth of field camera 411 configured to acquire depth of field information of the photographed object; and
a second depth of field camera 412 configured to acquire depth of field information of the photographed object.
In some embodiments, the first depth of field camera 411 and the second depth of field camera 412 may be the same depth of field camera; alternatively, they may be different depth of field cameras.
In some embodiments, the depth of field camera module 410 may also include more than two depth of field cameras.
In some embodiments, in addition to the depth of field cameras, the depth of field camera module 410 may further include a controller capable of controlling the depth of field cameras, so as to effectively control their operation.
In some embodiments, whether selecting between two depth of field cameras or among three or more, the optimal depth of field camera may be selected based on its working state, accuracy, and the like. Optionally, the working state of a depth of field camera may include its working temperature, working load, and the like; the accuracy of a depth of field camera may include its factory-set accuracy, or the difference between its actual accuracy and its factory-set accuracy (a smaller difference indicating higher accuracy), and the like.
In some embodiments, at least one depth of field camera in the depth of field camera module 410 may be a structured light camera or a TOF camera capable of acquiring first depth of field information of the photographed object that includes the depth of field of pixels. Optionally, the acquired first depth of field information may be presented in the form of a depth image.
In some embodiments, at least one depth of field camera in the depth of field camera module 410 may be a TOF camera, which may be located between two color cameras in the color camera module 420, or at other locations around the color cameras. Optionally, the depth of field cameras in the depth of field camera module 410 may also be arranged in alignment with the same number of color cameras in the color camera module 420; for example, two depth of field cameras in the depth of field camera module 410 may be aligned with two color cameras in the color camera module 420.
Referring to fig. 8, in some embodiments, the color camera module 420 may include:
a first color camera 421 configured to acquire a first color image;
a second color camera 422 configured to acquire a second color image;
Optionally, the image processor 430 may be configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth of field information, according to the spacing between the first color camera 421 and the second color camera 422 and their shooting angles.
In some embodiments, the first color camera 421 and the second color camera 422 may be the same color camera; alternatively, they may be different color cameras. In the latter case, in order to synthesize the color composite image smoothly, the first color image and the second color image may be aligned, corrected, and otherwise processed.
In some embodiments, the color camera module 420 may also obtain a color composite image of the photographed object through the at least two color cameras in feasible ways other than the one shown in fig. 8. Optionally, the color camera module 420 may acquire the color composite image based on parameters other than the spacing and the shooting angle. Optionally, more than two color cameras, for example three or more, may be used, as long as the color composite image can be successfully synthesized.
In some embodiments, in addition to the color cameras, the color camera module 420 may further include a controller capable of controlling the color cameras, so as to effectively control their operation and smoothly realize synthesis of the color composite image.
In some embodiments, the image processor 430 may be a 3D image processor based on a high-speed computing chip such as a CPU, a field programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). Optionally, the 3D image processor may take the form of a chip, a single-chip device, or the like.
Referring to fig. 9, the present disclosure provides a 3D display terminal 500, which includes the 3D photographing device 300 composed of the depth-of-field camera module 410 and the color camera module 420. Optionally, the 3D display terminal 500 may further include the image processor 430.
In some embodiments, the 3D display terminal 500 may further include components that support its normal operation, for example, at least one of a light guide plate, a polarizer, a glass substrate, a liquid crystal layer, and a filter.
In some embodiments, the 3D display terminal 500 may be provided in a 3D display. Optionally, the 3D display may further include components that support its normal operation, for example, at least one of a backlight module, a main board, and a back board.
The 3D shooting method, device, and 3D display terminal provided by the embodiments of the present disclosure coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth-of-field adjustment on the color image acquired by the color camera module, and can thereby effectively improve the depth-of-field accuracy of the color image.
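As a concrete illustration of this adjustment, the following Python sketch nudges the second depth-of-field information of the color image toward the first depth-of-field information pixel by pixel. It covers the two manners recited in the claims below (proportional adjustment and direct replacement); the array representation, the alpha blending factor, and the zero-as-invalid convention are illustrative assumptions, not part of the disclosure.

    import numpy as np

    def adjust_second_depth(second_depth, first_depth, alpha=0.5, proportional=True):
        # Adjust the second depth-of-field information (from the color composite
        # image) using the first depth-of-field information as a reference.
        # alpha is a hypothetical proportion; 0 leaves a pixel unchanged,
        # 1 replaces it with the reference outright.
        valid = first_depth > 0  # pixels for which a reference depth exists
        adjusted = second_depth.astype(np.float32).copy()
        if proportional:
            # Move the corresponding pixels part of the way toward the reference.
            adjusted[valid] += alpha * (first_depth[valid] - adjusted[valid])
        else:
            # Or adjust them directly to the reference depth of field.
            adjusted[valid] = first_depth[valid]
        return adjusted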
The disclosed embodiments also provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-mentioned 3D photographing method.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the above-mentioned 3D photographing method.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The computer-readable storage medium and the computer program product provided by the embodiments of the present disclosure can coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth-of-field adjustment on a color image acquired by the color camera module, and can effectively improve the depth-of-field accuracy of the color image.
In some embodiments, the 3D techniques described above may include naked-eye 3D techniques; that is, the 3D shooting device and the 3D display terminal can realize functions related to naked-eye 3D, such as the shooting and display of naked-eye 3D images.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes; the examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the disclosed embodiments includes the full ambit of the claims, as well as all available equivalents of the claims.

Although the terms "first", "second", etc. may be used in this application to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first and second elements are both elements, but may not be the same element.

Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. The terms "comprises" and/or "comprising", when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, or device that comprises that element.

In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same or similar parts of the respective embodiments may be referred to one another. For methods, products, etc. disclosed in the embodiments, where they correspond to a method section disclosed herein, reference may be made to the description of that method section where relevant.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be merely a division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (26)

1. A 3D shooting method, applicable to a depth-of-field camera module comprising at least two depth-of-field cameras and a color camera module comprising at least two color cameras, characterized in that the method comprises:
coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire first depth-of-field information of a photographic subject, and acquiring, through at least two color cameras in the color camera module, a color image of the photographic subject that can be adjusted according to the first depth-of-field information.
2. The method of claim 1, further comprising: adjusting second depth-of-field information in the color image according to the first depth-of-field information.
3. The method of claim 2, further comprising: performing 3D display of the adjusted color image.
4. The method of claim 2, wherein adjusting the second depth-of-field information in the color image according to the first depth-of-field information comprises:
adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so that the depth of field of the corresponding pixels included in the second depth-of-field information approaches the depth of field of the pixels included in the first depth-of-field information.
5. The method according to claim 4, wherein adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference comprises:
proportionally adjusting the depth of field of the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information; or,
adjusting the depth of field of the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
6. The method of claim 4 or 5, further comprising: adjusting the depth of field of the pixels, other than the corresponding pixels, included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference.
7. The method according to claim 6, wherein adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference comprises:
within a preset area, proportionally adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information; or,
within a preset area, adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
8. The method according to any one of claims 1 to 7, wherein coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire the first depth-of-field information comprises:
selecting one depth-of-field camera in the depth-of-field camera module to acquire depth-of-field information of the photographic subject, and taking the acquired depth-of-field information of the photographic subject as the first depth-of-field information; or
selecting at least two depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of the photographic subject, and selecting the depth-of-field information of the photographic subject acquired by one of the at least two depth-of-field cameras as the first depth-of-field information; or
selecting all the depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of the photographic subject, and selecting the depth-of-field information of the photographic subject acquired by one of the depth-of-field cameras as the first depth-of-field information.
9. The method of claim 8, wherein
selecting one of the at least two depth-of-field cameras comprises: selecting the one of the at least two depth-of-field cameras that is in the best working state; or selecting the one of the at least two depth-of-field cameras with the highest accuracy of acquiring depth-of-field information;
or,
selecting one of all the depth-of-field cameras comprises: selecting the one of all the depth-of-field cameras that is in the best working state; or selecting the one of all the depth-of-field cameras with the highest accuracy of acquiring depth-of-field information.
10. The method according to any one of claims 1 to 9, wherein acquiring the color image of the photographic subject through at least two color cameras comprises:
acquiring a first color image through a first color camera, and acquiring a second color image through a second color camera; and
synthesizing the first color image and the second color image into a color composite image containing the second depth-of-field information, according to the spacing and shooting angle between the first color camera and the second color camera.
11. The method of claim 10, wherein the color composite image comprises a left half image and a right half image;
the left half image is a color image, and the right half image is a depth image.
12. A 3D photographing apparatus comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method according to any one of claims 1 to 11 when executing the program instructions.
13. A 3D photographing apparatus, comprising:
a depth-of-field camera module comprising at least two depth-of-field cameras, configured to coordinate the at least two depth-of-field cameras to acquire first depth-of-field information of a photographic subject; and
a color camera module comprising at least two color cameras, configured to acquire a color image of the photographic subject that can be adjusted according to the first depth-of-field information.
14. The apparatus of claim 13, further comprising: an image processor configured to adjust second depth-of-field information in the color image according to the first depth-of-field information.
15. The apparatus of claim 14, wherein the image processor is further configured to perform 3D display of the adjusted color image.
16. The apparatus of claim 14, wherein the image processor is configured to:
adjust the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so that the depth of field of the corresponding pixels included in the second depth-of-field information approaches the depth of field of the pixels included in the first depth-of-field information.
17. The apparatus of claim 16, wherein the image processor is configured to:
proportionally adjust the depth of field of the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information; or,
adjust the depth of field of the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
18. The apparatus of claim 16 or 17, wherein the image processor is further configured to: adjust the depth of field of the pixels, other than the corresponding pixels, included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference.
19. The apparatus of claim 18, wherein the image processor is configured to:
within a preset area, proportionally adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information; or,
within a preset area, adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
20. The apparatus of any one of claims 13 to 19, wherein the depth-of-field camera module is configured to:
select one depth-of-field camera in the depth-of-field camera module to acquire depth-of-field information of the photographic subject, and take the acquired depth-of-field information of the photographic subject as the first depth-of-field information; or
select at least two depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of the photographic subject, and select the depth-of-field information of the photographic subject acquired by one of the at least two depth-of-field cameras as the first depth-of-field information; or
select all the depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of the photographic subject, and select the depth-of-field information of the photographic subject acquired by one of the depth-of-field cameras as the first depth-of-field information.
21. The apparatus of claim 20, wherein the depth-of-field camera module is configured to:
in the case of selecting one of the at least two depth-of-field cameras, select the one of the at least two depth-of-field cameras that is in the best working state, or select the one of the at least two depth-of-field cameras with the highest accuracy of acquiring depth-of-field information;
or,
in the case of selecting one of all the depth-of-field cameras, select the one of all the depth-of-field cameras that is in the best working state, or select the one of all the depth-of-field cameras with the highest accuracy of acquiring depth-of-field information.
22. The apparatus of claim 20, wherein at least one depth-of-field camera in the depth-of-field camera module is a structured-light camera or a time-of-flight (TOF) camera.
23. The apparatus of claim 22, wherein at least one depth-of-field camera in the depth-of-field camera module is a TOF camera, the TOF camera being located between two of the color cameras in the color camera module.
24. The apparatus of any one of claims 13 to 23, wherein
the color camera module comprises:
a first color camera configured to acquire a first color image; and
a second color camera configured to acquire a second color image;
and the image processor is configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth-of-field information, according to the spacing and shooting angle between the first color camera and the second color camera.
25. The apparatus of claim 13, wherein at least two color cameras in the color camera module employ optical lenses and sensor chips with the same performance indices.
26. A 3D display terminal, characterized in that it comprises the apparatus according to claim 12 or any one of claims 13 to 25.
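By way of illustration only (no code forms part of the claims), the left-color / right-depth layout of the color composite image recited in claim 11 might be packed as in the following Python sketch; the 8-bit normalization of the depth half is an assumption, as the claims do not specify a depth encoding.

    import numpy as np

    def pack_composite(color_bgr, depth_m):
        # Left half: the color image; right half: its depth image,
        # normalized here to 8-bit gray (an assumed encoding).
        h, w = depth_m.shape
        depth_u8 = np.zeros((h, w), dtype=np.uint8)
        valid = depth_m > 0
        if valid.any():
            d = depth_m[valid]
            span = max(float(d.max() - d.min()), 1e-6)
            depth_u8[valid] = (255.0 * (d - d.min()) / span).astype(np.uint8)
        right = np.repeat(depth_u8[:, :, None], 3, axis=2)  # gray -> 3 channels
        return np.hstack([color_bgr, right])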
CN202010072955.3A 2020-01-20 2020-01-20 3D shooting method and device and 3D display terminal Pending CN113141496A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010072955.3A CN113141496A (en) 2020-01-20 2020-01-20 3D shooting method and device and 3D display terminal
PCT/CN2021/071701 WO2021147753A1 (en) 2020-01-20 2021-01-14 3d photographing method and apparatus, and 3d display terminal
TW110101859A TW202130168A (en) 2020-01-20 2021-01-18 3D photographing method and apparatus, and 3D display terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010072955.3A CN113141496A (en) 2020-01-20 2020-01-20 3D shooting method and device and 3D display terminal

Publications (1)

Publication Number Publication Date
CN113141496A true CN113141496A (en) 2021-07-20

Family

ID=76809212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010072955.3A Pending CN113141496A (en) 2020-01-20 2020-01-20 3D shooting method and device and 3D display terminal

Country Status (3)

Country Link
CN (1) CN113141496A (en)
TW (1) TW202130168A (en)
WO (1) WO2021147753A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668219B (en) * 2008-09-02 2012-05-23 华为终端有限公司 Communication method, transmitting equipment and system for 3D video
EP2382791B1 (en) * 2009-01-27 2014-12-17 Telefonaktiebolaget L M Ericsson (PUBL) Depth and video co-processing
US10404969B2 (en) * 2015-01-20 2019-09-03 Qualcomm Incorporated Method and apparatus for multiple technology depth map acquisition and fusion
CN107666606B (en) * 2016-07-29 2019-07-12 东南大学 Binocular panoramic picture acquisition methods and device
CN107635129B (en) * 2017-09-29 2020-06-16 上海安威士科技股份有限公司 Three-dimensional trinocular camera device and depth fusion method

Also Published As

Publication number Publication date
WO2021147753A1 (en) 2021-07-29
TW202130168A (en) 2021-08-01

Similar Documents

Publication Publication Date Title
KR101991754B1 (en) Image processing method and apparatus, and electronic device
CN109906599B (en) Terminal photographing method and terminal
CN106664357B (en) Imaging device, imaging-display device and its control method
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN109903321A (en) Image processing method, image processing apparatus and storage medium
US20220182582A1 (en) Image processing method and apparatus, device and storage medium
CN112053314B (en) Image fusion method, device, computer equipment, medium and thermal infrared imager
KR101714213B1 (en) Apparatus for revising image distortion of lens
CN108600644B (en) Photographing method and device and wearable device
CN109725701B (en) Display panel and device, image processing method and device, and virtual reality system
CN112470192A (en) Dual-camera calibration method, electronic device and computer-readable storage medium
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
KR20220073824A (en) Image processing method, image processing apparatus, and electronic device applying the same
CN111279393A (en) Camera calibration method, device, equipment and storage medium
CN212628181U (en) 3D shooting device and 3D display terminal
CN109697737B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN113141496A (en) 3D shooting method and device and 3D display terminal
CN107203961A (en) A kind of method and electronic equipment of migration of expressing one's feelings
CN112584121A (en) 3D shooting method and device and 3D display terminal
CN104754316A (en) 3D imaging method and device and imaging system
CN107621743B (en) Projection system and method for correcting distortion of projected image
CN114979614A (en) Display mode determining method and display mode determining device
CN211296858U (en) 3D shooting device and 3D display terminal
CN113888435A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN109379521A (en) Camera calibration method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination