CN212628181U - 3D shooting device and 3D display terminal - Google Patents

3D shooting device and 3D display terminal

Info

Publication number
CN212628181U
Authority
CN
China
Prior art keywords
depth, field, color, camera, information
Legal status
Active
Application number
CN202020135250.7U
Other languages
Chinese (zh)
Inventor
刁鸿浩
黄玲溪
Current Assignee
Beijing Ivisual 3D Technology Co Ltd
Original Assignee
Vision Technology Venture Capital Pte Ltd
Beijing Ivisual 3D Technology Co Ltd
Application filed by Vision Technology Venture Capital Pte Ltd and Beijing Ivisual 3D Technology Co Ltd
Priority to CN202020135250.7U
Application granted
Publication of CN212628181U

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

This application relates to the field of 3D technology and discloses a 3D shooting device, comprising: a depth-of-field camera module, which comprises at least two depth-of-field cameras and is configured to acquire first depth-of-field information of a photographed object by coordinating the at least two depth-of-field cameras; and a color camera module, which comprises at least two color cameras and is configured to acquire a color image of the photographed object that can be adjusted according to the first depth-of-field information. The 3D shooting device can coordinate the at least two depth-of-field cameras in the depth-of-field camera module to adjust the depth of field of the color image acquired by the color camera module, which can effectively improve the depth-of-field accuracy of the color image. The application also discloses a 3D display terminal.

Description

3D shooting device and 3D display terminal
Technical Field
The present application relates to the field of 3D technologies, and relates, for example, to a 3D shooting device and a 3D display terminal.
Background
At present, some terminals are provided with two different types of cameras to acquire depth information of a photographed object for 3D display.
In the process of implementing the embodiments of the present disclosure, it was found that the related art has at least the following problem:
the accuracy of depth-of-field information acquired through only two cameras is low.
SUMMARY OF THE UTILITY MODEL
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of the embodiments; rather, it serves as a prelude to the more detailed description presented later.
The embodiment of the disclosure provides a 3D shooting device and a 3D display terminal, so as to solve the technical problem that the accuracy of depth of field information acquired only through two cameras is low.
The 3D shooting device provided by the embodiments of the present disclosure includes:
a depth-of-field camera module, comprising at least two depth-of-field cameras, configured to acquire first depth-of-field information of a photographed object by coordinating the at least two depth-of-field cameras;
and a color camera module, comprising at least two color cameras, configured to acquire a color image of the photographed object that can be adjusted according to the first depth-of-field information.
In some embodiments, the device may further include: an image processor configured to adjust second depth-of-field information in the color image according to the first depth-of-field information.
In some embodiments, the image processor may be further configured to display the adjusted color image in 3D.
In some embodiments, the image processor may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so that the depth of field of the corresponding pixels included in the second depth-of-field information approaches the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the image processor may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
adjust the depth of field of the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the image processor may be further configured to: adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference.
In some embodiments, the image processor may be configured to:
in a preset area, adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
in a preset area, adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the depth-of-field camera module may be configured to:
select one depth-of-field camera in the depth-of-field camera module to acquire depth-of-field information of a photographed object, and take the acquired depth-of-field information as the first depth-of-field information; or
select at least two depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of a photographed object, and select the depth-of-field information acquired by one of the at least two depth-of-field cameras as the first depth-of-field information; or
select all the depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of a photographed object, and select the depth-of-field information acquired by one of them as the first depth-of-field information.
In some embodiments, the depth-of-field camera module may be configured to:
when one of the at least two depth-of-field cameras is to be selected, select the one in the best working state among the at least two depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information;
or, alternatively,
when one of all the depth-of-field cameras is to be selected, select the one in the best working state among all the depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information.
In some embodiments, at least one depth of field camera of the depth of field camera module may be a structured light camera or a time of flight (TOF) camera.
In some embodiments, at least one depth of field camera of the depth of field camera module may be a TOF camera, which may be located between two color cameras of the color camera module.
In some embodiments, the color camera module may include:
a first color camera configured to acquire a first color image;
a second color camera configured to acquire a second color image;
optionally, an image processor may be configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth-of-field information according to the spacing between the first color camera and the second color camera and their shooting angles.
In some embodiments, at least two color cameras in the color camera module may employ optical lenses and sensor chips with the same performance index.
The 3D display terminal provided by the embodiment of the disclosure comprises the 3D shooting device.
The 3D shooting device and the 3D display terminal provided by the embodiment of the disclosure can realize the following technical effects:
at least two depth of field cameras in the depth of field camera module can be coordinated to adjust the depth of field of the color image acquired by the color camera module, and the depth of field accuracy of the color image can be effectively improved.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting; elements bearing the same reference numerals denote like elements, wherein:
fig. 1 is a flowchart of a 3D shooting method provided by an embodiment of the present disclosure;
fig. 2A, fig. 2B, and fig. 2C are flow charts of another 3D shooting method provided by the embodiment of the disclosure, respectively;
fig. 3 is a flowchart of another 3D photographing method provided by an embodiment of the present disclosure;
fig. 4 is a structural diagram of a 3D photographing apparatus provided in an embodiment of the present disclosure;
fig. 5 is a structural diagram of still another 3D photographing apparatus provided by an embodiment of the present disclosure;
fig. 6 is a structural diagram of still another 3D photographing apparatus provided by an embodiment of the present disclosure;
fig. 7 is a structural diagram of still another 3D photographing apparatus provided by an embodiment of the present disclosure;
fig. 8 is a structural diagram of still another 3D photographing apparatus provided by an embodiment of the present disclosure;
fig. 9 is a device structure diagram of a 3D display terminal provided in an embodiment of the present disclosure.
Reference numerals:
300: a 3D camera; 310: a processor; 320: a memory; 330: a communication interface; 340: a bus; 410: a depth-of-field camera module; 411: a first depth-of-field camera; 412: a second depth-of-field camera; 420: a color camera module; 421: a first color camera; 422: a second color camera; 430: an image processor; 500: a 3D display terminal.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
Referring to fig. 1, the present disclosure provides a 3D shooting method, applicable to a depth-of-field camera module comprising at least two depth-of-field cameras and a color camera module comprising at least two color cameras, the method comprising:
step 110: coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire first depth-of-field information of a photographed object;
step 120: acquiring, through at least two color cameras in the color camera module, a color image of the photographed object that can be adjusted according to the first depth-of-field information.
In some embodiments, the 3D shooting method may further include: adjusting the second depth-of-field information in the color image according to the first depth-of-field information.
In some embodiments, the adjusted color image may also be displayed in 3D. Various feasible 3D display modes exist and are not detailed here, as long as 3D display of the depth-adjusted color image can be achieved.
In some embodiments, adjusting the second depth-of-field information in the color image according to the first depth-of-field information may include:
adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so that the depth of field of the corresponding pixels included in the second depth-of-field information approaches the depth of field of the pixels included in the first depth-of-field information, reducing the difference between the two.
In comparison, the color images obtained by the at least two color cameras have high resolution and low depth of field accuracy, and the first depth of field information (which can be presented in the form of depth of field images) obtained by the depth of field cameras has low resolution and high depth of field accuracy. Therefore, the depth of field of the corresponding pixel included in the second depth of field information can be adjusted with the depth of field of the pixel included in the first depth of field information as a reference, so that the depth of field of the corresponding pixel included in the second depth of field information can be brought closer to the depth of field of the pixel included in the first depth of field information, a difference between the depth of field of the corresponding pixel included in the second depth of field information and the depth of field of the pixel included in the first depth of field information can be reduced, and accuracy of the depth of field of the corresponding pixel included in the second depth of field information can be effectively improved.
In some embodiments, the sizes of the depth-of-field image and the color image may be unified before adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information (the depth-of-field image) as a reference; then, feature extraction and matching are performed on the depth-of-field image and the color image based on the fields of view (FOV) of the depth-of-field camera and the color camera, so that pixels in the depth-of-field image are mapped, pixel by pixel, to corresponding pixels in the color image; the depth of field of each pixel in the depth-of-field image can then be compared with the depth of field of the corresponding pixel in the color image, and the depth of field adjusted according to the comparison result.
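By way of illustration only (the following sketch is not part of the patent disclosure), the size unification and pixel correspondence described above might be organized as below; the function names are hypothetical, OpenCV is an assumed dependency, and a plain nearest-neighbor resize stands in for a calibrated FOV-based mapping:

```python
import numpy as np
import cv2  # assumed dependency; any resampling routine would do


def align_depth_to_color(depth_map: np.ndarray, color_shape: tuple) -> np.ndarray:
    """Unify the sizes of the depth-of-field image and the color image.

    A real device would map pixels using the calibrated FOVs of the
    depth-of-field camera and the color camera; nearest-neighbor resizing
    is a simplifying stand-in that preserves the blocky character of the
    low-resolution depth map.
    """
    h, w = color_shape[:2]
    return cv2.resize(depth_map, (w, h), interpolation=cv2.INTER_NEAREST)


def depth_differences(aligned_first_depth: np.ndarray,
                      second_depth: np.ndarray) -> np.ndarray:
    """Compare, pixel by pixel, the reference depth from the depth-of-field
    camera (first depth-of-field information) with the depth recovered from
    the color images (second depth-of-field information)."""
    return second_depth - aligned_first_depth
```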
In some embodiments, adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with reference to the depth of field of the pixels included in the first depth-of-field information may include:
adjusting the depth of field of the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
adjusting the depth of field of the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the depth of field of a corresponding pixel included in the second depth-of-field information may be adjusted toward the depth of field of the pixel included in the first depth-of-field information by a proportion of the difference between the two. For example, if the difference is 5 cm, the depth of field of the corresponding pixel may, according to the actual situation or a preset policy, be adjusted by 10%, 20%, 30%, 50%, 80%, etc. of the 5 cm, that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, etc.
In some embodiments, the depth of field of a corresponding pixel included in the second depth-of-field information may also be adjusted directly to the depth of field of the pixel included in the first depth-of-field information. For example, if the difference between the two is 5 cm, the depth of field of the corresponding pixel may, according to the actual situation or a preset policy, be adjusted directly by the full 5 cm.
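Purely as an illustrative sketch (not part of the patent text), the proportional and direct adjustments described in the two paragraphs above can be expressed as one interpolation, where a ratio of 1.0 corresponds to direct adjustment; the names and centimeter units are assumptions:

```python
import numpy as np


def adjust_depth(second_depth: np.ndarray, first_depth: np.ndarray,
                 ratio: float = 1.0) -> np.ndarray:
    """Move the depth of corresponding pixels in the second depth-of-field
    information toward the first depth-of-field information.

    ratio=1.0 adjusts fully to the reference value; ratio=0.2 closes 20%
    of the difference (a 5 cm gap shrinks by 1 cm).
    """
    return second_depth + ratio * (first_depth - second_depth)


# Example: a corresponding pixel differs from the reference depth by 5 cm.
second = np.array([105.0])  # cm, from the color composite image
first = np.array([100.0])   # cm, from the depth-of-field camera
print(adjust_depth(second, first, ratio=0.2))  # [104.] -- adjusted by 1 cm
print(adjust_depth(second, first, ratio=1.0))  # [100.] -- direct adjustment
```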
When the depth of field is adjusted, because the resolution of the first depth-of-field information acquired by the depth-of-field camera is low, all the pixels in the depth-of-field image may correspond to only some of the pixels in the color composite image, so the depth of field of some or all of the pixels other than the corresponding pixels included in the second depth-of-field information may not be effectively adjusted. In this case, in some embodiments, the 3D shooting method may further include: adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so as to effectively adjust those pixels and effectively improve the depth-of-field accuracy.
In some embodiments, adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference may include:
in a preset area, adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
in a preset area, adjusting the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the preset area may be set according to the actual situation or a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth-of-field information and the non-corresponding pixels around it (i.e., pixels in the second depth-of-field information that do not correspond to any pixel in the first depth-of-field information); for example, the preset area may be a circular area centered on the single corresponding pixel, with a radius of, e.g., half the distance to the adjacent corresponding pixel. Optionally, different preset areas may be kept non-overlapping to avoid possible pixel-adjustment conflicts.
Optionally, the preset area may also include at least two corresponding pixels in the second depth-of-field information and the non-corresponding pixels around them. For example, when the depth-of-field adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circular area centered on the midpoint of the two corresponding pixels, with a radius of a larger value such as half the distance between them. Alternatively, different preset areas may overlap, as long as possible pixel-adjustment conflicts can be avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or a preset policy; for example, the preset area may be scaled up or down, and its shape may be an ellipse, a polygon, etc.
In some embodiments, when the depth-of-field adjustment is performed in the preset area, the depth of field of a corresponding pixel included in the second depth-of-field information may be adjusted toward the depth of field of the pixel included in the first depth-of-field information by a proportion of the difference between the two; for example, a 5 cm difference may, according to the actual situation or a preset policy, be adjusted by 10%, 20%, 30%, 50%, 80%, etc. of the 5 cm, that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, etc.
In some embodiments, when the depth-of-field adjustment is performed in the preset area, the depth of field of a corresponding pixel included in the second depth-of-field information may also be adjusted directly to the depth of field of the pixel included in the first depth-of-field information; for example, a 5 cm difference may, according to the actual situation or a preset policy, be adjusted directly by the full 5 cm.
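The sketch below is illustrative only and encodes one of the options above: circular preset areas around each corresponding pixel, kept small enough (radius at most half the spacing of corresponding pixels) that they do not overlap; all names are assumptions:

```python
import numpy as np


def adjust_preset_areas(second_depth: np.ndarray, corr_mask: np.ndarray,
                        first_depth: np.ndarray, radius: int,
                        ratio: float = 1.0) -> np.ndarray:
    """Within a circular preset area around each corresponding pixel, pull
    the non-corresponding pixels toward that pixel's reference depth.

    corr_mask marks corresponding pixels; choosing radius <= half the spacing
    between corresponding pixels keeps preset areas from overlapping, which
    avoids pixel-adjustment conflicts.
    """
    h, w = second_depth.shape
    out = second_depth.copy()
    for cy, cx in zip(*np.nonzero(corr_mask)):
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
        yy, xx = np.ogrid[y0:y1, x0:x1]
        circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        region = out[y0:y1, x0:x1]
        # Close a fraction `ratio` of the gap to the reference depth.
        region[circle] += ratio * (first_depth[cy, cx] - region[circle])
    return out
```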
Referring to fig. 2A, in some embodiments, coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire the first depth-of-field information may include:
step 201: selecting one depth-of-field camera in the depth-of-field camera module to acquire depth-of-field information of a photographed object;
step 202: taking the acquired depth-of-field information of the photographed object as the first depth-of-field information.
Referring to fig. 2B, in some embodiments, coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire the first depth-of-field information may include:
step 211: selecting at least two depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of a photographed object;
step 212: selecting the depth-of-field information of the photographed object acquired by one of the at least two depth-of-field cameras as the first depth-of-field information.
Referring to fig. 2C, in some embodiments, coordinating at least two depth-of-field cameras in the depth-of-field camera module to acquire the first depth-of-field information may include:
step 221: selecting all the depth-of-field cameras in the depth-of-field camera module to respectively acquire depth-of-field information of a photographed object;
step 222: selecting the depth-of-field information of the photographed object acquired by one of all the depth-of-field cameras as the first depth-of-field information.
In some embodiments, selecting one of the at least two depth-of-field cameras may include: selecting the one in the best working state among the at least two depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information.
In some embodiments, selecting one of all the depth-of-field cameras includes: selecting the one in the best working state among all the depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information.
In some embodiments, whether selecting between two depth-of-field cameras or among three or more, the optimal depth-of-field camera may be selected based on its working state, accuracy, and the like. Optionally, the working state of a depth-of-field camera may include its working temperature, working load, and the like; the accuracy of a depth-of-field camera may include its factory-set accuracy, or the difference between its actual accuracy and its factory-set accuracy (the smaller the difference, the higher the accuracy), and the like.
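As an illustration of these selection options (a sketch under assumed metrics; the patent does not prescribe a scoring formula), a depth-of-field camera might be picked by working state or by accuracy as follows:

```python
from dataclasses import dataclass


@dataclass
class DepthCamera:
    name: str
    temperature_c: float   # working temperature
    load: float            # working load, 0.0-1.0
    accuracy_error: float  # |actual - factory-set accuracy|; smaller is better


def pick_depth_camera(cameras: list, by: str = "state") -> DepthCamera:
    """Select the camera in the best working state (here: least loaded,
    then coolest) or with the highest accuracy (smallest deviation from
    its factory-set accuracy)."""
    if by == "state":
        return min(cameras, key=lambda c: (c.load, c.temperature_c))
    return min(cameras, key=lambda c: c.accuracy_error)


cams = [DepthCamera("tof_0", 41.0, 0.7, 0.8),
        DepthCamera("tof_1", 35.5, 0.2, 1.1)]
print(pick_depth_camera(cams, by="state").name)     # tof_1
print(pick_depth_camera(cams, by="accuracy").name)  # tof_0
```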
In some embodiments, at least one depth-of-field camera in the depth-of-field camera module may be a structured-light camera or a time-of-flight (TOF) camera, capable of acquiring first depth-of-field information of the photographed object that includes the depth of field of pixels. Optionally, the acquired first depth-of-field information may be presented in the form of a depth-of-field image.
Referring to fig. 3, in some embodiments, acquiring color images of a photographed object by at least two color cameras may include:
step 231: acquiring a first color image through a first color camera and acquiring a second color image through a second color camera;
step 232: synthesizing the first color image and the second color image into a color composite image containing the second depth-of-field information according to the spacing between the first color camera and the second color camera and their shooting angles.
In some embodiments, the first color camera and the second color camera may be identical color cameras. Alternatively, they may be different color cameras; in this case, in order to smoothly synthesize the color composite image, the first color image and the second color image may be subjected to processing such as alignment and correction.
In some embodiments, a color composite image of the photographic subject may also be acquired by at least two color cameras in other possible ways than that shown in fig. 3. Alternatively, the color composite image may be acquired based on parameters other than the pitch and the shooting angle. Optionally, more than two color cameras may be used to obtain the color composite image, for example: three or more color cameras as long as a color composite image can be successfully synthesized.
In some embodiments, the color composite image may include a left half and a right half, where the left half may be a color image and the right half a depth-of-field image.
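For illustration only, the sketch below combines two assumed steps consistent with this description: recovering the second depth-of-field information from the disparity between the two color images via the standard stereo relation depth = focal length × baseline / disparity (the baseline being the spacing between the cameras, with shooting angles assumed rectified away), and packing the result side by side with the color image on the left and the depth-of-field image on the right:

```python
import numpy as np


def disparity_to_depth(disparity: np.ndarray, baseline_m: float,
                       focal_px: float) -> np.ndarray:
    """Pinhole stereo relation: depth = focal * baseline / disparity."""
    d = np.where(disparity > 0, disparity.astype(np.float64), np.nan)
    return focal_px * baseline_m / d


def side_by_side_composite(color_img: np.ndarray,
                           depth_img: np.ndarray) -> np.ndarray:
    """Left half: the color image; right half: the depth-of-field image,
    rendered as an 8-bit, 3-channel picture for packing."""
    lo, hi = np.nanmin(depth_img), np.nanmax(depth_img)
    norm = np.nan_to_num((depth_img - lo) / max(hi - lo, 1e-9))
    depth8 = np.repeat((255 * norm).astype(np.uint8)[..., None], 3, axis=-1)
    return np.concatenate([color_img, depth8], axis=1)
```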
The disclosed embodiment provides a 3D camera 300 comprising a processor and a memory storing program instructions; the processor is configured to, upon execution of the program instructions, perform the 3D photographing method described above.
In some embodiments, the 3D camera 300 as shown in fig. 4 includes:
a processor 310 and a memory 320, and may further include a communication interface 330 and a bus 340. The processor 310, the communication interface 330, and the memory 320 may communicate with each other through the bus 340. The communication interface 330 may be used for information transfer. The processor 310 may call logic instructions in the memory 320 to perform the 3D shooting method of the above embodiments.
In addition, the logic instructions in the memory 320 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products.
The memory 320 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 310 executes functional applications and data processing, i.e., implements the 3D photographing method in the above-described method embodiments, by executing program instructions/modules stored in the memory 320.
The memory 320 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, memory 320 may include high speed random access memory and may also include non-volatile memory.
Referring to fig. 5, an embodiment of the present disclosure provides a 3D photographing apparatus 300, including:
a depth-of-field camera module 410, comprising at least two depth-of-field cameras, configured to acquire first depth-of-field information of a photographed object by coordinating the at least two depth-of-field cameras;
a color camera module 420, comprising at least two color cameras, configured to acquire a color image of the photographed object that is adjustable according to the first depth-of-field information;
the at least two color cameras can adopt optical lenses and sensor chips with the same performance index.
In some embodiments, the depth-of-field camera module 410 may communicate with the color camera module 420 to send and receive captured or processed images.
Referring to fig. 6, in some embodiments, the 3D camera 300 may further include an image processor 430 configured to adjust the second depth information in the color image according to the first depth information.
In some embodiments, the image processor 430 may be further configured to display the adjusted color image in 3D. Various feasible 3D display modes exist and are not detailed here, as long as the image processor 430 can achieve 3D display of the depth-adjusted color image.
In some embodiments, the image processor 430 may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so that the depth of field of the corresponding pixels included in the second depth-of-field information approaches the depth of field of the pixels included in the first depth-of-field information, thereby reducing the difference between the two.
In comparison, the color images obtained by the at least two color cameras have high resolution and low depth of field accuracy, and the first depth of field information (which can be presented in the form of depth of field images) obtained by the depth of field cameras has low resolution and high depth of field accuracy. Therefore, the image processor 430 may adjust the depth of field of the corresponding pixel included in the second depth of field information based on the depth of field of the pixel included in the first depth of field information, so that the depth of field of the corresponding pixel included in the second depth of field information may be closer to the depth of field of the pixel included in the first depth of field information, so as to reduce a difference between the depth of field of the corresponding pixel included in the second depth of field information and the depth of field of the pixel included in the first depth of field information, and effectively improve accuracy of the depth of field of the corresponding pixel included in the second depth of field information.
In some embodiments, the image processor 430 may unify the sizes of the depth-of-field image and the color image before adjusting the depth of field of the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information (the depth-of-field image) as a reference; then perform feature extraction and matching on the depth-of-field image and the color image based on the FOV of the depth-of-field camera and the color camera, so that pixels in the depth-of-field image are mapped, pixel by pixel, to corresponding pixels in the color image; the depth of field of each pixel in the depth-of-field image can then be compared with that of the corresponding pixel in the color image, and adjusted according to the comparison result.
In some embodiments, the image processor 430 may be configured to:
adjust the depth of field of the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
adjust the depth of field of the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the image processor 430 may adjust the depth of field of a corresponding pixel included in the second depth-of-field information toward the depth of field of the pixel included in the first depth-of-field information by a proportion of the difference between the two. For example, if the difference is 5 cm, the image processor 430 may, according to the actual situation or a preset policy, adjust by 10%, 20%, 30%, 50%, 80%, etc. of the 5 cm, that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, etc.
In some embodiments, the image processor 430 may also adjust the depth of field of a corresponding pixel included in the second depth-of-field information directly to the depth of field of the pixel included in the first depth-of-field information. For example, if the difference between the two is 5 cm, the image processor 430 may, according to the actual situation or a preset policy, adjust directly by the full 5 cm.
When the depth of field is adjusted, because the resolution of the first depth-of-field information acquired by the depth-of-field camera is low, all the pixels in the depth-of-field image may correspond to only some of the pixels in the color image, so the depth of field of some or all of the pixels other than the corresponding pixels included in the second depth-of-field information may not be effectively adjusted. In this case, in some embodiments, the image processor 430 may be further configured to: adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information with the depth of field of the pixels included in the first depth-of-field information as a reference, so as to effectively adjust those pixels and effectively improve the depth-of-field accuracy.
In some embodiments, the image processor 430 may be configured to:
in a preset area, adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information toward the depth of field of the pixels included in the first depth-of-field information in proportion; or, alternatively,
in a preset area, adjust the depth of field of the pixels other than the corresponding pixels included in the second depth-of-field information to the depth of field of the pixels included in the first depth-of-field information.
In some embodiments, the preset area may be set according to the actual situation or a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth-of-field information and the non-corresponding pixels around it (i.e., pixels in the second depth-of-field information that do not correspond to any pixel in the first depth-of-field information); for example, the preset area may be a circular area centered on the single corresponding pixel, with a radius of, e.g., half the distance to the adjacent corresponding pixel. Optionally, different preset areas may be kept non-overlapping to avoid possible pixel-adjustment conflicts.
Optionally, the preset area may also include at least two corresponding pixels in the second depth-of-field information and the non-corresponding pixels around them. For example, when the depth-of-field adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circular area centered on the midpoint of the two corresponding pixels, with a radius of a larger value such as half the distance between them. Alternatively, different preset areas may overlap, as long as possible pixel-adjustment conflicts can be avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or a preset policy; for example, the preset area may be scaled up or down, and its shape may be an ellipse, a polygon, etc.
In some embodiments, when the depth-of-field adjustment is performed in the preset area, the image processor 430 may adjust the depth of field of a corresponding pixel included in the second depth-of-field information toward the depth of field of the pixel included in the first depth-of-field information by a proportion of the difference between the two; for example, a 5 cm difference may, according to the actual situation or a preset policy, be adjusted by 10%, 20%, 30%, 50%, 80%, etc. of the 5 cm, that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, etc.
In some embodiments, when the depth-of-field adjustment is performed in the preset area, the image processor 430 may also adjust the depth of field of a corresponding pixel included in the second depth-of-field information directly to the depth of field of the pixel included in the first depth-of-field information; for example, a 5 cm difference may, according to the actual situation or a preset policy, be adjusted directly by the full 5 cm.
In some embodiments, the depth-of-field camera module 410 may be configured to:
select one depth-of-field camera in the depth-of-field camera module 410 to acquire depth-of-field information of a photographed object, and take the acquired depth-of-field information as the first depth-of-field information; or
select at least two depth-of-field cameras in the depth-of-field camera module 410 to respectively acquire depth-of-field information of a photographed object, and select the depth-of-field information acquired by one of the at least two depth-of-field cameras as the first depth-of-field information; or
select all the depth-of-field cameras in the depth-of-field camera module 410 to respectively acquire depth-of-field information of a photographed object, and select the depth-of-field information acquired by one of them as the first depth-of-field information.
In some embodiments, the depth-of-field camera module 410 may be configured to:
when one of the at least two depth-of-field cameras is to be selected, select the one in the best working state among the at least two depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information;
or, alternatively,
when one of all the depth-of-field cameras is to be selected, select the one in the best working state among all the depth-of-field cameras, or the one with the highest accuracy in acquiring depth-of-field information.
Referring to fig. 7, in some embodiments, the depth-of-field camera module 410 may include:
a first depth-of-field camera 411 configured to acquire depth-of-field information of a photographed object;
a second depth-of-field camera 412 configured to acquire depth-of-field information of the photographed object.
In some embodiments, the first depth-of-field camera 411 and the second depth-of-field camera 412 may be identical depth-of-field cameras. Alternatively, they may be different depth-of-field cameras.
In some embodiments, the depth-of-field camera module 410 may also include more than two depth-of-field cameras.
In some embodiments, in addition to the depth-of-field cameras, the depth-of-field camera module 410 may further include a controller for the depth-of-field cameras, so as to effectively control their operation.
In some embodiments, whether selecting between two depth-of-field cameras or among three or more, the optimal depth-of-field camera may be selected based on its working state, accuracy, and the like. Optionally, the working state of a depth-of-field camera may include its working temperature, working load, and the like; the accuracy of a depth-of-field camera may include its factory-set accuracy, or the difference between its actual accuracy and its factory-set accuracy (the smaller the difference, the higher the accuracy), and the like.
In some embodiments, at least one depth-of-field camera in the depth-of-field camera module 410 may be a structured-light camera or a TOF camera, capable of acquiring first depth-of-field information of the photographed object that includes the depth of field of pixels. Optionally, the acquired first depth-of-field information may be presented in the form of a depth-of-field image.
In some embodiments, at least one depth-of-field camera in the depth-of-field camera module 410 may be a TOF camera, which may be located between two color cameras in the color camera module 420, or at another position around the color cameras. Optionally, the depth-of-field cameras in the depth-of-field camera module 410 may also be arranged in alignment with the same number of color cameras in the color camera module 420; for example, the two depth-of-field cameras in the depth-of-field camera module 410 may be aligned with the two color cameras in the color camera module 420.
Referring to fig. 8, in some embodiments, the color camera module 420 may include:
a first color camera 421 configured to acquire a first color image;
a second color camera 422 configured to acquire a second color image;
optionally, the image processor 430 may be configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth-of-field information according to the spacing between the first color camera 421 and the second color camera 422 and their shooting angles.
In some embodiments, the first color camera 421 and the second color camera 422 may be identical color cameras. Alternatively, they may be different color cameras; in this case, in order to smoothly synthesize the color composite image, the first color image and the second color image may be subjected to processing such as alignment and correction.
In some embodiments, the color camera module 420 may also acquire a color composite image of the photographed object through at least two color cameras in other possible ways than that shown in fig. 8. Alternatively, the color camera module 420 may acquire the color composite image based on parameters other than the spacing and shooting angle. Optionally, more than two color cameras may be used when the color camera module 420 acquires the color composite image, for example, three or more color cameras, as long as the color composite image can be successfully synthesized.
In some embodiments, in addition to the color cameras, the color camera module 420 may further include a controller for the color cameras, so as to effectively control their operation and smoothly realize synthesis of the color composite image.
In some embodiments, the image processor 430 may be a 3D image processor based on a high-speed computing chip such as a CPU, a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). Optionally, the 3D image processor may be implemented in the form of a chip, a single-chip microcomputer, or the like.
Referring to fig. 9, the present disclosure provides a 3D display terminal 500, which includes the 3D photographing device 300 composed of the depth-of-field camera module 410 and the color camera module 420. Optionally, the 3D display terminal 500 may further include an image processor 430.
In some embodiments, the 3D display terminal 500 may further include components that support its normal operation, such as at least one of a light guide plate, a polarizer, a glass substrate, a liquid crystal layer, and a filter.
In some embodiments, the 3D display terminal 500 may be provided in a 3D display. Optionally, the 3D display may further include components that support its normal operation, such as at least one of a backlight module, a main board, a back plate, and the like.
The 3D shooting method, the 3D shooting device, and the 3D display terminal provided by the embodiments of the present disclosure can coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth-of-field adjustment on the color image acquired by the color camera module, and can effectively improve the depth-of-field accuracy of the color image.
The disclosed embodiments also provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-mentioned 3D photographing method.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the above-mentioned 3D photographing method.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The computer-readable storage medium and the computer program product provided by the embodiments of the present disclosure can coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth-of-field adjustment on a color image acquired by the color camera module, and can effectively improve the depth-of-field accuracy of the color image.
In some embodiments, the 3D techniques described above may include naked-eye 3D techniques; that is, the 3D shooting device and the 3D display terminal can realize functions related to naked-eye 3D, such as shooting and displaying naked-eye 3D images.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code; it may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the disclosed embodiments includes the full ambit of the claims, as well as all available equivalents of the claims.

Although the terms "first," "second," etc. may be used in this application to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first and second elements are both elements, but may not be the same element.

Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated items listed. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element preceded by "comprising a(n)..." does not exclude the presence of other like elements in a process, method, or device that comprises that element.

In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same or similar parts of the respective embodiments may be referred to one another. For methods, products, etc. disclosed in the embodiments, where they correspond to a method section disclosed in the embodiments, reference may be made to the description of that method section.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be merely a division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (7)

1. A 3D shooting device, comprising:
a depth-of-field camera module comprising at least two depth-of-field cameras and configured to acquire first depth-of-field information of a photographed object by coordinating the at least two depth-of-field cameras; and
a color camera module comprising at least two color cameras and configured to acquire a color image of the photographed object that is adjustable according to the first depth-of-field information.
2. The 3D shooting device of claim 1, wherein the depth-of-field camera module is configured to acquire the first depth-of-field information via:
the one of the at least two depth-of-field cameras that is in the best working state; or,
the one of the at least two depth-of-field cameras that acquires depth-of-field information with the highest accuracy.
3. The 3D shooting device of claim 2, wherein at least one depth-of-field camera in the depth-of-field camera module is a structured-light camera or a time-of-flight (TOF) camera.
4. The 3D shooting device of claim 3, wherein at least one depth-of-field camera in the depth-of-field camera module is a TOF camera located between two of the color cameras in the color camera module.
5. The 3D shooting device according to any one of claims 1 to 4, wherein the color camera module comprises:
a first color camera configured to acquire a first color image;
a second color camera configured to acquire a second color image; and
an image processor configured to synthesize the first color image and the second color image into a color composite image containing second depth-of-field information according to the distance between the first color camera and the second color camera and their shooting angles.
6. The 3D shooting device of claim 1, wherein the at least two color cameras in the color camera module employ optical lenses and sensor chips with the same performance indices.
7. A 3D display terminal, characterized by comprising the 3D shooting device according to any one of claims 1 to 6.
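
The claims recite structure rather than algorithms, so any code can only be illustrative. As a minimal Python sketch of the camera selection described in claim 2, assuming hypothetical names (DepthCamera, working_state_score, accuracy_score, select_depth_camera) that do not appear in the patent:

```python
from dataclasses import dataclass

@dataclass
class DepthCamera:
    """Hypothetical stand-in for one depth-of-field camera (e.g. TOF or structured light)."""
    name: str
    working_state_score: float  # higher = closer to the best working state
    accuracy_score: float       # higher = more accurate depth-of-field information

def select_depth_camera(cameras, criterion="working_state"):
    """Pick one camera per claim 2: best working state, or highest depth accuracy."""
    if criterion == "working_state":
        return max(cameras, key=lambda c: c.working_state_score)
    return max(cameras, key=lambda c: c.accuracy_score)

cams = [DepthCamera("tof", 0.9, 0.7), DepthCamera("structured_light", 0.6, 0.95)]
print(select_depth_camera(cams).name)                        # tof
print(select_depth_camera(cams, criterion="accuracy").name)  # structured_light
```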
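Claim 5's synthesis "according to the distance between the first color camera and the second color camera and their shooting angles" is consistent with standard stereo triangulation, although the patent does not disclose a specific formula. The sketch below assumes rectified, parallel-axis cameras (so the shooting angle drops out) and uses hypothetical function and parameter names:

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole-stereo triangulation Z = f * B / d for rectified image pairs:
    B is the distance between the two color cameras, f the focal length in
    pixels, d the per-pixel disparity between the first and second color image."""
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        depth = (focal_px * baseline_m) / d
    depth[~np.isfinite(depth)] = 0.0  # zero disparity -> unknown depth
    return depth

# e.g. two color cameras 60 mm apart with a 1400 px focal length:
disparity = np.array([[35.0, 70.0],
                      [14.0,  0.0]])
print(depth_from_disparity(disparity, baseline_m=0.060, focal_px=1400.0))
# [[2.4 1.2]
#  [6.  0. ]]
```

Under that reading, the second depth-of-field information of claim 5 would be such a per-pixel depth map packaged with the composite color image; in practice the disparity itself would come from a stereo-matching step that this sketch does not cover.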
CN202020135250.7U 2020-01-20 2020-01-20 3D shooting device and 3D display terminal Active CN212628181U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202020135250.7U CN212628181U (en) 2020-01-20 2020-01-20 3D shooting device and 3D display terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202020135250.7U CN212628181U (en) 2020-01-20 2020-01-20 3D shooting device and 3D display terminal

Publications (1)

Publication Number Publication Date
CN212628181U true CN212628181U (en) 2021-02-26

Family

ID=74728420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202020135250.7U Active CN212628181U (en) 2020-01-20 2020-01-20 3D shooting device and 3D display terminal

Country Status (1)

Country Link
CN (1) CN212628181U (en)


Legal Events

Date Code Title Description
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 2022-08-12

Address after: 100055 1-1808c, 15th floor, building 1, 168 Guang'anmenwai street, Xicheng District, Beijing

Patentee after: Beijing Xinhai vision 3D Technology Co.,Ltd.

Address before: 1-1808c, 15 / F, building 1, 168 Guang'anmenwai street, Xicheng District, Beijing 100054

Patentee before: Beijing Xinhai vision 3D Technology Co.,Ltd.

Patentee before: Vision technology venture capital Pte. Ltd.

Patentee before: Diao Honghao
