Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, numerous details are set forth for purposes of explanation, in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices are shown in simplified form so as not to obscure the drawings.
Referring to FIG. 1, the present disclosure provides a 3D shooting method applicable to a device having a depth camera module including at least two depth cameras and a color camera module including at least two color cameras. The method includes:
Step 110: coordinating at least two depth cameras in the depth camera module to acquire first depth information of a photographed subject;
Step 120: acquiring, through at least two color cameras in the color camera module, a color image of the photographed subject that can be adjusted according to the first depth information.
In some embodiments, the 3D shooting method may further include: adjusting second depth information in the color image according to the first depth information.
In some embodiments, the adjusted color image may also be displayed in 3D. Many 3D display modes are feasible and are not described here, as long as the depth-adjusted color image can be displayed in 3D smoothly.
In some embodiments, adjusting the second depth information in the color image according to the first depth information may include:
adjusting the depth value of each corresponding pixel in the second depth information, with the depth value of the matching pixel in the first depth information as a reference, so that the former approaches the latter and the difference between the two is reduced.
By comparison, the color image obtained by the at least two color cameras has high resolution but low depth accuracy, while the first depth information obtained by the depth cameras (which may be presented in the form of a depth image) has low resolution but high depth accuracy. Adjusting the depth value of each corresponding pixel in the second depth information toward the depth value of the matching pixel in the first depth information therefore reduces the difference between the two and effectively improves the accuracy of the second depth information.
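The per-pixel adjustment described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the second depth information is represented as a 2D grid of depth values in centimeters, and pixels with no counterpart in the first depth information are marked `None` in the reference grid.

```python
def adjust_second_depth(second, reference, factor=1.0):
    """Move each depth value in `second` toward the matching value in
    `reference` by `factor` of their difference; factor=1.0 adopts the
    reference value outright.  Pixels whose reference is None (no matching
    pixel in the first depth information) are left unchanged here."""
    adjusted = []
    for row2, row1 in zip(second, reference):
        out = []
        for d2, d1 in zip(row2, row1):
            out.append(d2 if d1 is None else d2 + factor * (d1 - d2))
        adjusted.append(out)
    return adjusted
```

For instance, with a reference of 100 cm and a second-depth value of 105 cm, `factor=0.2` moves the value 20% of the way, to 104 cm, while an unmatched pixel is left as-is.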
In some embodiments, before this adjustment, the sizes of the depth image and the color image may be unified; then feature extraction and matching may be performed on the depth image and the color image based on the fields of view (FOV) of the depth camera and the color camera, so that each pixel in the depth image is registered with its corresponding pixel in the color image. The depth value of a pixel in the depth image can then be compared with that of the corresponding pixel in the color image, and the latter adjusted according to the comparison result.
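One possible way to register a depth-image pixel to its counterpart in the color image, as described above, is to align the image centers and scale by the FOV and resolution ratios. This sketch assumes a simplified pinhole model in which both cameras share an optical axis and each pixel subtends an equal angle; the function and parameter names are illustrative, not taken from the disclosure.

```python
def map_depth_pixel_to_color(x_d, y_d, depth_wh, color_wh, fov_d_deg, fov_c_deg):
    """Map pixel (x_d, y_d) of the depth image to the corresponding pixel of
    the color image.  depth_wh / color_wh are (width, height) tuples;
    fov_*_deg are the horizontal fields of view of the two cameras."""
    dw, dh = depth_wh
    cw, ch = color_wh
    # Pixels per degree of field of view in each image (small-angle sketch).
    scale = (cw / fov_c_deg) / (dw / fov_d_deg)
    # Project the offset from the depth-image center onto the color-image center.
    x_c = cw / 2 + (x_d - dw / 2) * scale
    y_c = ch / 2 + (y_d - dh / 2) * scale
    return round(x_c), round(y_c)
```

With a 320x240 depth image and a 1280x720 color image sharing a 60-degree FOV, the depth-image center maps to the color-image center and each depth pixel covers a 4x4 block of color pixels, which is why only some color pixels gain a matched reference.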
In some embodiments, adjusting the depth value of the corresponding pixel in the second depth information with reference to the depth value of the pixel in the first depth information may include:
adjusting the depth value of the corresponding pixel in the second depth information proportionally toward the depth value of the pixel in the first depth information; or, alternatively,
setting the depth value of the corresponding pixel in the second depth information directly to the depth value of the pixel in the first depth information.
In some embodiments, the depth value of the corresponding pixel may be moved toward the reference by a proportion of the difference between the two values. For example, if the difference is 5 cm, the depth value may be moved by 10%, 20%, 30%, 50%, 80%, or another proportion of 5 cm (that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on), according to the actual situation or an operation manner such as a preset policy.
In some embodiments, the depth value of the corresponding pixel may instead be set directly to the reference value. For example, if the difference is 5 cm, the depth value may be adjusted by the full 5 cm, according to the actual situation or an operation manner such as a preset policy.
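The proportional and direct adjustments above, including the 5 cm example, reduce to a single blend formula; `proportion=1.0` reproduces the direct adjustment. The names are illustrative.

```python
def adjust_depth_value(d2_cm, d1_cm, proportion=1.0):
    """Move the second-depth value d2 toward the first-depth reference d1
    by `proportion` of their difference (both values in cm)."""
    return d2_cm + proportion * (d1_cm - d2_cm)

# The 5 cm example: d2 = 105 cm, d1 = 100 cm, so the difference is 5 cm.
partial = adjust_depth_value(105.0, 100.0, proportion=0.2)  # moves 1 cm -> 104 cm
direct = adjust_depth_value(105.0, 100.0)                   # moves the full 5 cm -> 100 cm
```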
During this adjustment, because the resolution of the first depth information acquired by the depth cameras is low, the pixels of the depth image may correspond to only some of the pixels of the color composite image, so the depth values of some or all of the remaining (non-corresponding) pixels in the second depth information would not be adjusted. In this case, in some embodiments, the 3D shooting method may further include: adjusting the depth values of the pixels other than the corresponding pixels in the second depth information, again with the depth values of the pixels in the first depth information as a reference, so that their accuracy is also effectively improved.
In some embodiments, this adjustment may include:
within a preset area, adjusting the depth values of the pixels other than the corresponding pixels in the second depth information proportionally toward the depth value of the pixel in the first depth information; or, alternatively,
within a preset area, setting the depth values of the pixels other than the corresponding pixels in the second depth information directly to the depth value of the pixel in the first depth information.
In some embodiments, the preset area may be set according to the actual situation or an operation manner such as a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth information and the non-corresponding pixels around it (i.e., pixels in the second depth information that have no counterpart in the first depth information). For example, the preset area may be a circle centered on the single corresponding pixel, with a radius of half the distance to an adjacent corresponding pixel or some other value. Optionally, different preset areas may be kept non-overlapping to avoid conflicting pixel adjustments.
Optionally, the preset area may instead include at least two corresponding pixels in the second depth information and the non-corresponding pixels around them. For example, when the depth adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circle centered on the midpoint between the two corresponding pixels, with a radius of half the distance between them or a larger value. In this case, different preset areas may overlap, as long as conflicting pixel adjustments are avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or an operation manner such as a preset policy; for example, the preset area may be scaled up or down proportionally, and its shape may be an ellipse, a polygon, and the like.
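The circular preset area can be sketched as follows: every pixel within the radius around a corresponding pixel is moved toward the same reference value that was used for that corresponding pixel. This is an illustrative simplification of the scheme above, not the claimed method; image representation and names are assumptions.

```python
def adjust_preset_area(depth_map, center, radius, reference_cm, proportion=1.0):
    """Within a circle of `radius` pixels around `center` (x, y), move every
    depth value of `depth_map` (a list of rows, values in cm) toward
    `reference_cm` by `proportion` of its own difference from the reference."""
    cx, cy = center
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                row[x] = d + proportion * (reference_cm - d)
    return depth_map
```

On a 3x3 map of 105 cm values with a reference of 100 cm, center (1, 1), radius 1, and proportion 0.2, the center and its four neighbors move to 104 cm while the corners, outside the circle, keep 105 cm.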
In some embodiments, when the adjustment is performed in the preset area, the depth values may be moved toward the reference by a proportion of the difference between the two values. For example, if the difference is 5 cm, the depth values may be moved by 10%, 20%, 30%, 50%, 80%, or another proportion of 5 cm (that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on), according to the actual situation or an operation manner such as a preset policy.
In some embodiments, when the adjustment is performed in the preset area, the depth values may instead be set directly to the reference value. For example, if the difference is 5 cm, the depth values may be adjusted by the full 5 cm, according to the actual situation or an operation manner such as a preset policy.
Referring to FIG. 2A, in some embodiments, coordinating at least two depth cameras in the depth camera module to acquire the first depth information may include:
Step 201: selecting one depth camera in the depth camera module to acquire depth information of the photographed subject;
Step 202: taking the acquired depth information of the photographed subject as the first depth information.
Referring to FIG. 2B, in some embodiments, coordinating at least two depth cameras in the depth camera module to acquire the first depth information may include:
Step 211: selecting at least two depth cameras in the depth camera module to each acquire depth information of the photographed subject;
Step 212: selecting the depth information acquired by one of the at least two depth cameras as the first depth information.
Referring to FIG. 2C, in some embodiments, coordinating at least two depth cameras in the depth camera module to acquire the first depth information may include:
Step 221: selecting all depth cameras in the depth camera module to each acquire depth information of the photographed subject;
Step 222: selecting the depth information acquired by one of all the depth cameras as the first depth information.
In some embodiments, selecting one of the at least two depth cameras may include: selecting the one of the at least two depth cameras that is in the best working state; or selecting the one of the at least two depth cameras that acquires depth information with the highest accuracy.
In some embodiments, selecting one of all the depth cameras includes: selecting the one of all the depth cameras that is in the best working state; or selecting the one of all the depth cameras that acquires depth information with the highest accuracy.
In some embodiments, whether selecting among two depth cameras or among three or more, the optimal depth camera may be chosen based on its working state, accuracy, and the like. Optionally, the working state of a depth camera may include its operating temperature, workload, and so on; the accuracy of a depth camera may be its factory-calibrated accuracy, or the difference between its actual accuracy and its factory-calibrated accuracy (the smaller the difference, the higher the accuracy), and the like.
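The optimal-camera selection described above can be sketched as a simple ranking. The field names (`accuracy_error`, `load`, `temperature`) are hypothetical stand-ins for the accuracy and working-state indicators mentioned; a real module would read them from the camera driver, and the tie-breaking order is one possible preset policy.

```python
def select_depth_camera(cameras):
    """Pick the depth camera with the smallest accuracy error
    (|actual accuracy - factory-calibrated accuracy|), breaking ties by
    lighter workload, then by lower operating temperature."""
    return min(cameras, key=lambda c: (c["accuracy_error"], c["load"], c["temperature"]))

cams = [
    {"id": "tof_a", "accuracy_error": 0.4, "load": 0.3, "temperature": 41},
    {"id": "tof_b", "accuracy_error": 0.1, "load": 0.7, "temperature": 38},
]
best = select_depth_camera(cams)  # tof_b: smallest accuracy error wins
```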
In some embodiments, at least one depth camera in the depth camera module may be a structured-light camera or a time-of-flight (TOF) camera capable of acquiring first depth information of the photographed subject that includes per-pixel depth values. Optionally, the acquired first depth information may be presented in the form of a depth image.
Referring to FIG. 3, in some embodiments, acquiring the color image of the photographed subject through at least two color cameras may include:
Step 231: acquiring a first color image through a first color camera and a second color image through a second color camera;
Step 232: synthesizing the first color image and the second color image into a color composite image containing the second depth information, according to the distance between the first color camera and the second color camera and their shooting angles.
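Step 232 rests on the standard stereo relation between two cameras with a known spacing: depth Z = f·B / d, where B is the baseline (the camera spacing), f the focal length in pixels, and d the disparity between matched pixels. A minimal sketch, assuming rectified images and illustrative names:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Return the depth (m) of a matched pixel pair: Z = f * B / d.
    Zero or negative disparity means the point is effectively at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px
```

For example, with a 10 cm baseline and an 800 px focal length, a 40 px disparity corresponds to a depth of 2 m. Applying this to every matched pixel pair yields the second depth information of the composite image.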
In some embodiments, the first color camera and the second color camera may be color cameras of the same model. Alternatively, they may be different color cameras; in that case, to synthesize the color composite image smoothly, the first color image and the second color image may first undergo processing such as alignment and rectification.
In some embodiments, the color composite image of the photographed subject may also be acquired through at least two color cameras in ways other than that shown in FIG. 3. Optionally, the color composite image may be acquired based on parameters other than the camera spacing and shooting angles. Optionally, more than two color cameras, for example three or more, may be used, as long as the color composite image can be successfully synthesized.
In some embodiments, the color composite image may comprise a left half and a right half; the left half may be the color image, and the right half may be the depth image.
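The left-half/right-half layout can be produced by concatenating the two images row by row. A minimal sketch, assuming both halves have the same height and that images are represented as lists of pixel rows:

```python
def compose_left_right(color_rows, depth_rows):
    """Join the color image (left half) and the depth image (right half)
    into one side-by-side composite, row by row."""
    if len(color_rows) != len(depth_rows):
        raise ValueError("halves must have the same height")
    return [c_row + d_row for c_row, d_row in zip(color_rows, depth_rows)]
```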
An embodiment of the present disclosure provides a 3D camera 300 comprising a processor and a memory storing program instructions; the processor is configured to perform the 3D shooting method described above when executing the program instructions.
In some embodiments, the 3D camera 300 shown in FIG. 4 includes:
a processor 310 and a memory 320, and may further include a communication interface 330 and a bus 340. The processor 310, the communication interface 330, and the memory 320 may communicate with one another through the bus 340. The communication interface 330 may be used for information transfer. The processor 310 may call logic instructions in the memory 320 to perform the 3D shooting method of the above embodiments.
In addition, when sold or used as an independent product, the logic instructions in the memory 320 may be implemented as software functional units and stored in a computer-readable storage medium.
The memory 320, as a computer-readable storage medium, may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 310 executes functional applications and data processing, i.e., implements the 3D shooting method of the above method embodiments, by running the program instructions/modules stored in the memory 320.
The memory 320 may include a program storage area and a data storage area. The program storage area may store an operating system and at least one application program required for a function; the data storage area may store data created during use of the terminal device, and the like. Further, the memory 320 may include high-speed random access memory and may also include non-volatile memory.
Referring to FIG. 5, an embodiment of the present disclosure provides a 3D shooting apparatus 300 including:
a depth camera module 410, comprising at least two depth cameras and configured to coordinate the at least two depth cameras to acquire first depth information of a photographed subject; and
a color camera module 420, comprising at least two color cameras and configured to acquire a color image of the photographed subject that can be adjusted according to the first depth information.
The at least two color cameras may use optical lenses and sensor chips with the same performance indexes.
In some embodiments, the depth camera module 410 may communicate with the color camera module 420 to transmit and receive captured or processed images.
Referring to FIG. 6, in some embodiments, the 3D camera 300 may further include an image processor 430 configured to adjust the second depth information in the color image according to the first depth information.
In some embodiments, the image processor 430 may be further configured to display the adjusted color image in 3D. Many 3D display modes are feasible and are not described here, as long as the image processor 430 can display the depth-adjusted color image in 3D smoothly.
In some embodiments, the image processor 430 may be configured to:
adjust the depth value of each corresponding pixel in the second depth information, with the depth value of the matching pixel in the first depth information as a reference, so that the former approaches the latter and the difference between the two is reduced.
By comparison, the color image obtained by the at least two color cameras has high resolution but low depth accuracy, while the first depth information obtained by the depth cameras (which may be presented in the form of a depth image) has low resolution but high depth accuracy. The image processor 430 may therefore adjust the depth value of each corresponding pixel in the second depth information toward the depth value of the matching pixel in the first depth information, reducing the difference between the two and effectively improving the accuracy of the second depth information.
In some embodiments, before this adjustment, the image processor 430 may unify the sizes of the depth image and the color image; then perform feature extraction and matching on the depth image and the color image based on the FOVs of the depth camera and the color camera, so that each pixel in the depth image is registered with its corresponding pixel in the color image. The depth value of a pixel in the depth image can then be compared with that of the corresponding pixel in the color image, and the latter adjusted according to the comparison result.
In some embodiments, the image processor 430 may be configured to:
adjust the depth value of the corresponding pixel in the second depth information proportionally toward the depth value of the pixel in the first depth information; or, alternatively,
set the depth value of the corresponding pixel in the second depth information directly to the depth value of the pixel in the first depth information.
In some embodiments, the image processor 430 may move the depth value of the corresponding pixel toward the reference by a proportion of the difference between the two values. For example, if the difference is 5 cm, the image processor 430 may move the depth value by 10%, 20%, 30%, 50%, 80%, or another proportion of 5 cm (that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on), according to the actual situation or an operation manner such as a preset policy.
In some embodiments, the image processor 430 may instead set the depth value of the corresponding pixel directly to the reference value. For example, if the difference is 5 cm, the image processor 430 may adjust the depth value by the full 5 cm, according to the actual situation or an operation manner such as a preset policy.
During this adjustment, because the resolution of the first depth information acquired by the depth cameras is low, the pixels of the depth image may correspond to only some of the pixels of the color image, so the depth values of some or all of the remaining (non-corresponding) pixels in the second depth information would not be adjusted. In this case, in some embodiments, the image processor 430 may be further configured to adjust the depth values of the pixels other than the corresponding pixels in the second depth information, again with the depth values of the pixels in the first depth information as a reference, so that their accuracy is also effectively improved.
In some embodiments, the image processor 430 may be configured to:
within a preset area, adjust the depth values of the pixels other than the corresponding pixels in the second depth information proportionally toward the depth value of the pixel in the first depth information; or, alternatively,
within a preset area, set the depth values of the pixels other than the corresponding pixels in the second depth information directly to the depth value of the pixel in the first depth information.
In some embodiments, the preset area may be set according to the actual situation or an operation manner such as a preset policy. Optionally, the preset area may include a single corresponding pixel in the second depth information and the non-corresponding pixels around it (i.e., pixels in the second depth information that have no counterpart in the first depth information). For example, the preset area may be a circle centered on the single corresponding pixel, with a radius of half the distance to an adjacent corresponding pixel or some other value. Optionally, different preset areas may be kept non-overlapping to avoid conflicting pixel adjustments.
Optionally, the preset area may instead include at least two corresponding pixels in the second depth information and the non-corresponding pixels around them. For example, when the depth adjustment amounts of the at least two corresponding pixels are the same, the preset area may be a circle centered on the midpoint between the two corresponding pixels, with a radius of half the distance between them or a larger value. In this case, different preset areas may overlap, as long as conflicting pixel adjustments are avoided.
Optionally, the size and shape of the preset area may also vary according to the actual situation or an operation manner such as a preset policy; for example, the preset area may be scaled up or down proportionally, and its shape may be an ellipse, a polygon, and the like.
In some embodiments, when the adjustment is performed in the preset area, the image processor 430 may move the depth values toward the reference by a proportion of the difference between the two values. For example, if the difference is 5 cm, the image processor 430 may move the depth values by 10%, 20%, 30%, 50%, 80%, or another proportion of 5 cm (that is, by 5 mm, 1 cm, 1.5 cm, 2.5 cm, 4 cm, and so on), according to the actual situation or an operation manner such as a preset policy.
In some embodiments, when the adjustment is performed in the preset area, the image processor 430 may instead set the depth values directly to the reference value. For example, if the difference is 5 cm, the image processor 430 may adjust the depth values by the full 5 cm, according to the actual situation or an operation manner such as a preset policy.
In some embodiments, the depth camera module 410 may be configured to:
select one depth camera in the depth camera module 410 to acquire depth information of the photographed subject, and take the acquired depth information as the first depth information; or
select at least two depth cameras in the depth camera module 410 to each acquire depth information of the photographed subject, and select the depth information acquired by one of the at least two depth cameras as the first depth information; or
select all depth cameras in the depth camera module 410 to each acquire depth information of the photographed subject, and select the depth information acquired by one of them as the first depth information.
In some embodiments, the depth camera module 410 may be configured to:
when one of the at least two depth cameras is selected, select the one of the at least two depth cameras that is in the best working state, or select the one of the at least two depth cameras that acquires depth information with the highest accuracy;
or, alternatively,
when one of all the depth cameras is selected, select the one of all the depth cameras that is in the best working state, or select the one of all the depth cameras that acquires depth information with the highest accuracy.
Referring to fig. 7, in some embodiments, the depth-of-field camera module 410 may include:
a first depth-of-field camera 411 configured to acquire depth of field information of the photographic subject; and
a second depth-of-field camera 412 configured to acquire depth of field information of the photographic subject.
In some embodiments, the first depth-of-field camera 411 and the second depth-of-field camera 412 may be depth-of-field cameras of the same type. Alternatively, the first depth-of-field camera 411 and the second depth-of-field camera 412 may be depth-of-field cameras of different types.
In some embodiments, the depth-of-field camera module 410 may also include more than two depth-of-field cameras.
In some embodiments, in addition to the depth-of-field cameras, the depth-of-field camera module 410 may further include a controller configured to control the depth-of-field cameras, so as to effectively control their operation.
In some embodiments, whether selecting between two depth-of-field cameras or among three or more, the optimal depth-of-field camera may be selected based on criteria such as the working state and accuracy of each depth-of-field camera. Optionally, the working state of a depth-of-field camera may include its operating temperature, workload, and the like; the accuracy of a depth-of-field camera may include its factory-set accuracy, or the difference between its actual accuracy and its factory-set accuracy (the smaller the difference, the higher the accuracy of the depth-of-field camera), and the like.
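One way to sketch this selection is to score each camera from its reported working state and accuracy and pick the best. The dictionary fields and the equal-weight scoring below are purely illustrative assumptions; a real implementation would normalize units and weight these quantities for its application.

```python
def pick_best_depth_camera(cameras):
    """Select the optimal depth-of-field camera.  Each camera is a dict
    with illustrative fields: `temperature` and `load` describe the
    working state, and `accuracy_error` is the difference between the
    actual and factory-set accuracy (smaller means more accurate).
    Lower is better for every field, so the camera with the lowest
    total score is chosen.
    """
    def badness(cam):
        # unweighted sum for illustration; real systems would
        # normalize units and weight each term
        return cam["temperature"] + cam["load"] + cam["accuracy_error"]
    return min(cameras, key=badness)
```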
In some embodiments, at least one depth-of-field camera in the depth-of-field camera module 410 may be a structured-light camera or a time-of-flight (TOF) camera capable of acquiring first depth of field information of the photographic subject that includes the depth of field of each pixel. Optionally, the acquired first depth of field information may be presented in the form of a depth image.
In some embodiments, at least one depth-of-field camera in the depth-of-field camera module 410 may be a TOF camera, which may be located between two color cameras in the color camera module 420, or at another position around the color cameras. Optionally, the depth-of-field cameras in the depth-of-field camera module 410 may also be arranged in alignment with an equal number of color cameras in the color camera module 420; for example, the two depth-of-field cameras in the depth-of-field camera module 410 may be aligned with the two color cameras in the color camera module 420.
Referring to fig. 8, in some embodiments, the color camera module 420 may include:
a first color camera 421 configured to acquire a first color image;
a second color camera 422 configured to acquire a second color image.
Optionally, the image processor 430 may be configured to:
synthesize the first color image and the second color image into a color composite image containing the second depth of field information, according to the spacing between the first color camera 421 and the second color camera 422 and their shooting angles.
In some embodiments, the first color camera 421 and the second color camera 422 may be color cameras of the same type. Alternatively, the first color camera 421 and the second color camera 422 may be color cameras of different types; in that case, in order to synthesize the color composite image smoothly, the first color image and the second color image may first undergo processing such as alignment and correction.
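For illustration only, the spacing-and-angle synthesis can be sketched with the standard rectified pinhole stereo relation Z = f·B/d, where B is the spacing (baseline) between the two color cameras, f the focal length in pixels, and d the per-pixel disparity between the first and second color images. The function names and this particular model are assumptions for exposition, not necessarily the synthesis method of the disclosed embodiments.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth of one scene point from its disparity between the two
    rectified color images: Z = f * B / d (pinhole stereo model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def synthesize_second_depth_info(disparity_map, baseline_m, focal_px):
    """Per-pixel second depth of field information for the color
    composite image, from a disparity map (rows of pixel disparities
    obtained by matching the first and second color images)."""
    return [[depth_from_disparity(d, baseline_m, focal_px) for d in row]
            for row in disparity_map]
```

For example, with a 0.1 m camera spacing and a 1000-pixel focal length, a 10-pixel disparity corresponds to a depth of 10 m; nearer points produce larger disparities and thus smaller depths.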
In some embodiments, the color camera module 420 may also obtain a color composite image of the photographic subject through at least two color cameras in feasible ways other than that shown in fig. 6. Optionally, the color camera module 420 may acquire the color composite image based on parameters other than the spacing and the shooting angle. Optionally, the color camera module 420 may use more than two color cameras, for example three or more, when obtaining the color composite image, as long as the color composite image can be successfully synthesized.
In some embodiments, in addition to the color cameras, the color camera module 420 may further include a controller configured to control the color cameras, so as to effectively control their operation and smoothly realize the synthesis of the color composite image.
In some embodiments, the image processor 430 may be a 3D image processor based on a high-speed computing chip such as a CPU, a Field-Programmable Gate Array (FPGA), or an Application-Specific Integrated Circuit (ASIC). Optionally, the 3D image processor may be provided in the form of a chip, a single-chip microcomputer, or the like.
Referring to fig. 9, the present disclosure provides a 3D display terminal 500, which includes the 3D photographing device 300 composed of the depth-of-field camera module 410 and the color camera module 420. Optionally, the 3D display terminal 500 may further include the image processor 430.
In some embodiments, the 3D display terminal 500 may further include components that support the normal operation of the 3D display terminal 500, such as at least one of a light guide plate, a polarizer, a glass substrate, a liquid crystal layer, and a filter.
In some embodiments, the 3D display terminal 500 may be provided in a 3D display. Optionally, the 3D display may further include components that support the normal functioning of the 3D display, such as at least one of a backlight module, a main board, and a back board.
The 3D shooting method, the 3D shooting device, and the 3D display terminal provided by the embodiments of the present disclosure can coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth of field adjustment on the color image acquired by the color camera module, thereby effectively improving the depth of field accuracy of the color image.
The disclosed embodiments also provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-mentioned 3D photographing method.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the above-mentioned 3D photographing method.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The computer-readable storage medium and the computer program product provided by the embodiments of the present disclosure can likewise coordinate at least two depth-of-field cameras in the depth-of-field camera module to perform depth of field adjustment on a color image acquired by the color camera module, thereby effectively improving the depth of field accuracy of the color image.
In some embodiments, the 3D techniques described above may include naked-eye (glasses-free) 3D techniques; that is, the 3D photographing device and the 3D display terminal can realize functions related to naked-eye 3D, such as the shooting and display of naked-eye 3D images.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code; it may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the disclosed embodiments includes the full ambit of the claims, as well as all available equivalents of the claims. Although the terms "first," "second," etc. may be used in this application to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first and second elements are both elements, but they may not be the same element. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and in the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items.
Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, or device that comprises that element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and for the same or similar parts the respective embodiments may be referred to one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method sections disclosed herein, reference may be made to the description of those method sections for relevant details.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be merely a division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.