WO2019105260A1 - Method, apparatus and device for obtaining depth of field - Google Patents

Method, apparatus and device for obtaining depth of field

Info

Publication number
WO2019105260A1
WO2019105260A1 PCT/CN2018/116474
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
sub
candidate
main
Prior art date
Application number
PCT/CN2018/116474
Other languages
English (en)
Chinese (zh)
Inventor
欧阳丹
谭国辉
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019105260A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • The present application relates to the field of image processing technologies, and in particular to a method, apparatus, and device for acquiring depth of field.
  • In the related art, terminal devices such as smartphones widely use dual-camera systems to calculate depth of field: two images are acquired simultaneously by two cameras and, for example, the positional difference (disparity) of corresponding pixels between the two images is used to calculate the depth information of the captured scene.
  • That is, the depth of field information is calculated directly from the two images captured simultaneously by the two cameras.
  • However, when the difference between the two images used to calculate the depth is large, fewer pixels in the two images correspond to the same positions in the captured scene, resulting in low depth-calculation accuracy.
  • Accordingly, the present application provides a method, an apparatus, and a device for acquiring depth of field, so as to solve the technical problem in the prior art that depth information is calculated inaccurately when the difference between the two images is large.
  • An embodiment of the present application provides a depth of field acquisition method, including: acquiring a multi-frame main image captured by a main camera and a multi-frame sub-image captured by a sub-camera, and obtaining the reference main image with the highest sharpness according to the sharpness of each frame of main image and sub-image; comparing the sharpness of the remaining main images other than the reference main image and the sharpness of each sub-image with the sharpness of the reference main image, and detecting whether there are candidate main images and candidate sub-images satisfying a preset screening threshold; if at least one frame of candidate main image and at least one frame of candidate sub-image are detected, acquiring the image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image, and determining a first target main image and a first target sub-image; and acquiring depth of field information according to the first target main image and the first target sub-image.
  • Another embodiment of the present application provides a depth of field acquisition apparatus, including: a first acquisition module configured to acquire a multi-frame main image captured by a main camera and a multi-frame sub-image captured by a sub-camera; a second acquisition module configured to obtain the reference main image with the highest sharpness according to the sharpness of each frame of main image and sub-image; a detection module configured to compare the sharpness of the remaining main images other than the reference main image and the sharpness of each sub-image with the sharpness of the reference main image, and to detect whether there are candidate main images and candidate sub-images satisfying a preset screening threshold; a third acquisition module configured to, when at least one frame of candidate main image and at least one frame of candidate sub-image are detected, acquire the image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image, and determine a first target main image and a first target sub-image; and a fourth acquisition module configured to acquire depth of field information according to the first target main image and the first target sub-image.
  • A further embodiment of the present application provides a computer device, including a memory and a processor, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform the depth of field acquisition method of the above embodiments.
  • A further embodiment of the present application provides a non-transitory machine-readable storage medium on which a computer program is stored; when the program is executed by a processor, the depth of field acquisition method according to the above embodiments of the present application is implemented.
  • FIG. 1 is a schematic diagram of a principle of triangulation according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a process of calculating depth of field with a dual camera according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a depth of field acquisition method according to an embodiment of the present application.
  • FIG. 4(a) is a schematic diagram of a scene of a depth of field acquisition method according to an embodiment of the present application.
  • 4(b) is a schematic diagram of a scene of a depth of field acquisition method according to another embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a depth of field acquiring apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image processing circuit in accordance with one embodiment of the present application.
  • the method, device and device for acquiring the depth of field according to the embodiment of the present application are described below with reference to the accompanying drawings.
  • the depth of field acquisition method in the embodiment of the present application is applicable to a hardware device having a dual camera, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like, and the wearable device may be a smart bracelet, a smart watch, smart glasses, or the like.
  • the dual camera system calculates the depth of field through the main image and the sub image.
  • the principle of acquiring the depth of field by the dual camera will be described below with reference to the accompanying drawings:
  • The human eye perceives depth of field mainly through binocular vision.
  • Dual cameras resolve depth of field by the same principle, namely the triangulation principle shown in Figure 1. Based on Figure 1:
  • In Figure 1, the imaged object, the positions of the two cameras O_R and O_T, and the focal planes of the two cameras are drawn.
  • The focal plane is at a distance f from the plane where the two cameras are located, and the two cameras form images at the focal plane, yielding two captured images.
  • P and P' are the positions of the same object in the two captured images, where the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T.
  • O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
  • The distance Z between the object in Figure 1 and the plane where the two cameras are located satisfies: B/Z = (B − (X_R − X_T)) / (Z − f).
  • Solving gives Z = B·f/d, where d = X_R − X_T is the difference between the positions of the same object in the two captured images. Since B and f are constant values, the distance Z of the object can be determined from d.
  • In practice, the differences between corresponding points in the main image acquired by the main camera and the sub-image acquired by the sub-camera are computed as a disparity map, which records, for each point, the displacement of the same feature between the two images; the depth at that point then follows from the relation above, as sketched below.
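  • A minimal sketch in Python of this per-pixel conversion from disparity to depth, assuming a rectified image pair with baseline B in meters and focal length f in pixels; the function name and arguments are illustrative, not from the patent.

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Apply Z = B * f / d to every pixel of a disparity map.

    disparity_px : 2-D array of disparities d = X_R - X_T, in pixels
    baseline_m   : distance B between the two camera centers, in meters
    focal_px     : focal length f, expressed in pixels
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)   # zero disparity -> depth unknown/infinite
    valid = d > 0
    depth[valid] = baseline_m * focal_px / d[valid]
    return depth
```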
  • FIG. 3 is a flowchart of a method for acquiring a depth of field according to an embodiment of the present application. As shown in FIG. 3, the method includes:
  • Step 101 Acquire a multi-frame main image captured by the main camera and a multi-frame sub-image captured by the sub-camera.
  • Step 102: Obtain the reference main image with the highest sharpness according to the sharpness of each frame of main image and sub-image.
  • Image sharpness refers to the clarity of edges and detail in an image, including how well lines are distinguished, that is, how finely image points and subtle texture are resolved.
  • The finer the detail of the scene material that is rendered, the higher the sharpness.
  • Sharpness also covers whether the contours of edges are clear, that is, how abrupt the boundary of a contour appears. In essence, this is the width over which the gradient of density changes across a tonal boundary: if the change occurs over a small width, the boundary appears sharp; conversely, if it is spread over a large width, the boundary appears faint. Sharpness further includes the clarity between small tonal levels, in particular whether slight contrasts between fine layers are distinct.
  • Specifically, the multi-frame main image captured by the main camera and the multi-frame sub-image captured by the sub-camera are acquired, the sharpness of each frame of main image and sub-image is calculated, and the main image with the highest sharpness is obtained as the reference main image.
  • With the reference main image as a baseline, images of high sharpness are then screened out as the images from which the depth of field is further calculated.
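  • The patent does not prescribe a particular sharpness metric. As one plausible sketch, the variance of the Laplacian is a common scalar sharpness measure; the code below (OpenCV/NumPy, illustrative function names) scores frames with it and selects the reference main image. Sub-image frames can be scored the same way for the screening in Step 103.

```python
import cv2
import numpy as np

def sharpness(image_bgr):
    """Variance of the Laplacian: a common sharpness score (higher = sharper)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_reference_main(main_frames):
    """Step 102 sketch: index of the sharpest main frame, plus all scores."""
    scores = [sharpness(f) for f in main_frames]
    return int(np.argmax(scores)), scores
```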
  • Step 103: Compare the sharpness of the remaining main images other than the reference main image and the sharpness of each sub-image with the sharpness of the reference main image, and detect whether there are candidate main images and candidate sub-images satisfying a preset screening threshold.
  • The preset screening threshold is used to filter out main images and sub-images whose sharpness is close to that of the reference main image. For example, if the preset screening threshold is 80%, it may be used to screen out images whose sharpness reaches more than 80% of the sharpness of the reference main image. Specifically, the sharpness of each remaining main image and of each sub-image is compared with the sharpness of the reference main image, and it is detected whether candidate main images and candidate sub-images satisfying the preset screening threshold exist, i.e., whether there are further main and sub-images of high sharpness.
  • Thus the candidate main images and candidate sub-images are determined relative to the sharpness of the reference image, which takes into account the shooting capability of the terminal device in the current scene and improves the flexibility of the screening.
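  • A sketch of the Step 103 screening, assuming the threshold is interpreted as "sharpness of at least 80% of the reference main image's sharpness" as in the example above; the 0.80 ratio and function name are illustrative.

```python
def screen_candidates(ref_sharpness, scores, ratio=0.80):
    """Step 103 sketch: indices of frames whose sharpness reaches `ratio` of
    the reference main image's sharpness. Apply once to the remaining
    main-image scores and once to the sub-image scores."""
    return [i for i, s in enumerate(scores) if s >= ratio * ref_sharpness]
```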
  • Step 104: If at least one frame of candidate main image and at least one frame of candidate sub-image are detected, acquire the image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image, and determine the first target main image and the first target sub-image.
  • Step 105: Acquire depth of field information according to the first target main image and the first target sub-image.
  • Since only the screened, high-sharpness candidate main and sub-images are used to calculate the depth of field, both the efficiency and the accuracy of the depth calculation improve.
  • Specifically, the image information of the reference main image and of each frame of candidate main and sub-image is acquired, where the image information includes, but is not limited to, image sharpness, brightness, AWB (automatic white balance), and other factors that affect the depth calculation. The first target main image and first target sub-image are then determined, for example as the pair whose image-information difference satisfies a preset condition, and depth of field information is acquired according to them. This ensures a more accurate depth calculation and a better final imaging effect.
  • It should be noted that the preset condition depends on the specific types of information included in the image information, on the camera hardware capability, and on the shooting environment of the terminal device.
  • For example, when the image information includes image sharpness and brightness, the preset condition may be that the image-information difference among the reference main image, each frame of candidate main image, and each frame of candidate sub-image is within 10%; when the image information includes only image sharpness, the preset condition may be that the difference is within 15%.
  • In one scenario of this embodiment, the acquired image information is of a single type, for example image brightness only.
  • In this scenario, the image information of the reference main image and of each frame of candidate main image is compared in turn with the image information of each frame of candidate sub-image, and the two frames with the smallest image-information difference are acquired as the first target main image and the first target sub-image.
  • For example, suppose the image information is brightness information, the main camera and the sub-camera shoot simultaneously, and a 4-frame main image and a 4-frame sub-image are acquired, where the main images are numbered 11, 12, 13, and 14 in shooting order and the sub-images are numbered 21, 22, 23, and 24. Suppose the reference main image with the highest sharpness is 11, the candidate main images are 12 and 13, and the candidate sub-images are 22 and 24.
  • The image brightness of the reference main image and of each frame of candidate main image is compared in turn with the image brightness of each candidate sub-image, and the two frames with the smallest brightness difference, here the first target main image 12 and the first target sub-image 22, are acquired. The depth of field information is then calculated from the first target main image 12 and the first target sub-image 22, and the final imaging effect based on this depth information is better.
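  • A sketch of this single-information-type selection, using mean pixel brightness as the one information type as in the example; the helper names are illustrative.

```python
import numpy as np

def mean_brightness(image):
    """Single information type used in the example: mean pixel value."""
    return float(np.asarray(image).mean())

def pick_pair_single(main_pool, sub_pool, info=mean_brightness):
    """Return the (main, sub) index pair with the smallest image-information
    difference. `main_pool` holds the reference main image plus the candidate
    main images; `sub_pool` holds the candidate sub-images."""
    pairs = ((i, j) for i in range(len(main_pool)) for j in range(len(sub_pool)))
    return min(pairs,
               key=lambda ij: abs(info(main_pool[ij[0]]) - info(sub_pool[ij[1]])))
```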
  • In another scenario of this embodiment, the acquired image information comprises multiple types, for example image brightness, image white balance value, and image sharpness.
  • In this scenario, a weighting factor corresponding to each type of information is obtained; the weights may be calibrated by the system or set by the user according to the needs of the scene. Each type of image information of the reference main image and of each frame of candidate main image is compared in turn with the corresponding type of image information of each candidate sub-image, and the information difference of each type between every two frames is obtained. According to the per-type differences and the weighting factor for each type, an overall information difference is acquired for every two frames, and the two frames with the smallest overall information difference are taken as the first target main image and the first target sub-image.
  • For example, suppose the main camera and the sub-camera shoot simultaneously and a 4-frame main image and a 4-frame sub-image are acquired, where the main images are numbered 11, 12, 13, and 14 in shooting order, the sub-images are numbered 21, 22, 23, and 24, the reference main image with the highest sharpness is 11, the candidate main images are 12 and 13, and the candidate sub-images are 22 and 24.
  • The image brightness, AWB, and sharpness of the reference main image and of each candidate main image are compared in turn with those of each candidate sub-image, and the weighted information difference of each pair is acquired, from the difference between the reference main image 11 and the candidate sub-image 22 through the information difference b6 between the candidate main image 13 and the candidate sub-image 24. The two frames with the smallest information difference, here the first target main image 12 and the first target sub-image 22, are then obtained, so that the depth of field information calculated from the first target main image 12 and the first target sub-image 22 is more accurate and the final imaging effect based on it is better.
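  • A sketch of the multi-type variant: per-type differences are combined with the calibrated weighting factors, and the pair with the smallest weighted difference wins. The feature keys ("brightness", "awb", "sharpness") and example weights are illustrative placeholders for the patent's image-information types.

```python
def weighted_diff(info_a, info_b, weights):
    """Weighted sum of per-type information differences between two frames."""
    return sum(w * abs(info_a[k] - info_b[k]) for k, w in weights.items())

def pick_pair_weighted(main_infos, sub_infos, weights):
    """main_infos / sub_infos: one dict of scalar image information per frame,
    e.g. {"brightness": 0.41, "awb": 0.52, "sharpness": 310.0}.
    Returns the (main, sub) index pair with the smallest weighted difference."""
    pairs = ((i, j) for i in range(len(main_infos)) for j in range(len(sub_infos)))
    return min(pairs, key=lambda ij: weighted_diff(main_infos[ij[0]],
                                                   sub_infos[ij[1]], weights))

# Example: weights = {"brightness": 0.5, "awb": 0.3, "sharpness": 0.2}
# (system- or user-calibrated, per the text)
```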
  • It should be noted that the specific types of image information to acquire may be determined from one or both of the captured scene information and the shooting mode.
  • For example, if the lighting in the scene is poor, the quality of the captured multi-frame main and sub-images is poor, and determining the first target main image and first target sub-image for calculating the depth of field from only one type of image information may not be reliable; in that case, multiple types of image information need to be considered. Conversely, if the lighting in the scene is good, the quality of the captured multi-frame main and sub-images is high, and determining the first target main image and first target sub-image from a single type of image information is reliable; in that case, to improve image-processing efficiency, only one type of image information may be considered.
  • Likewise, if the current shooting mode is night-scene shooting, the requirement on brightness information is high while the lighting is poor, the quality of the captured main and sub-images is low, and determining the first target main image and first target sub-image from only one type of image information may not be reliable, so multiple types of image information need to be considered. In a bright-light shooting mode, on the other hand, the problem most easily caused is overexposure, so to improve image-processing efficiency one type of information, AWB, may be considered to determine the first target main image and the first target sub-image.
  • In this embodiment, the captured scene information and/or the shooting mode are detected, and the types of image information to acquire are determined accordingly. For example, a correspondence between the scene information and/or the shooting mode and the image-information types may be stored in advance; after the current scene information and/or shooting mode is learned, the correspondence is queried and the corresponding image-information types are acquired, as sketched below.
  • The implementations that determine the image-information types from the scene information or the shooting mode alone also cover the implementation that determines them from the scene information and the shooting mode together.
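  • A sketch of the pre-stored correspondence described above, mapping shooting mode and scene lighting to the image-information types to acquire; the table entries and names are illustrative assumptions, not values from the patent.

```python
# Hypothetical pre-stored correspondence: shooting mode -> information types.
INFO_TYPES_BY_MODE = {
    "night_scene": ["brightness", "awb", "sharpness"],  # poor light: several types
    "bright_light": ["awb"],                            # overexposure-prone: AWB only
}

def select_info_types(shooting_mode, good_lighting):
    """Query the stored correspondence; otherwise fall back on scene lighting."""
    if shooting_mode in INFO_TYPES_BY_MODE:
        return INFO_TYPES_BY_MODE[shooting_mode]
    return ["brightness"] if good_lighting else ["brightness", "awb", "sharpness"]
```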
  • In another scenario of this embodiment, if no candidate main image or candidate sub-image is detected, the reference main image, having the highest sharpness, is still used as one frame for acquiring the depth information: the image information of the reference main image and of each sub-image is acquired and compared, and the sub-image whose image-information difference satisfies a preset condition, i.e., the sub-image closest to the reference main image, is obtained as the second target sub-image. Depth of field information is then acquired from the reference main image and the second target sub-image.
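  • A sketch of this fallback path, assuming a scalar information function and a preset maximum difference; the names are illustrative.

```python
def pick_second_target_sub(ref_main, sub_frames, info, max_diff):
    """With no candidates, choose the sub-image whose image information is
    closest to the reference main image, subject to the preset condition;
    returns None if no sub-image satisfies it."""
    best = min(sub_frames, key=lambda s: abs(info(ref_main) - info(s)))
    return best if abs(info(ref_main) - info(best)) <= max_diff else None
```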
  • In summary, the depth of field acquisition method of the embodiments of the present application shoots multiple frames of main and sub-images and selects a pair of pictures in which both the main image and the sub-image have high sharpness and the image information is as close as possible, for calculating the depth of field and for the final imaging. This makes the depth calculation more accurate while ensuring image clarity and a better final image.
  • The depth of field acquisition method of the embodiments of the present application acquires a multi-frame main image captured by a main camera and a multi-frame sub-image captured by a sub-camera, calculates the sharpness of each frame of main image and sub-image, and obtains the reference main image with the highest sharpness; compares the sharpness of the remaining main images and of each sub-image with the sharpness of the reference main image and detects whether there are candidate main images and candidate sub-images satisfying the preset screening threshold; if at least one frame of candidate main image and at least one frame of candidate sub-image are detected, acquires the image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image, determines the first target main image and the first target sub-image, and acquires depth of field information according to them. Thereby the quality of and consistency between the images used to acquire the depth information are ensured, and the accuracy of the depth of field and the imaging effect are improved.
  • FIG. 5 is a schematic structural diagram of a depth of field acquisition device according to an embodiment of the present application.
  • As shown in FIG. 5, the depth of field acquisition device includes a first obtaining module 100, a second obtaining module 200, a detecting module 300, a third obtaining module 400, a determining module 500, and a fourth obtaining module 600.
  • the first obtaining module 100 is configured to acquire a multi-frame main image captured by the main camera and a multi-frame sub-image captured by the sub-camera.
  • The second obtaining module 200 is configured to obtain the reference main image with the highest sharpness according to the sharpness of each frame of main image and sub-image.
  • The detecting module 300 is configured to compare the sharpness of the remaining main images other than the reference main image and the sharpness of each sub-image with the sharpness of the reference main image, and to detect whether there are candidate main images and candidate sub-images satisfying the preset screening threshold.
  • the third obtaining module 400 is configured to acquire image information of the reference main image, each frame candidate main image, and each frame candidate sub-image when detecting that at least one frame candidate main image and at least one frame candidate sub-image exist.
  • the determining module 500 is configured to determine the first target primary image and the first target secondary image.
  • the fourth obtaining module 600 is configured to obtain depth information according to the first target main image and the first target sub image.
  • In an embodiment of the present application, the determining module 500 is specifically configured to sequentially compare the image information of the reference main image and of each candidate main image with the image information of each frame of candidate sub-image, and to acquire the two frames of images having the smallest image-information difference as the first target main image and the first target sub-image.
  • In an embodiment of the present application, the third obtaining module 400 is further configured to, when detecting that no candidate main image or candidate sub-image is present, take as the second target sub-image the sub-image among the multi-frame sub-images whose image-information difference from the image information of the reference main image satisfies the preset condition.
  • the fourth obtaining module 600 is further configured to obtain depth information according to the reference main image and the second target sub image.
  • It should be noted that the division of the above depth of field acquisition device into modules is for illustrative purposes only; in other embodiments, the depth of field acquisition device may be divided into different modules as needed to complete all or part of its functions.
  • The depth of field acquisition device of the embodiments of the present application acquires a multi-frame main image captured by a main camera and a multi-frame sub-image captured by a sub-camera, calculates the sharpness of each frame of main image and sub-image, and obtains the reference main image with the highest sharpness; compares the sharpness of the remaining main images and of each sub-image with the sharpness of the reference main image and detects whether there are candidate main images and candidate sub-images satisfying the preset screening threshold; if at least one frame of candidate main image and at least one frame of candidate sub-image are detected, acquires the image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image to determine the first target main image and the first target sub-image; and acquires depth of field information according to the first target main image and the first target sub-image.
  • In order to implement the above embodiments, the present application further provides a computer device, which is any device including a memory storing a computer program and a processor running the computer program, for example a smartphone, a personal computer, or the like.
  • the above computer device includes an image processing circuit, and the image processing circuit may be implemented by hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline.
  • Figure 6 is a schematic illustration of an image processing circuit in one embodiment. As shown in FIG. 6, for convenience of explanation, only various aspects of the image processing technique related to the embodiment of the present application are shown.
  • the image processing circuit includes an ISP processor 640 and a control logic 650.
  • the image data captured by imaging device 610 is first processed by ISP processor 640, which analyzes the image data to capture image statistical information that can be used to determine and/or control one or more control parameters of imaging device 610.
  • The imaging device 610 (camera) may include a camera having one or more lenses 612 and an image sensor 614. For implementing the method of the present application, the imaging device 610 includes two sets of cameras; with continued reference to FIG. 6, the imaging device 610 can simultaneously capture scene images with a primary camera and a secondary camera.
  • Image sensor 614 can include a color filter array (such as a Bayer filter); image sensor 614 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by ISP processor 640.
  • Sensor 620 can provide the raw image data to ISP processor 640 according to the sensor 620 interface type, and ISP processor 640 can calculate depth information and the like based on the raw image data acquired by the image sensor 614 in the primary camera and the raw image data acquired by the image sensor 614 in the secondary camera, both provided by sensor 620.
  • the sensor 620 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • the ISP processor 640 processes the raw image data pixel by pixel in a variety of formats.
  • each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 640 can perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Among them, image processing operations can be performed with the same or different bit depth precision.
  • ISP processor 640 can also receive pixel data from image memory 630. For example, raw pixel data is sent from the sensor 620 interface to image memory 630, which is then provided to ISP processor 640 for processing.
  • Image memory 630 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • ISP processor 640 can perform one or more image processing operations, such as time domain filtering.
  • the processed image data can be sent to image memory 630 for additional processing before being displayed.
  • The ISP processor 640 receives the processed data from the image memory 630 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • the processed image data can be output to display 670 for viewing by a user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). Additionally, the output of ISP processor 640 can also be sent to image memory 630, and display 670 can read image data from image memory 630.
  • image memory 630 can be configured to implement one or more frame buffers.
  • The output of ISP processor 640 can also be sent to encoder/decoder 660 to encode/decode the image data.
  • The encoded image data can be saved, and decoded before being displayed on the display 670.
  • Encoder/decoder 660 can be implemented by a CPU or GPU or coprocessor.
  • the statistics determined by the ISP processor 640 can be sent to the control logic 650 unit.
  • the statistics may include image sensor 614 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 612 shading correction, and the like.
  • Control logic 650 can include a processor and/or a microcontroller that executes one or more routines (such as firmware) that determine control parameters of imaging device 610 and control parameters of the ISP based on the received statistical data.
  • the control parameters may include sensor 620 control parameters (eg, gain, integration time for exposure control), camera flash control parameters, lens 612 control parameters (eg, focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 612 shading correction parameters.
  • In this way, depth of field information is acquired based on the first target main image and the first target sub-image.
  • the present application also proposes a non-transitory computer readable storage medium that enables execution of the depth of field acquisition method as described in the above embodiments when instructions in the storage medium are executed by the processor.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it if necessary, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • For example, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • If, as in another embodiment, they are implemented in hardware, they can be implemented by any one or a combination of the following techniques well known in the art: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method, apparatus, and device for obtaining depth of field. The method comprises: obtaining multiple frames of main images captured by a main camera and multiple frames of sub-images captured by a sub-camera, and obtaining the reference main image with the highest sharpness according to the sharpness of each frame of main image and sub-image; comparing the sharpness of the main images other than the reference main image and the sharpness of each frame of sub-image with the sharpness of the reference main image, to detect whether candidate main images and candidate sub-images satisfying a preset screening threshold exist; if so, obtaining image information of the reference main image, each frame of candidate main image, and each frame of candidate sub-image, to determine a first target main image and a first target sub-image; and obtaining depth of field information according to the first target main image and the first target sub-image. The quality of and consistency between the images from which the depth of field information is obtained are thereby ensured, and the accuracy of the depth of field and the imaging effect are improved.
PCT/CN2018/116474 2017-11-30 2018-11-20 Method, apparatus and device for obtaining depth of field WO2019105260A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711243742.7 2017-11-30
CN201711243742.7A CN108053438B (zh) Depth of field acquisition method, apparatus and device

Publications (1)

Publication Number Publication Date
WO2019105260A1 true WO2019105260A1 (fr) 2019-06-06

Family

ID=62121752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116474 WO2019105260A1 (fr) Method, apparatus and device for obtaining depth of field

Country Status (2)

Country Link
CN (1) CN108053438B (fr)
WO (1) WO2019105260A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113936258A (zh) Image processing method and apparatus, electronic device, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053438B (zh) 2020-03-06 Depth of field acquisition method, apparatus and device
CN108900766A (zh) 2018-11-27 Panoramic image automatic enhancement apparatus and method, and panoramic camera using the same
CN109754439B (zh) 2023-07-21 Calibration method and apparatus, electronic device, and medium
CN110310515B (zh) 2020-11-03 On-site information recognition and feedback system
CN115829911A (zh) 2023-03-21 Method, apparatus and computer storage medium for detecting imaging consistency of a detection system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763477A (zh) 2014-04-30 Dual-camera post-capture refocusing imaging apparatus and method
US20160019681A1 (en) 2016-01-21 Image processing method and electronic device using the same
CN106851124A (zh) 2017-06-13 Depth-of-field-based image processing method, processing apparatus and electronic apparatus
CN108053438A (zh) 2018-05-18 Depth of field acquisition method, apparatus and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE518050C2 (sv) 2002-08-20 Camera that combines sharply focused parts from different exposures into a final image
CN106550184B (zh) 2020-04-03 Photo processing method and apparatus
CN105957053B (zh) 2019-01-01 Method and apparatus for generating depth of field of a two-dimensional image
CN106954020B (zh) 2019-10-15 Image processing method and terminal


Also Published As

Publication number Publication date
CN108053438A (zh) 2018-05-18
CN108053438B (zh) 2020-03-06

Similar Documents

Publication Publication Date Title
  • KR102278776B1 (ko) Image processing method, device, and apparatus
  • US10757312B2 (en) Method for image-processing and mobile terminal using dual cameras
  • US10878539B2 (en) Image-processing method, apparatus and device
  • WO2019105262A1 (fr) Background blur processing method, apparatus and device
  • WO2019105260A1 (fr) Method, apparatus and device for obtaining depth of field
  • KR102306283B1 (ko) Image processing method and apparatus
  • WO2019109805A1 (fr) Image processing method and device
  • EP3480784B1 (fr) Image processing method and device
  • CN107509031B (zh) Image processing method and apparatus, mobile terminal, and computer-readable storage medium
  • CN107481186B (zh) Image processing method and apparatus, computer-readable storage medium, and computer device
  • WO2019105261A1 (fr) Background blurring method, apparatus and device
  • WO2019105254A1 (fr) Background blur processing method, apparatus and device
  • JP6999802B2 (ja) Method and apparatus for dual-camera-based imaging
  • CN107704798B (zh) Image blurring method and apparatus, computer-readable storage medium, and computer device
  • CN109685853B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
  • CN107872631B (zh) Dual-camera-based image capturing method and apparatus, and mobile terminal
  • WO2019019890A1 (fr) Image processing method, computer device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18883333

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18883333

Country of ref document: EP

Kind code of ref document: A1