CN108053438A - Depth of field acquisition methods, device and equipment - Google Patents


Info

Publication number
CN108053438A
Authority
CN
China
Prior art keywords
image
picture
master image
sub
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711243742.7A
Other languages
Chinese (zh)
Other versions
CN108053438B (en)
Inventor
欧阳丹
谭国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711243742.7A (granted as CN108053438B)
Publication of CN108053438A
PCT application PCT/CN2018/116474 (published as WO2019105260A1)
Application granted
Publication of CN108053438B
Legal status: Active
Anticipated expiration: (date not listed)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio


Abstract

The present application proposes a depth of field acquisition method, device and equipment. The method includes: obtaining multiple frames of master images captured by a main camera and multiple frames of sub-images captured by a secondary camera; obtaining, according to the sharpness of each frame of master image and each frame of sub-image, the master image with the highest sharpness as a reference master image; comparing the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detecting whether there exist candidate master images and candidate sub-images that meet a preset screening threshold; if so, obtaining image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image, and determining a first target master image and a first target sub-image; and obtaining depth of field information according to the first target master image and the first target sub-image. This ensures the quality of, and the consistency between, the images used to obtain depth of field information, improving the accuracy of the depth of field and the imaging effect.

Description

Depth of field acquisition methods, device and equipment
Technical field
The present application relates to the technical field of image processing, and in particular to a depth of field acquisition method, device and equipment.
Background technology
At present, dual-camera systems are widely used in terminal devices such as smartphones. Two images are captured simultaneously by the two cameras and used to calculate the depth of field; for example, the depth of field information of the photographed scene is calculated from the positional difference, between the two images, of pixels corresponding to the same position in the scene.
In the related art, depth of field information is calculated directly from the two images captured simultaneously by the two cameras. When the two images used to calculate the depth of field differ significantly, few pixels can be matched for the same scene position across the two images, resulting in low depth-calculation accuracy.
Summary of the application
The present application provides a depth of field acquisition method, device and equipment, to solve the technical problem in the prior art that depth of field calculation is inaccurate because the two images used to calculate depth information differ too much.
An embodiment of the present application provides a depth of field acquisition method, including: obtaining multiple frames of master images captured by a main camera and multiple frames of sub-images captured by a secondary camera; obtaining, according to the sharpness of each frame of master image and each frame of sub-image, the master image with the highest sharpness as a reference master image; comparing the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detecting whether there exist candidate master images and candidate sub-images that meet a preset screening threshold; if it is detected that at least one frame of candidate master image and at least one frame of candidate sub-image exist, obtaining image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image, and determining a first target master image and a first target sub-image; and obtaining depth of field information according to the first target master image and the first target sub-image.
Another embodiment of the present application provides a depth of field acquisition device, including: a first acquisition module, configured to obtain multiple frames of master images captured by a main camera and multiple frames of sub-images captured by a secondary camera; a second acquisition module, configured to obtain, according to the sharpness of each frame of master image and each frame of sub-image, the master image with the highest sharpness as a reference master image; a detection module, configured to compare the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and to detect whether there exist candidate master images and candidate sub-images that meet a preset screening threshold; a third acquisition module, configured to obtain, when it is detected that at least one frame of candidate master image and at least one frame of candidate sub-image exist, the image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image, and to determine a first target master image and a first target sub-image; and a fourth acquisition module, configured to obtain depth of field information according to the first target master image and the first target sub-image.
Yet another embodiment of the present application provides a computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the depth of field acquisition method described in the above embodiments of the application.
A further embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the depth of field acquisition method as described in the above embodiments of the application.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
Multiple frames of master images captured by the main camera and multiple frames of sub-images captured by the secondary camera are obtained, the sharpness of each frame of master image and each frame of sub-image is calculated, and the master image with the highest sharpness is obtained as the reference master image. The sharpness of the remaining master images and of each frame of sub-image is compared with the sharpness of the reference master image to detect whether candidate master images and candidate sub-images meeting the preset screening threshold exist. If it is detected that at least one frame of candidate master image and at least one frame of candidate sub-image exist, the image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image is obtained, the first target master image and the first target sub-image are determined, and depth of field information is then obtained according to the first target master image and the first target sub-image. This ensures the quality of, and the consistency between, the images used to obtain depth of field information, improving the accuracy of the depth of field and the imaging effect.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the triangulation principle according to an embodiment of the application;
Fig. 2 is a schematic diagram of the process of calculating the depth of field with dual cameras according to an embodiment of the application;
Fig. 3 is a flow chart of a depth of field acquisition method according to an embodiment of the application;
Fig. 4(a) is a schematic diagram of a scenario of a depth of field acquisition method according to an embodiment of the application;
Fig. 4(b) is a schematic diagram of a scenario of a depth of field acquisition method according to another embodiment of the application;
Fig. 5 is a structural diagram of a depth of field acquisition device according to an embodiment of the application; and
Fig. 6 is a schematic diagram of an image processing circuit according to an embodiment of the application.
Specific embodiment
Embodiments of the present application are described in detail below, examples of which are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the application; they shall not be construed as limiting the application.
The depth of field acquisition method, device and equipment of the embodiments of the present application are described below with reference to the drawings. The depth of field acquisition method of the embodiments is suitable for hardware devices with dual cameras, such as mobile phones, tablet computers, personal digital assistants and wearable devices; the wearable device may be a smart bracelet, a smartwatch, smart glasses, or the like.
It should be understood that a dual-camera system calculates the depth of field from a master image and a sub-image. To describe more clearly how dual cameras obtain depth of field information, the principle by which dual cameras obtain the depth of field is explained below with reference to the drawings:
In practice, the human eye perceives depth mainly through binocular vision, which works on the same principle as depth resolution with dual cameras, namely the triangulation principle shown in Fig. 1. Fig. 1 depicts, in real space, the imaged object, the positions O_R and O_T of the two cameras, and the focal plane of the two cameras; the focal plane is at a distance f from the plane in which the two cameras lie, and the two cameras form images at the focal plane, thereby obtaining two captured images.
P and P' are the positions of the same object in the two different captured images. The distance from point P to the left border of its captured image is X_R, and the distance from point P' to the left border of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
Based on the triangulation principle, the distance Z between the object in Fig. 1 and the plane in which the two cameras lie satisfies the following relation:
(B - (X_R - X_T)) / B = (Z - f) / Z
From this it can be derived that Z = B · f / d, where d = X_R - X_T is the difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
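As an illustrative sketch (not code from the patent), the relation Z = B · f / d can be evaluated per matched point pair; the function name and units below are assumptions for illustration only:

```python
def depth_from_disparity(baseline_b: float, focal_f: float, disparity_d: float) -> float:
    """Triangulation relation Z = B * f / d.

    baseline_b:  distance B between the two cameras (e.g. metres)
    focal_f:     distance f from the camera plane to the focal plane
    disparity_d: positional difference d = X_R - X_T of the same object
                 in the two captured images (same units as B)
    """
    if disparity_d == 0:
        # Zero disparity corresponds to an object at infinity.
        raise ValueError("zero disparity: depth is unbounded")
    return baseline_b * focal_f / disparity_d

# Halving the disparity doubles the computed distance, since B and f are fixed.
near = depth_from_disparity(0.02, 0.004, 0.0008)
far = depth_from_disparity(0.02, 0.004, 0.0004)
```

Since B and f are fixed by the camera module, a per-pixel disparity map converts into a per-pixel depth map through this single division.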
Of course, besides triangulation, other methods may be used to calculate the depth of field of the master image. For example, when the main camera and the secondary camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to quantities such as the displacement difference and the posture difference between the images formed by the main camera and the secondary camera; therefore, in one embodiment of the application, the above distance Z may be obtained from this proportional relationship.
For example, as shown in Fig. 2, from the master image obtained by the main camera and the sub-image obtained by the secondary camera, a map of the positional differences is calculated, represented here as a disparity map. This map represents the displacement difference of identical points in the two images; since the displacement difference in triangulation determines Z directly (Z = B · f / d), the disparity map is often used directly as the depth map.
As can be understood from the above analysis, when dual cameras obtain the depth of field, the positions of the same object in the different captured images must be obtained. Therefore, if the two images used by the dual cameras to obtain depth of field information are closer to each other, the efficiency and accuracy of depth of field acquisition can be improved.
Fig. 3 is a flow chart of a depth of field acquisition method according to an embodiment of the application. As shown in Fig. 3, the method includes:
Step 101: obtain multiple frames of master images captured by the main camera and multiple frames of sub-images captured by the secondary camera.
Step 102: obtain, according to the sharpness of each frame of master image and each frame of sub-image, the master image with the highest sharpness as the reference master image.
The sharpness of an image refers to the clarity of its contour edges. It covers the distinction between image lines, i.e. the resolution of scene detail or fine texture at each image level: the higher the resolution, the finer the rendering of scene detail, and the higher the sharpness. It also covers whether line-edge contours are clear, i.e. the degree of definition of contour boundaries across image levels, commonly expressed as acutance; in essence, this is the width of the transition in boundary gradient density: if the transition width is small, the boundary is sharp; conversely, a large transition width makes the boundary soft. Sharpness further covers the clarity between fine levels, in particular whether the light-dark contrast or subtle contrast between fine levels is distinct.
That is, the higher the sharpness of an image, the easier details such as edges are to distinguish and the less noise there is, and the higher the efficiency and accuracy of depth of field calculation based on the image.
Specifically, in this embodiment, the multiple frames of master images captured by the main camera and the multiple frames of sub-images captured by the secondary camera are obtained, the sharpness of each frame of master image and each frame of sub-image is calculated, and the master image with the highest sharpness is obtained as the reference master image, so that, with this reference master image as the benchmark, images of high sharpness can be screened out as far as possible as the images for further calculating the depth of field.
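The patent does not commit to a particular sharpness measure; the variance of a Laplacian response is one commonly used proxy for edge clarity. The following is a minimal pure-Python sketch under that assumption, with grayscale images represented as 2-D lists:

```python
def laplacian_sharpness(gray):
    """Variance of a 3x3 Laplacian response as a sharpness score.

    gray: 2-D list of pixel intensities. A higher variance means more
    high-frequency edge detail, i.e. a sharper image.
    """
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4-neighbours minus 4x the centre.
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

The reference master image would then simply be the frame maximising this score, e.g. `max(frames, key=laplacian_sharpness)`.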
Step 103: compare the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detect whether there exist candidate master images and candidate sub-images that meet a preset screening threshold.
Wherein, preset screening threshold value and be used to filter out clarity compared with the higher master image of the clarity with reference to master image And sub-picture, for example, the default screening threshold value is 80%, then it can be filtered out by the default screening threshold value and meet clarity and reach To the image of the clarity more than 80% with reference to master image.Specifically, by the clear of the clarity of remaining master image and sub-picture It spends compared with the clarity with reference to master image, on the basis of the reference master image higher by clarity, detects whether to exist full The default screening candidate's master image of threshold value of foot and candidate's sub-picture, to determine whether the master image and pair that have clarity higher Image.As a result, based on clarity progress candidate's master image of reference picture and determining for candidate's sub-picture, current scene has been considered The ability of taking pictures of lower terminal device, improves the flexibility for filtering out candidate's master image and candidate's sub-picture.
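The screening step above can be sketched as follows; the 80% default and the dict-based interface are illustrative assumptions taken from the example, not a fixed part of the method:

```python
def screen_candidates(ref_sharpness, master_sharpness, sub_sharpness, threshold=0.8):
    """Keep master/sub frames whose sharpness reaches the preset fraction
    (e.g. 80%) of the reference master image's sharpness.

    master_sharpness / sub_sharpness: dicts mapping frame id -> sharpness.
    Returns (candidate_masters, candidate_subs) as lists of frame ids.
    """
    cutoff = threshold * ref_sharpness
    candidate_masters = [fid for fid, s in master_sharpness.items() if s >= cutoff]
    candidate_subs = [fid for fid, s in sub_sharpness.items() if s >= cutoff]
    return candidate_masters, candidate_subs
```

If both returned lists are non-empty, the method proceeds to step 104; otherwise the fallback described later (pairing the reference master image with the closest sub-image) applies.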
Step 104: if it is detected that at least one frame of candidate master image and at least one frame of candidate sub-image exist, obtain the image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image, and determine the first target master image and the first target sub-image.
Step 105: calculate depth of field information according to the first target master image and the first target sub-image.
Specifically, in one embodiment of the application, if it is detected that at least one frame of candidate master image and at least one frame of candidate sub-image exist, this shows that master images and sub-images of higher sharpness are available; if the depth of field is calculated using candidate master images and candidate sub-images of higher sharpness, the efficiency and accuracy of depth of field calculation can be improved.
In actual implementation, the closer the master image and sub-image used to calculate the depth of field are to each other, the smaller the interference in the calculation and the more accurate the calculated depth of field. Therefore, in the embodiments of the application, the image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image is obtained and compared, the image information including, but not limited to, information that affects depth of field calculation, such as image sharpness, brightness and AWB (automatic white balance). The first target master image and the first target sub-image are then determined, for example by obtaining the first target master image and the first target sub-image whose image difference meets a preset condition, so that depth of field information is obtained according to the first target master image and the first target sub-image. This guarantees accurate calculation of the depth of field information, so that the final imaging effect is good.
The above preset condition is related to the specific information included in the image information, to the photographing hardware capability of the terminal device, and to the photographing environment: the richer the specific information included, the weaker the photographing hardware of the terminal device, and the more insufficient the ambient light, the looser the preset condition and the larger the corresponding image-difference value. For example, under the same photographing hardware and environment, when the image information includes both image sharpness and brightness, the preset condition may be that the image-information difference between the reference master image, each frame of candidate master image and each frame of candidate sub-image is 10%, whereas when the image information includes only image sharpness, the preset condition may be that this difference is 15%.
To explain more clearly how to judge whether the difference between the image information of the reference master image, of each frame of candidate master image and of each frame of candidate sub-image meets the preset condition, two examples are given below, one in which the image information includes one type of information and one in which it includes multiple types:
First example: in this example, the obtained image information is one type of information, for example image brightness.
Specifically, in this example, the image information of the reference master image and of each frame of candidate master image is compared in turn with the image information of each frame of candidate sub-image, and the two frames with the minimum image-information difference are obtained as the first target master image and the first target sub-image.
For example, when the image information is brightness information, as shown in Fig. 4(a), the main camera and the secondary camera shoot simultaneously, obtaining 4 frames of master images and 4 frames of sub-images, where the 4 master images are numbered 11, 12, 13 and 14 in shooting order and the 4 sub-images are numbered 21, 22, 23 and 24. The reference master image with the highest sharpness is 11, the candidate master images are 12 and 13, and the candidate sub-images are 22 and 24. The image brightness of the reference master image and of each frame of candidate master image is compared in turn with the image brightness of each frame of candidate sub-image, and the two frames with the minimum brightness difference are obtained as the first target master image 12 and the first target sub-image 22. The depth of field information calculated from the first target master image 12 and the first target sub-image 22 is thus more accurate, and the final imaging based on this depth information is better.
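A minimal sketch of this single-type comparison, assuming per-frame brightness values keyed by frame number, with the reference master image included among the master-side frames; the values used in the test are invented for illustration:

```python
def pick_target_pair(master_info, sub_info):
    """Exhaustively pair each master-side frame (reference master image plus
    candidate master images) with each candidate sub-image and return the
    (master_id, sub_id) pair with the minimum absolute information difference.

    master_info / sub_info: dicts mapping frame id -> scalar image
    information (e.g. brightness).
    """
    best = None
    for mid, mval in master_info.items():
        for sid, sval in sub_info.items():
            diff = abs(mval - sval)
            if best is None or diff < best[0]:
                best = (diff, mid, sid)
    return best[1], best[2]
```

With four master-side frames and two candidate sub-images this is only eight comparisons, so the exhaustive search is cheap relative to the depth calculation itself.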
Second example: in this example, the obtained image information is multiple types of information, for example including image brightness, image white balance value and image sharpness.
Specifically, in this example, a weight factor corresponding to each type of information is obtained; the weight value corresponding to each weight factor may be calibrated by the system or set by the user as needed for the scene. Each type of image information of the reference master image and of each frame of candidate master image is compared in turn with the corresponding type of image information of each frame of candidate sub-image, to obtain, for every pair of frames, the information difference of each type of image information. According to the information differences of the types of image information between each pair of frames and the weight factor corresponding to each type, the combined information difference corresponding to each pair of frames is obtained, and the two frames with the minimum information difference are taken as the first target master image and the first target sub-image.
For example, when the image information is image brightness, AWB and sharpness, with corresponding weight factors of 50%, 20% and 30% respectively, as shown in Fig. 4(b), the main camera and the secondary camera shoot simultaneously, obtaining 4 frames of master images numbered 11, 12, 13 and 14 in shooting order and 4 frames of sub-images numbered 21, 22, 23 and 24. The reference master image with the highest sharpness is 11, the candidate master images are 12 and 13, and the candidate sub-images are 22 and 24. The image brightness, AWB and sharpness of the reference master image and of each frame of candidate master image are compared in turn with those of each frame of candidate sub-image. If the differences in image brightness, AWB and sharpness between the reference master image 11 and the candidate sub-image 22 are a1, a2 and a3 respectively, the information difference between the reference master image 11 and the candidate sub-image 22 is b1 = a1*50% + a2*20% + a3*30%. By analogy, the information difference b2 between the reference master image 11 and the candidate sub-image 24, the information difference b3 between the candidate master image 12 and the candidate sub-image 22, the information difference b4 between the candidate master image 12 and the candidate sub-image 24, the information difference b5 between the candidate master image 13 and the candidate sub-image 22, and the information difference b6 between the candidate master image 13 and the candidate sub-image 24 are obtained. The two frames with the minimum information difference are then obtained as the first target master image 12 and the first target sub-image 22; the depth of field information calculated from the first target master image 12 and the first target sub-image 22 is thus more accurate, and the final imaging based on this depth information is better.
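The weighted combination b = a1·w1 + a2·w2 + a3·w3 generalises to any number of information types. A sketch under the assumption that each frame is described by a feature tuple (e.g. brightness, AWB, sharpness) and the 50%/20%/30% weights of the example; the feature values in the test are invented for illustration:

```python
def weighted_pair(master_feats, sub_feats, weights=(0.5, 0.2, 0.3)):
    """Combine per-type information differences with weight factors and
    return the (master_id, sub_id) pair with the minimum weighted difference.

    master_feats / sub_feats: dicts mapping frame id -> tuple of feature
    values, one entry per weight factor.
    """
    best = None
    for mid, mf in master_feats.items():
        for sid, sf in sub_feats.items():
            # b = sum over types of weight * |master feature - sub feature|
            b = sum(w * abs(m - s) for w, m, s in zip(weights, mf, sf))
            if best is None or b < best[0]:
                best = (b, mid, sid)
    return best[1], best[2]
```

Features on very different scales (brightness in [0, 255] versus a white-balance gain near 1.0) would in practice be normalised before weighting; that step is omitted here for brevity.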
On the basis of the above examples, it should be noted that the specific types of image information may depend on one or more of the scene information of the shot and the shooting mode. For example, when the light in the scene is poor, the captured master images and sub-images are of low quality, and determining the first target master image and first target sub-image for calculating the depth of field from only one type of image information may not be reliable; in that case, multiple types of image information need to be considered. Conversely, when the light in the scene is good, the captured master images and sub-images are of higher quality, and determining the first target master image and first target sub-image from only one type of image information is sufficiently reliable; thus, to improve image processing efficiency, a single type of image information may be considered.
As another example, when the current shooting mode is night-scene shooting, the requirement on brightness information is higher; the light is poor, the captured master images and sub-images are of low quality, and determining the first target master image and first target sub-image from only one type of image information may not be reliable, so multiple types of image information need to be considered. As yet another example, in a strong-light shooting mode the most likely problem is overexposure; thus, to improve image processing efficiency, the single type of AWB information may be considered to determine the first target master image and the first target sub-image.
Specifically, in this example, the scene information and/or shooting mode of the shot is detected, and the type of image information to be obtained is then determined according to the scene information and/or shooting mode. For example, a correspondence between scene information and/or shooting modes and image information types may be stored in advance; after the current scene information and/or shooting mode is known, the correspondence is queried to obtain the corresponding image information type.
It should be emphasized that the above embodiments include implementations in which the scene information alone or the shooting mode alone is used to determine the type of image information, as well as implementations in which the scene information and the shooting mode are used together.
In one embodiment of the application, if it is detected that no candidate master image or candidate sub-image exists, i.e. the sharpness of the remaining master images and sub-images may be relatively low, the reference master image with the higher sharpness is used as one frame of the images for obtaining depth of field information. The image information of the reference master image and of each frame of sub-image is obtained and compared, to obtain a second target sub-image whose image-information difference meets the preset condition; this second target sub-image is the frame of sub-image closest to the reference master image. Depth of field information is then obtained from the reference master image and the second target sub-image.
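This fallback branch can be sketched as follows; the relative-difference form of the preset condition and the 15% bound are assumptions for illustration, not values fixed by the patent:

```python
def fallback_pair(ref_id, ref_info, sub_info, preset_condition=0.15):
    """When no candidate master/sub images pass the sharpness screening,
    keep the reference master image itself and pick the sub-image frame
    whose image information is closest to it (the second target sub-image).

    ref_info: scalar image information of the reference master image.
    sub_info: dict mapping sub-image frame id -> scalar image information.
    Returns (ref_id, sub_id), or None if even the closest sub-image does
    not meet the assumed relative-difference bound.
    """
    sid, sval = min(sub_info.items(), key=lambda kv: abs(kv[1] - ref_info))
    rel_diff = abs(sval - ref_info) / ref_info
    return (ref_id, sid) if rel_diff <= preset_condition else None
```

Returning None would signal that no frame pair satisfies the preset condition; an implementation might then relax the condition or trigger a re-shoot, though the patent does not specify this case.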
Thus, in the depth of field acquisition method of the embodiments of the application, multiple frames of master and sub-images are shot, and a group of pictures is selected from them in which the master image sharpness is good, the sharpness of both master and sub-images is high, and their image information is as close as possible, for calculating the depth of field and for final imaging. In this way, the depth of field can be calculated accurately while the imaging sharpness is guaranteed, and the final imaging effect is better.
In conclusion the depth of field acquisition methods of the embodiment of the present application, obtain main camera shooting multiframe master image and The multiframe sub-picture of secondary camera shooting, calculates the clarity per frame master image and per frame sub-picture, it is highest to obtain clarity With reference to master image, the clarity of remaining master image and the clarity per frame sub-picture are compared with the clarity with reference to master image Compared with detecting whether to exist and meet default screening candidate's master image of threshold value and candidate's sub-picture, if detection is known in the presence of at least one Frame candidate master image and at least frame candidate's sub-picture are then obtained with reference to master image, per frame candidate master image and per frame candidate The image information of sub-picture determines first object master image and first object sub-picture, and then, according to first object master image and First object sub-picture obtains depth of view information.Thereby it is ensured that quality and uniformity between obtaining the image of depth of view information, carry The high accurate rate and imaging effect of the depth of field.
To implement the above embodiments, the application also proposes a depth of field acquisition device. Fig. 5 is a structural diagram of the depth of field acquisition device according to one embodiment of the application. As shown in Fig. 5, the depth of field information acquisition device includes a first acquisition module 100, a second acquisition module 200, a detection module 300, a third acquisition module 400, a determining module 500, and a fourth acquisition module 600.
The first acquisition module 100 is configured to obtain multiple frames of master images shot by a main camera and multiple frames of sub-images shot by a secondary camera.
The second acquisition module 200 is configured to obtain the reference master image with the highest sharpness according to the sharpness of each frame of master image and each frame of sub-image.
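The embodiments do not fix a particular sharpness metric; the variance of the Laplacian response is one common choice. A minimal sketch under that assumption, operating on grayscale frames as NumPy arrays:

```python
import numpy as np

def sharpness(gray):
    """Variance of the 4-neighbour Laplacian response: higher means sharper."""
    g = np.asarray(gray, dtype=np.float64)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

def pick_reference(master_frames):
    """Return (index, frame) of the sharpest master frame."""
    scores = [sharpness(f) for f in master_frames]
    best = int(np.argmax(scores))
    return best, master_frames[best]
```

A defocused frame has weak high-frequency content, so its Laplacian variance is low; the sharpest frame scores highest and becomes the reference master image.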
The detection module 300 is configured to compare the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detect whether candidate master images and candidate sub-images meeting a preset screening threshold exist.
The third acquisition module 400 is configured to obtain the image information of the reference master image, each frame of candidate master image, and each frame of candidate sub-image when at least one frame of candidate master image and at least one frame of candidate sub-image are detected.
The determining module 500 is configured to determine a first target master image and a first target sub-image.
The fourth acquisition module 600 is configured to obtain depth of field information according to the first target master image and the first target sub-image.
In one embodiment of the application, when the obtained image information is one type of information, the second acquisition module 200 is specifically configured to compare the image information of the reference master image and of each frame of candidate master image in turn with the image information of each frame of candidate sub-image, and take the two frames with the smallest image information difference as the first target master image and the first target sub-image.
In one embodiment of the application, the third acquisition module 400 is further configured to, when it is detected that no candidate master image or candidate sub-image exists, take the sub-image, among the multiple frames of sub-images, whose image information difference from the image information of the reference master image meets a preset condition as a second target sub-image.
The fourth acquisition module 600 is further configured to obtain depth of field information according to the reference master image and the second target sub-image.
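How depth is actually derived from the chosen master/sub pair is left to the dual-camera geometry; one standard approach is block matching to find the disparity, followed by pinhole triangulation (Z = f·B/d). A minimal one-dimensional sketch, assuming rectified image rows and an illustrative focal length and baseline (all names and values here are assumptions, not taken from the patent):

```python
import numpy as np

def disparity_1d(row_master, row_sub, x, win, max_d):
    """SAD block matching: compare the window around column x of the master
    row against windows shifted left by d in the sub row; return the best d."""
    patch = np.asarray(row_master[x - win:x + win + 1], dtype=np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        lo = x - d - win
        if lo < 0:
            break
        cand = np.asarray(row_sub[lo:lo + 2 * win + 1], dtype=np.float64)
        cost = np.abs(patch - cand).sum()  # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(d_px, focal_px, baseline_m):
    """Pinhole triangulation for a dual-camera rig: Z = f * B / d."""
    return focal_px * baseline_m / d_px
```

For example, a feature at master column 11 that appears three pixels to the left in the sub row yields disparity 3; with an assumed 1000-pixel focal length and 12 mm baseline, that triangulates to about 4 m.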
It should be noted that the foregoing description of the method embodiments also applies to the device of the embodiments of the application; the implementation principles are similar and are not repeated here.
The division of the modules in the above depth of field acquisition device is for illustration only. In other embodiments, the depth of field acquisition device may be divided into different modules as required, to complete all or part of the functions of the device.
In conclusion the depth of field acquisition device of the embodiment of the present application, obtain main camera shooting multiframe master image and The multiframe sub-picture of secondary camera shooting, calculates the clarity per frame master image and per frame sub-picture, it is highest to obtain clarity With reference to master image, the clarity of remaining master image and the clarity per frame sub-picture are compared with the clarity with reference to master image Compared with detecting whether to exist and meet default screening candidate's master image of threshold value and candidate's sub-picture, if detection is known in the presence of at least one Frame candidate master image and at least frame candidate's sub-picture are then obtained with reference to master image, per frame candidate master image and per frame candidate The image information of sub-picture determines first object master image and first object sub-picture, and then, according to first object master image and First object sub-picture obtains depth of view information.Thereby it is ensured that quality and uniformity between obtaining the image of depth of view information, carry The high accurate rate and imaging effect of the depth of field.
To implement the above embodiments, the application also proposes a computer device, where the computer device is any device including a memory storing a computer program and a processor running the computer program, for example a smartphone or a personal computer. The computer device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 6 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 6, for ease of description, only the aspects of the image processing technique related to the embodiments of the application are shown.
As shown in Fig. 6, the image processing circuit includes an ISP processor 640 and a control logic device 650. Image data captured by an imaging device 610 is first processed by the ISP processor 640, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 610. The imaging device 610 (camera) may include a camera with one or more lenses 612 and an image sensor 614; to implement the background blurring processing method of the application, the imaging device 610 includes two groups of cameras. With continued reference to Fig. 6, the imaging device 610 can capture scene images with the main camera and the secondary camera simultaneously. The image sensor 614 may include a color filter array (such as a Bayer filter) and can obtain the light intensity and wavelength information captured by each of its imaging pixels, providing a set of raw image data that can be processed by the ISP processor 640. A sensor 620 can provide the raw image data to the ISP processor 640 according to the sensor 620 interface type; the ISP processor 640 can calculate depth of field information and the like based on the raw image data obtained by the image sensor 614 of the main camera and the raw image data obtained by the image sensor 614 of the secondary camera, both provided by the sensor 620. The sensor 620 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The ISP processor 640 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The ISP processor 640 can perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations may be performed with the same or different bit depth precision.
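As a concrete illustration of handling different bit depths (not part of the patent itself), a raw pixel value can be rescaled between depths with integer rounding:

```python
def rescale_bits(value, src_bits, dst_bits):
    """Rescale a pixel value between bit depths, e.g. 10-bit raw to 8-bit,
    rounding to the nearest output code."""
    src_max = (1 << src_bits) - 1  # e.g. 1023 for 10-bit
    dst_max = (1 << dst_bits) - 1  # e.g. 255 for 8-bit
    return (value * dst_max + src_max // 2) // src_max
```

For example, the 10-bit full-scale value 1023 maps to 255 in 8 bits, and the 10-bit mid-grey value 512 maps to 128.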
The ISP processor 640 can also receive pixel data from an image memory 630. For example, raw pixel data is sent from the sensor 620 interface to the image memory 630, and the raw pixel data in the image memory 630 is then provided to the ISP processor 640 for processing. The image memory 630 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the sensor 620 interface or from the image memory 630, the ISP processor 640 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 630 for additional processing before being displayed. The ISP processor 640 receives the processed data from the image memory 630 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 670 for viewing by a user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 640 can also be sent to the image memory 630, and the display 670 can read image data from the image memory 630. In one embodiment, the image memory 630 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 640 can be sent to an encoder/decoder 660 to encode/decode the image data; the encoded image data can be saved and decompressed before being shown on the display 670. The encoder/decoder 660 can be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 640 can be sent to the control logic device 650. For example, the statistics may include image sensor 614 statistical information such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 612 shading correction. The control logic device 650 may include a processor and/or microcontroller that executes one or more routines (such as firmware), which determine, according to the received statistics, control parameters of the imaging device 610 and ISP control parameters. For example, the control parameters may include sensor 620 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 612 control parameters (such as focusing or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and a color correction matrix for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 612 shading correction parameters.
The following are the steps of implementing the depth of field acquisition method with the image processing technique in Fig. 6:
obtaining multiple frames of master images shot by a main camera and multiple frames of sub-images shot by a secondary camera;
obtaining the reference master image with the highest sharpness according to the sharpness of each frame of master image and each frame of sub-image;
comparing the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detecting whether candidate master images and candidate sub-images meeting a preset screening threshold exist;
if at least one frame of the candidate master images and at least one frame of the candidate sub-images are detected, obtaining the image information of the reference master image, each frame of candidate master image, and each frame of candidate sub-image, and determining a first target master image and a first target sub-image;
obtaining depth of field information according to the first target master image and the first target sub-image.
To implement the above embodiments, the application also proposes a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the depth of field acquisition method described in the above embodiments can be performed.
In the description of this specification, a description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not conflict with each other, those skilled in the art may combine different embodiments or examples described in this specification, as well as features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process; and the scope of the preferred embodiments of the application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logic functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device, or apparatus and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the application can be implemented with hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented with hardware, as in another embodiment, any of the following techniques well known in the art, or a combination thereof, can be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those skilled in the art can understand that all or part of the steps carried by the above method embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the application can be integrated in one processing module, or each unit can exist alone physically, or two or more units can be integrated in one module. The above integrated module can be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be understood as limiting the application; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the application.

Claims (10)

1. A depth of field acquisition method, characterized by comprising:
obtaining multiple frames of master images shot by a main camera and multiple frames of sub-images shot by a secondary camera;
obtaining the reference master image with the highest sharpness according to the sharpness of each frame of master image and each frame of sub-image;
comparing the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detecting whether candidate master images and candidate sub-images meeting a preset screening threshold exist;
if at least one frame of the candidate master images and at least one frame of the candidate sub-images are detected, determining a first target master image and a first target sub-image according to the image information of the reference master image, each frame of candidate master image, and each frame of candidate sub-image;
obtaining depth of field information according to the first target master image and the first target sub-image.
2. The method according to claim 1, characterized in that, after the detecting whether candidate master images and candidate sub-images meeting a preset screening threshold exist, the method further comprises:
if it is detected that no candidate master image or candidate sub-image exists, taking the sub-image, among the multiple frames of sub-images, whose image information difference from the image information of the reference master image meets a preset condition as a second target sub-image;
obtaining depth of field information according to the reference master image and the second target sub-image.
3. The method according to claim 1, characterized in that, when the obtained image information is one type of information, wherein the type of information comprises image brightness, image white balance value, or image resolution,
the obtaining and comparing the image information of the reference master image, each frame of candidate master image, and each frame of candidate sub-image, and obtaining the first target master image and the first target sub-image whose image information difference meets a preset condition, comprises:
comparing the image information of the reference master image and of each frame of candidate master image in turn with the image information of each frame of candidate sub-image, and taking the two frames with the smallest image information difference as the first target master image and the first target sub-image.
4. The method according to claim 1, characterized in that, when the obtained image information is multiple types of information,
the obtaining image information differences of the image information and determining the first target master image and the first target sub-image whose image information difference meets a preset condition comprises:
obtaining a weight factor corresponding to each type of information;
comparing each type of image information of the reference master image and of each frame of candidate master image in turn with each type of image information of each frame of candidate sub-image, to obtain the information difference of each type of image information between every two frames;
obtaining the information difference corresponding to every two frames according to the information differences of each type of image information between the two frames and the weight factors corresponding to each type of information, and taking the two frames with the smallest information difference as the first target master image and the first target sub-image.
5. The method according to claim 1, characterized by further comprising:
detecting shooting scene information and/or a shooting mode;
determining the type of the image information to be obtained according to the scene information and/or the shooting mode.
6. A depth of field acquisition device, characterized by comprising:
a first acquisition module, configured to obtain multiple frames of master images shot by a main camera and multiple frames of sub-images shot by a secondary camera;
a second acquisition module, configured to obtain the reference master image with the highest sharpness according to the sharpness of each frame of master image and each frame of sub-image;
a detection module, configured to compare the sharpness of the remaining master images other than the reference master image and the sharpness of each frame of sub-image with the sharpness of the reference master image, and detect whether candidate master images and candidate sub-images meeting a preset screening threshold exist;
a third acquisition module, configured to obtain the image information of the reference master image, each frame of candidate master image, and each frame of candidate sub-image when at least one frame of the candidate master images and at least one frame of the candidate sub-images are detected;
a determining module, configured to determine a first target master image and a first target sub-image;
a fourth acquisition module, configured to obtain depth of field information according to the first target master image and the first target sub-image.
7. The device according to claim 6, characterized in that
the third acquisition module is further configured to, when it is detected that no candidate master image or candidate sub-image exists, take the sub-image, among the multiple frames of sub-images, whose image information difference from the image information of the reference master image meets a preset condition as a second target sub-image;
the fourth acquisition module is further configured to obtain depth of field information according to the reference master image and the second target sub-image.
8. The device according to claim 6, characterized in that, when the obtained image information is one type of information, wherein the type of information comprises image brightness, image white balance value, or image resolution,
the second acquisition module is specifically configured to:
compare the image information of the reference master image and of each frame of candidate master image in turn with the image information of each frame of candidate sub-image, and take the two frames with the smallest image information difference as the first target master image and the first target sub-image.
9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein when the processor executes the program, the depth of field acquisition method according to any one of claims 1-5 is implemented.
10. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the depth of field acquisition method according to any one of claims 1-5 is implemented.
CN201711243742.7A 2017-11-30 2017-11-30 Depth of field acquisition method, device and equipment Active CN108053438B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711243742.7A CN108053438B (en) 2017-11-30 2017-11-30 Depth of field acquisition method, device and equipment
PCT/CN2018/116474 WO2019105260A1 (en) 2017-11-30 2018-11-20 Depth of field obtaining method, apparatus and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711243742.7A CN108053438B (en) 2017-11-30 2017-11-30 Depth of field acquisition method, device and equipment

Publications (2)

Publication Number Publication Date
CN108053438A true CN108053438A (en) 2018-05-18
CN108053438B CN108053438B (en) 2020-03-06

Family

ID=62121752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711243742.7A Active CN108053438B (en) 2017-11-30 2017-11-30 Depth of field acquisition method, device and equipment

Country Status (2)

Country Link
CN (1) CN108053438B (en)
WO (1) WO2019105260A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900766A (en) * 2018-06-15 2018-11-27 北京华捷艾米科技有限公司 A kind of panorama camera of the automatic enhancement device of panoramic picture and method and the application device
CN109754439A (en) * 2019-01-17 2019-05-14 Oppo广东移动通信有限公司 Scaling method, device, electronic equipment and medium
WO2019105260A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Depth of field obtaining method, apparatus and device
CN110310515A (en) * 2019-04-23 2019-10-08 绿桥(泰州)生态修复有限公司 Field data Recognition feedback system
CN115829911A (en) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 Method, apparatus and computer storage medium for detecting imaging consistency of a system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040080661A1 (en) * 2000-12-22 2004-04-29 Sven-Ake Afsenius Camera that combines the best focused parts from different exposures to an image
CN105957053A (en) * 2016-04-19 2016-09-21 深圳创维-Rgb电子有限公司 Two-dimensional image depth-of-field generating method and two-dimensional image depth-of-field generating device
CN106550184A (en) * 2015-09-18 2017-03-29 中兴通讯股份有限公司 Photo processing method and device
CN106851124A (en) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 Image processing method, processing unit and electronic installation based on the depth of field
CN106954020A (en) * 2017-02-28 2017-07-14 努比亚技术有限公司 A kind of image processing method and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763477B (en) * 2014-02-21 2016-06-08 上海果壳电子有限公司 A kind of dual camera claps back focusing imaging device and method
TWI543615B (en) * 2014-07-17 2016-07-21 華碩電腦股份有限公司 Image processing method and electronic apparatus using the same
CN108053438B (en) * 2017-11-30 2020-03-06 Oppo广东移动通信有限公司 Depth of field acquisition method, device and equipment


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019105260A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Depth of field obtaining method, apparatus and device
CN108900766A (en) * 2018-06-15 2018-11-27 北京华捷艾米科技有限公司 A kind of panorama camera of the automatic enhancement device of panoramic picture and method and the application device
CN109754439A (en) * 2019-01-17 2019-05-14 Oppo广东移动通信有限公司 Scaling method, device, electronic equipment and medium
CN109754439B (en) * 2019-01-17 2023-07-21 Oppo广东移动通信有限公司 Calibration method, calibration device, electronic equipment and medium
CN110310515A (en) * 2019-04-23 2019-10-08 绿桥(泰州)生态修复有限公司 Field data Recognition feedback system
CN110310515B (en) * 2019-04-23 2020-11-03 绍兴越元科技有限公司 On-site information identification feedback system
CN115829911A (en) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 Method, apparatus and computer storage medium for detecting imaging consistency of a system

Also Published As

Publication number Publication date
WO2019105260A1 (en) 2019-06-06
CN108053438B (en) 2020-03-06

Similar Documents

Publication Publication Date Title
JP7003238B2 (en) Image processing methods, devices, and devices
CN107959778B (en) Imaging method and device based on dual camera
JP7015374B2 (en) Methods for image processing using dual cameras and mobile terminals
CN108055452B (en) Image processing method, device and equipment
CN107977940A (en) background blurring processing method, device and equipment
CN108111749B (en) Image processing method and device
CN107835372A (en) Imaging method, device, mobile terminal and storage medium based on dual camera
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108154514B (en) Image processing method, device and equipment
JP6903816B2 (en) Image processing method and equipment
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108024056B (en) Imaging method and device based on dual camera
CN108053438A (en) Depth of field acquisition methods, device and equipment
CN107948520A (en) Image processing method and device
CN107945105A (en) Background blurring processing method, device and equipment
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN108712608A (en) Terminal device image pickup method and device
CN108053363A (en) Background blurring processing method, device and equipment
CN108156369A (en) Image processing method and device
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN108024057A (en) Background blurring processing method, device and equipment
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
CN108052883A (en) User's photographic method, device and equipment
CN107454335A (en) Image processing method, device, computer-readable recording medium and mobile terminal
CN109040598B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant
GR01 Patent grant