CN110324532A - Image blurring method and apparatus, storage medium, and electronic device - Google Patents
Image blurring method and apparatus, storage medium, and electronic device
- Publication number
- CN110324532A (Application CN201910603273.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- view information
- target
- frame number
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/958—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
- H04N23/959—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the present application disclose an image blurring method and apparatus, a storage medium, and an electronic device. The method includes: obtaining a first image of a target area captured by the main camera of a terminal's dual-camera module, and obtaining a second image of the target area captured at the same moment by the secondary camera of the dual-camera module; obtaining initial depth-of-field information for the first image based on the first image and the second image; adjusting the focus distance of the secondary camera based on the initial depth-of-field information, and obtaining multiple third images of the target area captured by the secondary camera at the respective adjusted focus distances; and obtaining target depth-of-field information for the first image based on the multiple third images, and blurring the first image based on the target depth-of-field information. The embodiments of the present application can therefore improve the accuracy of the depth-of-field computation and, in turn, the accuracy of image blurring.
Description
Technical field
This application relates to the field of computer technology, and in particular to an image blurring method and apparatus, a storage medium, and an electronic device.
Background technique
With the development of mobile devices and image-capture technology, the blurring (bokeh) of images captured with dual cameras has attracted wide attention. Dual-camera blurring uses two cameras to produce a blurred photograph: one camera is responsible for imaging, while the other is used to compute the depth of field, i.e., the distance between the lens and each pixel unit or region in the captured image. Software post-processing then applies blur according to those distances.
In current mainstream dual-camera systems, the main and secondary cameras each output one image, the depth of field is computed from the phase difference between the two images, and blurring is applied once the depth-of-field information is obtained. However, because the images output by the two cameras are 2D and have already lost some depth information, the computed depth-of-field information lacks precision, which in turn reduces the accuracy of the image blurring performed according to it.
Summary of the invention
The embodiments of the present application provide an image blurring method and apparatus, a storage medium, and an electronic device, which can improve the accuracy of the depth-of-field computation and, in turn, the accuracy of image blurring. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides an image blurring method, the method comprising:
obtaining a first image of a target area captured by the main camera of a terminal's dual-camera module, and obtaining a second image of the target area captured at the same moment by the secondary camera of the dual-camera module;
obtaining initial depth-of-field information for the first image based on the first image and the second image;
adjusting the focus distance of the secondary camera based on the initial depth-of-field information, and obtaining multiple third images of the target area captured by the secondary camera at the respective adjusted focus distances; and
obtaining target depth-of-field information for the first image based on the multiple third images, and blurring the first image based on the target depth-of-field information.
In a second aspect, an embodiment of the present application provides an image blurring apparatus, the apparatus comprising:
an image acquisition module, configured to obtain a first image of a target area captured by the main camera of a terminal's dual-camera module, and to obtain a second image of the target area captured at the same moment by the secondary camera of the dual-camera module;
an initial depth-of-field module, configured to obtain initial depth-of-field information for the first image based on the first image and the second image;
a focus adjustment module, configured to adjust the focus distance of the secondary camera based on the initial depth-of-field information, and to obtain multiple third images of the target area captured by the secondary camera at the respective adjusted focus distances; and
an image blurring module, configured to obtain target depth-of-field information for the first image based on the multiple third images, and to blur the first image based on the target depth-of-field information.
In a third aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the above method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, the memory storing a computer program adapted to be loaded by the processor to perform the above method steps.
The beneficial effects brought by the technical solutions provided in some embodiments of the present application include at least the following:
In the embodiments of the present application, a first image and a second image of a target area are captured simultaneously by the main and secondary cameras of a terminal's dual-camera module; initial depth-of-field information for the first image is obtained based on the two images; the focus distance of the secondary camera is then adjusted based on the initial depth-of-field information; multiple third images of the target area are captured by the secondary camera at the respective adjusted focus distances; target depth-of-field information for the first image is computed from the third images; and the first image is finally blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first pair of captured images is used to adjust the secondary camera's focus distance, the secondary camera can be controlled to capture multiple images focused near the subject indicated by the initial depth-of-field information, so that more accurate depth-of-field information is computed and the accuracy of the image blurring is improved.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an example implementation scenario provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of an image blurring method provided by an embodiment of the present application;
Fig. 3 is a schematic example of a depth image provided by an embodiment of the present application;
Figs. 4a-4c are schematic examples of focus-distance settings provided by an embodiment of the present application;
Fig. 5 is a schematic example of a focus-distance setting provided by an embodiment of the present application;
Fig. 6 is a schematic flowchart of an image blurring method provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of computing initial depth-of-field information from a phase difference, provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of depth-of-field distances provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of the capture of multiple third images provided by an embodiment of the present application;
Fig. 10a is a schematic diagram of actual object-distance computation provided by an embodiment of the present application;
Fig. 10b is a schematic diagram of actual object-distance computation provided by an embodiment of the present application;
Fig. 10c is a schematic diagram of actual object-distance computation provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an image blurring apparatus provided by an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an image blurring apparatus provided by an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a focus adjustment module provided by an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a focus adjustment module provided by an embodiment of the present application;
Fig. 15 is a schematic structural diagram of an image blurring module provided by an embodiment of the present application;
Fig. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Specific embodiment
To make the purposes, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
In the description of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and shall not be interpreted as indicating or implying relative importance. For those of ordinary skill in the art, the specific meanings of these terms in the present application can be understood according to the specific context. In addition, unless otherwise indicated, "multiple" in the description of the present application means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
Referring to Fig. 1, which is a schematic diagram of an implementation scenario provided by an embodiment of the present application, a user uses a terminal 100 equipped with a main camera 1000 and a secondary camera 2000 to photograph a target area 200.
The terminal 100 includes, but is not limited to: a tablet computer, a PC, a handheld device, an in-vehicle device, a wearable device, a computing device, or another processing device connected to a wireless modem. In different networks a terminal may be called by different names, such as: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote station, remote terminal, mobile device, user terminal, wireless communication device, user agent or user apparatus, cellular phone, cordless phone, personal digital assistant (PDA), or a terminal device in a 5G network or a future evolved network.
As shown in Fig. 1, when the terminal 100 receives a shooting instruction triggered by the user, a first image is captured by the main camera 1000 and a second image is captured by the secondary camera 2000.
At this point, the two cameras shoot with the same exposure time and focus distance, which may be set by the system or by the user.
After shooting is completed, the terminal 100 computes the initial depth-of-field information of the first image or the second image, i.e., the actual distance between each object (such as a person or scenery) in the image and the camera, from image information relating the first and second images (such as the phase difference) together with the focus information. Because this actual distance is computed from only one image per camera, and the first and second images are 2D images that contain no spatial information, the resulting depth-of-field precision is insufficient. It is, however, sufficient to estimate the distance between the subject and the camera and the span of the depth-of-field distribution across the whole image.
The terminal then adjusts the focus distance of the secondary camera 2000 based on the initial depth-of-field information, and at the same time adjusts the exposure time of the secondary camera 2000; it then controls the secondary camera 2000 to capture, with the adjusted exposure time, multiple third images of the same target area at the respective adjusted focus distances.
When the focusing speed of the secondary camera is slow, the initial depth-of-field information includes the target focus distance of the first image. The terminal 100 obtains the target focus distance multiplied by a first preset multiple and by a second preset multiple, and inserts the focus distances of a preset number of frames between these two values, the preset number being the second shooting frame count minus 2. The focus distance at the first preset multiple, the inserted focus distances, and the focus distance at the second preset multiple then serve, in order, as the adjusted focus distances of the secondary camera. The motor of the secondary camera is then moved to each of these focus distances in turn (so that the secondary camera shoots near the subject), capturing one third image per focus distance, thereby obtaining multiple third images. The specific frame count is N * t / t0, where N is the number of frames of the first image captured by the main camera, t is the main camera's exposure time, and t0 is the secondary camera's adjusted exposure time.
When the focusing speed of the secondary camera is fast, the initial depth-of-field information includes the maximum and minimum depth-of-field distances. The terminal 100 divides the depth image between the maximum and minimum depth-of-field distances into a preset number of segments, the preset number being equal to the second shooting frame count, and takes the distance from the secondary camera to the midpoint of each segment as the adjusted focus distances of the secondary camera. The motor of the secondary camera is then moved to each of these focus distances in turn, capturing one third image per focus distance, thereby obtaining multiple third images. The specific frame count is again N * t / t0, where N is the number of frames of the first image captured by the main camera, t is the main camera's exposure time, and t0 is the secondary camera's adjusted exposure time.
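The frame-count rule above can be sketched in a few lines. This is an illustrative helper only (the function name and units are not from the patent); it assumes both exposure times are given in the same unit and that the quotient is truncated, matching the later example in which 3.5 yields 3 frames:

```python
import math

def third_image_frame_count(n_first_frames: int, main_exposure_t: float,
                            sub_exposure_t0: float) -> int:
    """Number of third images: N * t / t0, truncated to an integer.

    n_first_frames  -- N, frames of the first image shot by the main camera
    main_exposure_t -- t, exposure time of the main camera (per frame)
    sub_exposure_t0 -- t0, adjusted (shorter) exposure time of the sub camera
    """
    return math.floor(n_first_frames * main_exposure_t / sub_exposure_t0)

# One 35 ms main-camera frame with the sub camera at 10 ms: 3.5 -> 3 frames.
print(third_image_frame_count(1, 35, 10))  # -> 3
```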
Finally, the terminal 100 obtains the target depth-of-field information of the first image based on the multiple third images, and blurs the first image based on the target depth-of-field information.
A gradient algorithm may be used to perform edge extraction on each of the third images so as to obtain the image regions of each third image; the sharpness maximum of each region across the third images is then computed, and the target depth-of-field information of the first image is obtained from the sharpness maxima of the regions. This depth-of-field information is more accurate than the initial depth-of-field information, so when the foreground or background region of the first image is subsequently blurred according to the target depth-of-field information, the blurring effect is also more accurate.
In the embodiments of the present application, a first image and a second image of a target area are captured simultaneously by the main and secondary cameras of a terminal's dual-camera module; initial depth-of-field information for the first image is obtained based on the two images; the focus distance of the secondary camera is then adjusted based on the initial depth-of-field information; multiple third images of the target area are captured by the secondary camera at the respective adjusted focus distances; target depth-of-field information for the first image is computed from the third images; and the first image is finally blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first pair of captured images is used to adjust the secondary camera's focus distance, the secondary camera can be controlled to capture multiple images focused near the subject indicated by the initial depth-of-field information, so that more accurate depth-of-field information is computed and the accuracy of the image blurring is improved.
The image blurring method provided by the embodiments of the present application is described in detail below with reference to Figs. 2-10. The method may be implemented by a computer program and run on an image blurring apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as an independent tool; the image blurring apparatus in the embodiments of the present application may be the terminal shown in Fig. 1.
Referring to Fig. 2, which is a schematic flowchart of an image blurring method provided by an embodiment of the present application, the method of the embodiment of the present application may include the following steps:
S101: obtain a first image of a target area captured by the main camera of the terminal's dual-camera module, and obtain a second image of the target area captured at the same moment by the secondary camera of the dual-camera module.
The dual cameras may be front cameras arranged side by side vertically or horizontally, or rear cameras arranged side by side vertically or horizontally. Each camera may be fixed or rotatable.
Currently, dual cameras mainly come in the following three combinations:
1) Monochrome + color: the monochrome camera captures more detail, giving the phone better photographic results.
2) Color + color: the two cameras shoot simultaneously, recording depth-of-field data of the object while doubling the amount of incoming light.
3) Wide-angle + telephoto: this combination has a primary and a secondary camera; the main, wide-angle camera is responsible for imaging, while the secondary, telephoto camera measures the depth of field, enabling optical zoom, which is achieved by altering the lens group structure to change the focal length.
In the embodiments of the present application, the dual-camera module includes one main camera and one secondary camera, and may be of the wide-angle + telephoto combination.
The main and secondary cameras shoot the same target area with the same exposure (AE) time and focus (AF) distance, yielding the first image captured by the main camera and the second image captured by the secondary camera.
It should be noted that the first image captured by the main camera may be one frame or multiple frames. When it is multiple frames, all frames share the same exposure time and focus distance; that is, at the moment the shooting command is issued, the main camera keeps the exposure time t and focal length constant and captures N frames continuously, taking N * t in total. The second image captured by the secondary camera may be understood to include only one frame, which may be shot at the moment the shooting command is issued, i.e., simultaneously with the first frame of the first image. Of course, the second image may also be shot at any moment during the capture of the multi-frame first image.
Optionally, multi-frame noise reduction (MFNR) may be applied to the multiple frames of the first image captured by the main camera, so as to suppress noise and improve the clarity of the first image.
MFNR works as follows: in night or low-light conditions, the camera acquires multiple frames when imaging, identifies the pixels whose noise characteristics differ between frames, and obtains one cleaner, purer night or low-light photograph by weighted synthesis. In general, when a phone shoots a night scene or in a dim environment, it computes and screens the amount and position of noise across multiple frames, replaces each noisy position with the corresponding position from a frame without noise, and by repeated weighting and replacement obtains one clean photograph.
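The core of the weighted-synthesis step can be illustrated with a minimal sketch. Real MFNR pipelines also align frames and reject outliers; this illustration (with hypothetical function names, assuming NumPy and already-aligned frames) only performs the weighted average, which is enough to show why per-frame noise cancels:

```python
import numpy as np

def mfnr_average(frames, weights=None):
    """Blend a list of aligned frames (H x W arrays) into one denoised frame."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    if weights is None:
        weights = np.ones(len(frames)) / len(frames)  # plain average
    weights = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    return (stack * weights).sum(axis=0)

# Example: four noisy observations of a flat gray patch.
rng = np.random.default_rng(0)
clean = np.full((4, 4), 128.0)
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(4)]
out = mfnr_average(noisy)
# The residual noise of the blend is lower than that of the noisiest frame.
```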
S102: obtain the initial depth-of-field information of the first image based on the first image and the second image.
When the first image includes only one frame, the initial depth-of-field information may be computed from that frame and the second image. When the first image includes multiple frames, any frame (such as the first) may be chosen, or the MFNR-processed first image may be used, in combination with the second image, to compute the initial depth-of-field information.
Of course, whether the first image includes one frame or multiple frames, the initial depth-of-field information is computed from the frame of the first image and the frame of the second image captured at the moment the shooting command was issued.
Here, depth-of-field information refers to the distance between each object in the captured first or second image and the camera. Since the first and second images are shot with the same AE and AF, fast image matching can be performed, and the initial depth-of-field information is computed from calibration data and the phase difference of the same object between the two images (the first image and the second image).
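The stereo relation underlying this phase-difference computation can be written down compactly. This is a hedged sketch, not the patent's calibration procedure: for two calibrated cameras with baseline B and focal length f (in pixels), an object that shifts by a disparity of d pixels between the two views lies at depth Z = f * B / d. All names and numbers below are illustrative:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return object depth in metres from a stereo disparity in pixels."""
    if disparity_px <= 0:
        # No measurable shift between views: treat the point as at infinity.
        return float("inf")
    return focal_px * baseline_m / disparity_px

# e.g. f = 1500 px, baseline 1 cm, disparity 10 px  ->  depth 1.5 m
print(depth_from_disparity(1500.0, 0.01, 10.0))
```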
For example, as shown in Fig. 3, the left side is the captured original image and the right side is the initial depth map obtained. From the initial depth map, the subject (the person region) can be segmented, yielding the subject distance (about 1.5 m) and the depth span of the full image (1.5 m to infinity; anything beyond the maximum focal distance, which is 4 m in this figure, is treated as infinity). The depth-of-field distribution of the whole picture is therefore concentrated mainly between 1.5 m and 4 m, with the subject at about 1.5 m, the background largely at infinity, and a gentle transition in between.
At this point the depth map can segment the subject, but the legs at the bottom are close in distance to the ground and easily confused with it, so the depth-of-field information is of relatively poor quality.
S103: adjust the focus distance of the secondary camera based on the initial depth-of-field information, and obtain multiple third images of the target area captured by the secondary camera at the respective adjusted focus distances.
In one feasible implementation, the initial depth-of-field information includes the target focus distance L of the first image. If the focusing speed of the secondary camera is slow, the target focus distance multiplied by a first preset multiple (e.g., 2L) and by a second preset multiple (e.g., 0.5L) is obtained, and the focus distances of a preset number of frames are inserted between the two, so as to obtain N * t / t0 (rounded down; e.g., 3.5 yields 3) focus distances.
The focus distances of the preset number of frames inserted between the first-multiple and second-multiple values may be evenly interpolated, or spaced unevenly (denser toward one end). For example, if 4 frames are inserted between 2L and 0.5L, the corresponding focus distances may be 1.7L, 1.4L, 1.1L, and 0.8L, as shown in Fig. 4a; or 1.9L, 1.7L, 1.4L, and 1L, as shown in Fig. 4b; or 1.5L, 1.1L, 0.8L, and 0.6L, as shown in Fig. 4c.
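The even-interpolation option can be sketched directly; the function name is illustrative only. Inserting k focus distances between the upper bound 2L and the lower bound 0.5L gives a sweep of k + 2 evenly spaced stops, and with k = 4 it reproduces the Fig. 4a values:

```python
def interpolate_focus_distances(target_l: float, k: int,
                                hi_mult: float = 2.0, lo_mult: float = 0.5):
    """Return the k focus distances inserted between hi_mult*L and lo_mult*L
    (endpoints excluded), evenly spaced from the upper bound downward."""
    hi, lo = hi_mult * target_l, lo_mult * target_l
    step = (hi - lo) / (k + 1)
    return [hi - step * (i + 1) for i in range(k)]

# With L = 1 and four inserted frames, the stops are 1.7L, 1.4L, 1.1L, 0.8L.
print(interpolate_focus_distances(1.0, 4))
```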
In another feasible implementation, the initial depth-of-field information includes the maximum and minimum depth-of-field distances. If the focusing speed of the secondary camera is fast, the depth image between the maximum and minimum depth-of-field distances is divided into a preset number of segments, and the distances from the secondary camera to the midpoints of the segments serve as the secondary camera's adjusted focus distances, yielding N * t / t0 focus distances.
For example, as shown in Fig. 5, if the maximum depth-of-field distance is 4 m and the minimum is 1.5 m, the depth map between 1.5 m and 4 m is divided into several segments (e.g., N * t / t0 = 5 segments), and the midpoints of the segments, 1.75 m, 2.25 m, 2.75 m, 3.25 m, and 3.75 m, are taken as the focus distances.
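This fast-focus branch reduces to splitting the depth interval into equal segments and focusing at each midpoint, which can be sketched as follows (the function name is illustrative, not from the patent):

```python
def segment_midpoints(min_depth: float, max_depth: float, n: int):
    """Split [min_depth, max_depth] into n equal segments; return midpoints."""
    width = (max_depth - min_depth) / n
    return [min_depth + width * (i + 0.5) for i in range(n)]

# The Fig. 5 example: 1.5 m to 4 m in five segments.
print(segment_midpoints(1.5, 4.0, 5))  # -> [1.75, 2.25, 2.75, 3.25, 3.75]
```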
The focusing speed is determined by the performance of the secondary camera itself.
Optionally, the overall brightness of the secondary camera also needs to be kept constant while the exposure time is reduced to t0 (i.e., one frame is shot every t0), so that more frames can be obtained.
As for the exposure-time adjustment, outdoors the exposure time is generally reduced to 50% of the original (the exposure time of the first image), while indoors and in dark places it is reduced to 75% of the original. The specific value may be set freely depending on the hardware performance and is not specifically limited here.
Brightness is determined by the incoming light, the hardware sensitivity, and the post-gain, and the incoming light depends on the exposure time and the aperture size. Since the aperture size and hardware sensitivity are fixed, brightness must be maintained by controlling the post-gain together with the exposure time: when the exposure time is reduced, the post-gain must be increased. If the exposure time is 50% of the original, the gain needs to be doubled.
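With aperture and sensitivity fixed, brightness tracks the product exposure_time * gain, so the compensation rule above is a one-liner. This sketch (illustrative names, times in any consistent unit) shows that halving the exposure doubles the required gain:

```python
def compensating_gain(old_gain: float, old_exposure: float,
                      new_exposure: float) -> float:
    """Gain that keeps exposure_time * gain (and hence brightness) constant."""
    return old_gain * old_exposure / new_exposure

# Exposure reduced from 20 to 10 (i.e., to 50%): gain must double.
print(compensating_gain(1.0, 20.0, 10.0))  # -> 2.0
```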
In a specific implementation, after the exposure time of the secondary camera is adjusted, the motor is moved to each of the focus distances in turn and the target area is shot, so as to obtain N * t / t0 third images.
S104 obtains the target depth of view information of the first image based on the multiframe third image, is based on the target
Depth of view information carries out virtualization processing to the first image.
Edge extraction is performed on each frame of the third image using an edge extraction algorithm, each frame of the third image is then segmented according to the extracted edge features, and the sharpness value sharp_i of each region of every frame of the third image is calculated separately. When sharp_i of a certain region in a frame of the third image is at its maximum, that frame is considered to be focused on that region, and the distance between the motor position and the real object corresponding to the region can be obtained from the motor data. In this way the distance information of every region of the whole image can be obtained, i.e. the target depth-of-field information. The depth-of-field information obtained in this way is more accurate, so the foreground or background of the first image can be blurred more accurately according to it.
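As an illustrative sketch of the per-region sharpness selection just described (the region segmentation and the sharpness values are simplified assumptions, not the patent's exact implementation):

```python
def sharpest_frame_per_region(sharpness):
    """sharpness[frame][region] -> index of the frame in which each region
    is sharpest; that frame's focusing distance (known from the motor
    position) is taken as the region's depth."""
    n_regions = len(sharpness[0])
    best = []
    for r in range(n_regions):
        values = [sharpness[f][r] for f in range(len(sharpness))]
        best.append(values.index(max(values)))
    return best

# Three frames, three regions A, B, C (cf. Fig. 9): region A is sharpest
# in frame 0, B in frame 1, C in frame 2.
sharp = [[100, 40, 10],
         [50, 90, 30],
         [20, 35, 80]]
print(sharpest_frame_per_region(sharp))  # [0, 1, 2]
```

Mapping each returned frame index back to its motor position yields the per-region distances that make up the target depth-of-field information.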
It should be noted that the whole image here may be any frame of the third image, or the first image. Since the two cameras shoot the same target area, the image content they contain is the same, i.e. the depth-of-field information of the third image is the same as that of the first image.
The edge extraction algorithm may be an adaptive edge extraction algorithm, a gradient operator, or the like.
Edge extraction has two modes of operation: one is to extract edges directly in the spatial domain; the other is to first apply a transformation to the image and extract edges in the transform domain. Adaptive edge extraction automatically adjusts the processing method, processing order, processing parameters, boundary conditions, or constraints according to the characteristics of the data being processed, adapting itself to the statistical distribution and structural features of that data so as to obtain the best extraction effect.
One gradient operator is the Sobel operator. There are two Sobel kernels: one detects horizontal edges and the other detects vertical edges. The Sobel operator weights pixels according to their position, which can reduce edge blurring.
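The two Sobel kernels mentioned above can be illustrated with a minimal pure-Python convolution (the helper names and the sample patch are ours, not from the disclosure):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def convolve3x3(img, kernel, y, x):
    """Apply a 3x3 kernel centred on pixel (y, x)."""
    return sum(kernel[i][j] * img[y - 1 + i][x - 1 + j]
               for i in range(3) for j in range(3))

# A vertical step edge: SOBEL_X responds strongly, SOBEL_Y not at all.
img = [[0, 0, 255, 255] for _ in range(4)]
gx = convolve3x3(img, SOBEL_X, 1, 1)
gy = convolve3x3(img, SOBEL_Y, 1, 1)
print(gx, gy)  # 1020 0
```

Note the position-dependent weights (the centre row/column weighted by 2), which is the weighting the text refers to.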
In the embodiment of the present application, the first image and the second image, shot simultaneously for the target area by the main camera and the secondary camera of the terminal's dual cameras, are obtained; the initial depth-of-field information of the first image is obtained based on the first image and the second image; the focusing distance of the secondary camera is then adjusted based on the initial depth-of-field information; the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances are obtained; the target depth-of-field information of the first image is calculated based on these frames of the third image; and finally the first image is blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first frames shot by the two cameras is used to adjust the focusing distance of the secondary camera, the secondary camera can be controlled to shoot multiple frames near the subject corresponding to the initial depth-of-field information, more accurate depth-of-field information can be calculated, and the accuracy of image blurring can thereby be improved.
Refer to Fig. 6, which is a flow diagram of an image blurring method provided by an embodiment of the present application. This embodiment is described taking the case where the image blurring method is applied to a user terminal. The image blurring method may comprise the following steps:
S201: obtain the first image shot for the target area by the main camera of the terminal's dual cameras, and obtain the second image shot for the target area at the same moment by the secondary camera of the dual cameras.
For details, refer to S101; they are not repeated here.
S202: obtain the phase difference between the first image and the second image, and calculate the initial depth-of-field information of the first image according to the phase difference.
Specifically, the feature points of the first image and of the second image are detected separately, feature point matching is performed between the first image and the second image, the disparity data (phase difference) of each pair of corresponding feature points in the first image and the second image is obtained from the matching result, and the depth information of each feature point is then obtained from its disparity data together with the baseline spacing and focal length of the dual cameras.
Feature points may be detected with the Speeded Up Robust Features (SURF) algorithm, a descriptor is generated for each feature point, and a set of feature point matches is obtained. The set of matches is then filtered with the RANSAC (Random Sample Consensus) algorithm to obtain a good set of feature point matches.
Further, the disparity data of the corresponding feature points is obtained from the final matching set of the first image and the second image, and the depth information of each feature point is obtained from the disparity data. A schematic diagram of this calculation is shown in Fig. 7, where O and O' are the two cameras. The depth information is calculated as: z = B·f / (xl − xr), where B is the translation distance between the two cameras (a fixed value, i.e. Bl + Br), f is the focal length of the cameras, xl and xr are the distances on the image plane between the projections of a 3D point in the scene and the respective camera centres, xl − xr is the disparity data, and z is the depth information corresponding to the feature point.
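The depth formula above can be sketched directly (the units and sample values are illustrative only):

```python
def depth_from_disparity(baseline, focal_length, xl, xr):
    """z = B*f / (xl - xr): triangulated depth of a feature point
    from its disparity between the two cameras."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point is at infinity")
    return baseline * focal_length / disparity

# Baseline 20 mm, focal length 4 mm, disparity 0.1 mm -> depth of about 800 mm.
print(depth_from_disparity(20.0, 4.0, 0.15, 0.05))
```

A larger disparity thus means a closer feature point, which is why near subjects separate cleanly from the background.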
S203: when the first image comprises multiple frames, obtain the first exposure time and the first shooting frame number corresponding to the multiple frames of the first image.
Assume that the first exposure time corresponding to the first image is t and the shooting frame number is N. The exposure times of the N frames of the first image are all the same.
S204: adjust the first exposure time to a second exposure time, and obtain a second shooting frame number based on the first exposure time, the first shooting frame number, and the second exposure time.
Specifically, the product of the first exposure time and the first shooting frame number is calculated, and the quotient of this product and the second exposure time is taken as the second shooting frame number of the third image.
Here the second exposure time is t0, with t0 < t, so the second shooting frame number is N1 = N*t/t0.
For how the second exposure time t0 is chosen, refer to S103; it is not repeated here.
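A one-line sketch of the S204 computation (names are illustrative):

```python
def second_frame_count(t, n, t0):
    """N1 = N*t/t0: keep the total capture time N*t constant while each
    frame is exposed for t0 instead of t."""
    return n * t / t0

# 4 frames at 40 ms re-budgeted as 20 ms frames -> 8 frames.
print(second_frame_count(40, 4, 20))  # 8.0
```

Since t0 < t, the secondary camera always ends up with more frames than the first image, which is what makes the per-region sharpness search possible.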
S205: adjust the focusing distance of the secondary camera based on the initial depth-of-field information.
With the subject's focus point defined as focusing-motor position 0, positions towards the foreground are −n and positions towards the background are +n. Since the secondary camera needs to shoot multiple frames, the motor needs to be moved in turn to the focusing positions (focusing distances) −n, …, −3, −2, −1, 0, +1, +2, +3, …, +n.
As shown in Fig. 8, the distance from the focus point to the camera (△L1) is the foreground, and the distance beyond the focus point (△L2) is the background.
The focusing positions can be set in two ways, depending on whether the secondary camera focuses quickly or slowly; refer to S103 for details, which are not repeated here.
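The sweep of motor positions described in S205 can be sketched as (an illustrative helper, not from the disclosure):

```python
def motor_positions(n):
    """Focusing positions swept around the subject's focus point (0):
    -n..-1 toward the foreground, +1..+n toward the background."""
    return list(range(-n, n + 1))

print(motor_positions(3))  # [-3, -2, -1, 0, 1, 2, 3]
```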
S206: obtain the third image of the second shooting frame number, shot by the secondary camera for the target area at the second exposure time and at each of the adjusted focusing distances.
Specifically, the exposure time of the secondary camera is adjusted to t0, and the motor of the secondary camera is then moved in turn to each of the calculated focusing distances; a frame of the third image is shot at every position, so that N1 frames of the third image are shot.
For example, as shown in Fig. 9, after the focusing distance of the secondary camera is controlled, third images of different clarity are obtained: in the 1st image region A is the sharpest, in the 2nd image region B is the sharpest, and in the 3rd image region C is the sharpest.
S207: perform edge extraction on each of the multiple frames of the third image using a gradient algorithm to obtain the image regions of each frame.
The edge extraction algorithm may be an adaptive edge extraction algorithm, a gradient operator, or the like. Edge extraction has two modes of operation: one is to extract edges directly in the spatial domain; the other is to first apply a transformation to the image and extract edges in the transform domain. Adaptive edge extraction automatically adjusts the processing method, processing order, processing parameters, boundary conditions, or constraints according to the characteristics of the data being processed, adapting itself to the statistical distribution and structural features of that data so as to obtain the best extraction effect.
One gradient operator is the Sobel operator. There are two Sobel kernels: one detects horizontal edges and the other detects vertical edges. The Sobel operator weights pixels according to their position, which can reduce edge blurring.
Specifically, a gradient algorithm is used to extract edges, and each frame of the third image can thereby be segmented into multiple regions. Since each frame has a different focusing distance, there are certain differences between the regions obtained. For example, for the 3 images in Fig. 9, the regions A, B, and C obtained by segmentation are not identical.
S208: calculate the sharpness value of each region in the multiple frames of the third image, obtain the target depth-of-field information of the first image according to these sharpness values, and perform blurring processing on the first image based on the target depth-of-field information.
The sharpness value sharp_i of each region of every frame of the third image is calculated separately. When sharp_i of a certain region in a frame of the third image is at its maximum, that frame is considered to be focused on that region, and the distance between the motor position and the corresponding real object can be obtained from the motor data; in this way the distance information of every region of the whole image is obtained, i.e. the target depth-of-field information. For example, if the sharpness of A in Fig. 9 is 100 in the 1st image but only 50 in the 2nd and 20 in the 3rd, the 1st image is considered focused on A, and the distance between A and the motor calculated from the 1st image is an accurate real-world distance. In the same way the real-world distances between B, C and the motor can be obtained respectively. Once the distances between each subject, the background, and the motor are obtained, the accurate target depth-of-field information has been obtained. Therefore, the foreground or background of the first image can be blurred more accurately according to this depth-of-field information.
Optionally, a unique object distance can be determined from the images shot at every two different positions, so that the distance of each object in the target area can be determined from the multiple images.
As shown in Fig. 10a, for the same target shooting area, the left figure is shot at close range (i.e. the lens is close to the photographed object) and the right figure is shot at long range; pictures at different distances form images at different positions on the sensor. Therefore, the actual object distance of a unique object can be confirmed from two frames shot at different positions. The corresponding effect picture is shown in Fig. 10b, and the corresponding schematic diagram is shown in Fig. 10c.
In Fig. 10c, the motor focal lengths F1 and F2 are known, and the imaged widths on the sensor P1 and P2 can be calculated (the size of a single pixel is known, and the number of pixels the image occupies is known). With the focus perpendicular to the sensor centre, the intersection of the two hypotenuses is unique, i.e. the black line W is unique; L1 and L2 can then be calculated, and the object distance is exactly L1 + F1 or L2 + F2, where W/L2 = P2/F2, W/L1 = P1/F1, and L1 + F1 = L2 + F2.
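Under the three relations above (W/L1 = P1/F1, W/L2 = P2/F2, L1 + F1 = L2 + F2), W and the object distance can be solved in closed form; the sketch and its sample numbers below are illustrative assumptions:

```python
def object_distance(f1, p1, f2, p2):
    """Solve W/L1 = P1/F1, W/L2 = P2/F2, L1 + F1 = L2 + F2 for the
    object distance D = L1 + F1 (= L2 + F2)."""
    w = (f2 - f1) / (f1 / p1 - f2 / p2)  # unique width W of the object
    l1 = w * f1 / p1
    return l1 + f1

# Self-consistent check: an object of width 100 at distance 500, shot with
# focal lengths 4 and 5, produces imaged widths P1, P2 via the same relations.
W, D, F1, F2 = 100.0, 500.0, 4.0, 5.0
P1 = W * F1 / (D - F1)
P2 = W * F2 / (D - F2)
print(round(object_distance(F1, P1, F2, P2), 6))  # 500.0
```

Substituting L1 = W·F1/P1 and L2 = W·F2/P2 into L1 + F1 = L2 + F2 gives W = (F2 − F1) / (F1/P1 − F2/P2), which is what the function evaluates.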
In the embodiment of the present application, the first image and the second image, shot simultaneously for the target area by the main camera and the secondary camera of the terminal's dual cameras, are obtained; the initial depth-of-field information of the first image is obtained based on the first image and the second image; the focusing distance of the secondary camera is then adjusted based on the initial depth-of-field information; the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances are obtained; the target depth-of-field information of the first image is calculated based on the sharpness values of the regions of these frames; and finally the first image is blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first frames shot by the two cameras is used to adjust the focusing distance of the secondary camera, the secondary camera can be controlled to shoot multiple frames near the subject corresponding to the initial depth-of-field information, more accurate depth-of-field information can be calculated from the sharpness values of these frames, and the accuracy of image blurring can thereby be improved.
The following are apparatus embodiments of the present application, which can be used to carry out the method embodiments of the present application. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present application.
Refer to Fig. 11, which shows a schematic structural diagram of the image blurring apparatus provided by an exemplary embodiment of the present application. The image blurring apparatus can be implemented as all or part of a terminal by software, hardware, or a combination of both. The apparatus 1 comprises an image acquisition module 10, an initial depth-of-field acquisition module 20, a focusing adjustment module 30, and an image blurring module 40.
The image acquisition module 10 is configured to obtain the first image shot for the target area by the main camera of the terminal's dual cameras, and to obtain the second image shot for the target area at the same moment by the secondary camera of the dual cameras.
The initial depth-of-field acquisition module 20 is configured to obtain the initial depth-of-field information of the first image based on the first image and the second image.
The focusing adjustment module 30 is configured to adjust the focusing distance of the secondary camera based on the initial depth-of-field information, and to obtain the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances.
The image blurring module 40 is configured to obtain the target depth-of-field information of the first image based on the multiple frames of the third image, and to perform blurring processing on the first image based on the target depth-of-field information.
Optionally, as shown in Fig. 12, when the first image comprises multiple frames, the apparatus further comprises:
an information acquisition module 50, configured to obtain the first exposure time and the first shooting frame number corresponding to the multiple frames of the first image; and
a frame number acquisition module 60, configured to adjust the first exposure time to the second exposure time and to obtain the second shooting frame number of the third image based on the first exposure time, the first shooting frame number, and the second exposure time.
Optionally, the frame number acquisition module 60 is specifically configured to: calculate the product of the first exposure time and the first shooting frame number, and take the quotient of this product and the second exposure time as the second shooting frame number of the third image.
Optionally, the focusing adjustment module 30 is specifically configured to: obtain the third image of the second shooting frame number shot for the target area by the secondary camera at the second exposure time and at each of the adjusted focusing distances.
Optionally, as shown in Fig. 13, the initial depth-of-field information includes the target focusing distance of the first image, and the focusing adjustment module 30 comprises:
a focusing distance acquisition unit 301, configured to obtain the target focusing distance at a first preset multiple and the target focusing distance at a second preset multiple;
a focusing distance insertion unit 302, configured to insert the focusing distances of a preset number of frames between the target focusing distance at the first preset multiple and the target focusing distance at the second preset multiple, the preset number being the second shooting frame number minus 2; and
a focusing distance setting unit 303, configured to take the target focusing distance at the first preset multiple, the inserted focusing distances of the preset number of frames, and the target focusing distance at the second preset multiple, in order, as the multiple adjusted focusing distances of the secondary camera.
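A minimal sketch of units 301-303 (the even spacing of the inserted distances is our assumption; the disclosure only states that second-shooting-frame-number minus 2 distances are inserted between the two endpoints):

```python
def focusing_distances(target, m1, m2, n1):
    """Build n1 focusing distances: target*m1 first, target*m2 last,
    with n1-2 evenly inserted values in between."""
    lo, hi = target * m1, target * m2
    step = (hi - lo) / (n1 - 1)
    return [lo + step * i for i in range(n1)]

# Target focusing distance 100, preset multiples 0.5 and 1.5, 5 frames.
print(focusing_distances(100, 0.5, 1.5, 5))  # [50.0, 75.0, 100.0, 125.0, 150.0]
```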
Optionally, as shown in Fig. 14, the initial depth-of-field information includes the maximum depth-of-field distance and the minimum depth-of-field distance, and the focusing adjustment module 30 comprises:
an image segmentation unit 304, configured to divide the depth image between the maximum depth-of-field distance and the minimum depth-of-field distance into a preset number of segments, the preset number being equal to the second shooting frame number; and
a focusing distance setting unit 305, configured to take the distances from the secondary camera to the midpoints of the segments as the multiple adjusted focusing distances of the secondary camera.
Optionally, the initial depth-of-field acquisition module 20 is specifically configured to: obtain the phase difference between the first image and the second image, and calculate the initial depth-of-field information of the first image according to the phase difference.
Optionally, as shown in Fig. 15, the image blurring module 40 comprises:
a region acquisition unit 401, configured to perform edge extraction on each of the multiple frames of the third image using a gradient algorithm to obtain the image regions of each frame; and
a target depth-of-field acquisition unit 402, configured to calculate the sharpness value of each region in the multiple frames of the third image, and to obtain the target depth-of-field information of the first image according to these sharpness values.
It should be noted that when the image blurring apparatus provided by the above embodiments performs the image blurring method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image blurring apparatus provided by the above embodiments belongs to the same concept as the embodiments of the image blurring method; for details of its implementation, refer to the method embodiments, which are not repeated here.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
In the embodiment of the present application, the first image and the second image, shot simultaneously for the target area by the main camera and the secondary camera of the terminal's dual cameras, are obtained; the initial depth-of-field information of the first image is obtained based on the first image and the second image; the focusing distance of the secondary camera is then adjusted based on the initial depth-of-field information; the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances are obtained; the target depth-of-field information of the first image is calculated based on the sharpness values of the regions of these frames; and finally the first image is blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first frames shot by the two cameras is used to adjust the focusing distance of the secondary camera, the secondary camera can be controlled to shoot multiple frames near the subject corresponding to the initial depth-of-field information, more accurate depth-of-field information can be calculated from the sharpness values of these frames, and the accuracy of image blurring can thereby be improved.
An embodiment of the present application also provides a computer storage medium. The computer storage medium may store a plurality of instructions suitable for being loaded by a processor to execute the method steps of the embodiments shown in Figs. 2-10 above; for the specific execution process, refer to the description of the embodiments shown in Figs. 2-10, which is not repeated here.
Refer to Fig. 16, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in Fig. 16, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
The communication bus 1002 is used to realize connection and communication between these components.
The user interface 1003 may include a display (Display) and a camera (Camera); optionally, the user interface 1003 may also include standard wired and wireless interfaces.
The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface).
The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the entire electronic device 1000 using various interfaces and lines, and executes the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and by calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1001 may integrate a combination of one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and so on; the GPU is responsible for rendering and drawing the content to be shown on the display; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 1001 and may instead be implemented separately through a single chip.
The memory 1005 may include Random Access Memory (RAM) and may also include Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for realizing the operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for realizing each of the above method embodiments, and so on; the data storage area may store the data involved in each of the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Fig. 16, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an image blurring application.
In the electronic device 1000 shown in Fig. 16, the user interface 1003 is mainly used to provide an input interface for the user and to obtain the data input by the user, while the processor 1001 may be used to call the image blurring application stored in the memory 1005 and specifically perform the following operations:
obtaining the first image shot for the target area by the main camera of the terminal's dual cameras, and obtaining the second image shot for the target area at the same moment by the secondary camera of the dual cameras;
obtaining the initial depth-of-field information of the first image based on the first image and the second image;
adjusting the focusing distance of the secondary camera based on the initial depth-of-field information, and obtaining the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances; and
obtaining the target depth-of-field information of the first image based on the multiple frames of the third image, and performing blurring processing on the first image based on the target depth-of-field information.
In one embodiment, when the first image comprises multiple frames, the processor 1001 also performs the following operations: obtaining the first exposure time and the first shooting frame number corresponding to the multiple frames of the first image; adjusting the first exposure time to the second exposure time; and obtaining the second shooting frame number of the third image based on the first exposure time, the first shooting frame number, and the second exposure time.
In one embodiment, when obtaining the second shooting frame number of the third image based on the first exposure time, the first shooting frame number, and the second exposure time, the processor 1001 specifically performs the following operation: calculating the product of the first exposure time and the first shooting frame number, and taking the quotient of this product and the second exposure time as the second shooting frame number of the third image.
In one embodiment, when obtaining the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances, the processor 1001 specifically performs the following operation: obtaining the third image of the second shooting frame number shot for the target area by the secondary camera at the second exposure time and at each of the adjusted focusing distances.
In one embodiment, the initial depth-of-field information includes the target focusing distance of the first image, and when adjusting the focusing distance of the secondary camera based on the initial depth-of-field information, the processor 1001 specifically performs the following operations: obtaining the target focusing distance at a first preset multiple and the target focusing distance at a second preset multiple; inserting the focusing distances of a preset number of frames between the target focusing distance at the first preset multiple and the target focusing distance at the second preset multiple, the preset number being the second shooting frame number minus 2; and taking the target focusing distance at the first preset multiple, the inserted focusing distances of the preset number of frames, and the target focusing distance at the second preset multiple, in order, as the multiple adjusted focusing distances of the secondary camera.
In one embodiment, the initial depth-of-field information includes the maximum depth-of-field distance and the minimum depth-of-field distance, and when adjusting the focusing distance of the secondary camera based on the initial depth-of-field information, the processor 1001 specifically performs the following operations: dividing the depth image between the maximum depth-of-field distance and the minimum depth-of-field distance into a preset number of segments, the preset number being equal to the second shooting frame number; and taking the distances from the secondary camera to the midpoints of the segments as the multiple adjusted focusing distances of the secondary camera.
In one embodiment, when obtaining the initial depth-of-field information of the first image based on the first image and the second image, the processor 1001 specifically performs the following operation: obtaining the phase difference between the first image and the second image, and calculating the initial depth-of-field information of the first image according to the phase difference.
In one embodiment, when obtaining the target depth-of-field information of the first image based on the multiple frames of the third image, the processor 1001 specifically performs the following operations: performing edge extraction on each of the multiple frames of the third image using a gradient algorithm to obtain the image regions of each frame; and calculating the sharpness value of each region in the multiple frames of the third image, and obtaining the target depth-of-field information of the first image according to these sharpness values.
In the embodiment of the present application, the first image and the second image, shot simultaneously for the target area by the main camera and the secondary camera of the terminal's dual cameras, are obtained; the initial depth-of-field information of the first image is obtained based on the first image and the second image; the focusing distance of the secondary camera is then adjusted based on the initial depth-of-field information; the multiple frames of the third image shot for the target area by the secondary camera at the multiple adjusted focusing distances are obtained; the target depth-of-field information of the first image is calculated based on the sharpness values of the regions of these frames; and finally the first image is blurred according to the target depth-of-field information. Because the initial depth-of-field information obtained from the first frames shot by the two cameras is used to adjust the focusing distance of the secondary camera, the secondary camera can be controlled to shoot multiple frames near the subject corresponding to the initial depth-of-field information, more accurate depth-of-field information can be calculated from the sharpness values of these frames, and the accuracy of image blurring can thereby be improved.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above embodiment methods can be completed by instructing the relevant hardware through a computer program. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
What is disclosed above is only a preferred embodiment of the present application and certainly cannot be used to limit the scope of the claims of the present application; therefore, equivalent variations made according to the claims of the present application still fall within the scope covered by the present application.
Claims (18)
1. An image blurring method, characterized in that the method comprises:
obtaining a first image shot for a target area by a main camera of a terminal's dual cameras, and obtaining a second image shot for the target area at the same moment by a secondary camera of the dual cameras;
obtaining initial depth-of-field information of the first image based on the first image and the second image;
adjusting a focusing distance of the secondary camera based on the initial depth-of-field information, and obtaining multiple frames of a third image shot for the target area by the secondary camera at multiple adjusted focusing distances; and
obtaining target depth-of-field information of the first image based on the multiple frames of the third image, and performing blurring processing on the first image based on the target depth-of-field information.
2. The method according to claim 1, wherein, when the first image comprises multiple frames, the method further comprises:
obtaining a first exposure duration and a first shooting frame number corresponding to the multiple frames of the first image;
adjusting the first exposure duration to a second exposure duration, and obtaining a second shooting frame number of the third image based on the first exposure duration, the first shooting frame number, and the second exposure duration.
3. The method according to claim 2, wherein obtaining the second shooting frame number of the third image based on the first exposure duration, the first shooting frame number, and the second exposure duration comprises:
calculating the product of the first exposure duration and the first shooting frame number, and taking the quotient of the product and the second exposure duration as the second shooting frame number of the third image.
4. The method according to claim 2, wherein obtaining the multiple frames of the third image shot by the secondary camera for the target area at each of the multiple adjusted focal distances comprises:
obtaining, at the second exposure duration and at each of the multiple adjusted focal distances, the third image of the second shooting frame number shot by the secondary camera for the target area.
5. The method according to claim 2, wherein the initial depth-of-field information comprises a target focal distance of the first image, and adjusting the focal distance of the secondary camera based on the initial depth-of-field information comprises:
obtaining the target focal distance of a first preset multiple and the target focal distance of a second preset multiple;
inserting a preset number of focal distances between the target focal distance of the first preset multiple and the target focal distance of the second preset multiple, the preset number being the difference between the second shooting frame number and 2;
taking the target focal distance of the first preset multiple, the inserted preset number of focal distances, and the target focal distance of the second preset multiple, in sequence, as the multiple adjusted focal distances of the secondary camera.
6. The method according to claim 2, wherein the initial depth-of-field information comprises a maximum depth-of-field distance and a minimum depth-of-field distance, and adjusting the focal distance of the secondary camera based on the initial depth-of-field information comprises:
dividing the depth image between the maximum depth-of-field distance and the minimum depth-of-field distance into a predetermined number of segments, the predetermined number being equal to the second shooting frame number;
taking the distance between the secondary camera and the midpoint of each segment of the predetermined number of segments as the multiple adjusted focal distances of the secondary camera.
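The segmentation of claim 6 can be sketched as splitting the depth range into equal segments and focusing at each segment midpoint. Equal-width segments are an assumption; the claim only fixes the segment count:

```python
def segment_midpoint_focal_distances(min_depth, max_depth, second_frame_count):
    # Split [min_depth, max_depth] into second_frame_count equal segments
    # and return the midpoint distance of each segment as a focal distance.
    width = (max_depth - min_depth) / second_frame_count
    return [min_depth + width * (i + 0.5) for i in range(second_frame_count)]
```

For a depth range of 1.0 to 5.0 and four frames, the focal distances are 1.5, 2.5, 3.5 and 4.5.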
7. The method according to claim 1, wherein obtaining the initial depth-of-field information of the first image based on the first image and the second image comprises:
obtaining a phase difference between the first image and the second image, and calculating the initial depth-of-field information of the first image according to the phase difference.
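Claim 7 does not spell out how the phase difference maps to depth; the standard stereo-triangulation relation Z = f·B/d is the usual choice for a dual-camera setup and is shown here purely as an assumed illustration:

```python
def depth_from_phase_difference(disparity_px, focal_length_px, baseline_mm):
    # Standard stereo triangulation: depth Z = f * B / d, where d is the
    # disparity (phase difference in pixels), f the focal length in pixels,
    # and B the baseline between the two cameras.
    return focal_length_px * baseline_mm / disparity_px
```

For example, a 10-pixel disparity with a 1000-pixel focal length and a 12 mm baseline gives a depth of 1200 mm.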
8. The method according to claim 1, wherein obtaining the target depth-of-field information of the first image based on the multiple frames of the third image comprises:
performing edge extraction processing on each of the multiple frames of the third image using a gradient algorithm, to obtain the image regions of the multiple frames of the third image;
calculating a sharpness value of each region in the multiple frames of the third image, and obtaining the target depth-of-field information of the first image according to the sharpness value of each region.
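A minimal sketch of the gradient-based sharpness measure in claim 8, using the gradient magnitude as the edge response and averaging it over a simple grid partition (the grid partition scheme is an assumption; the claim does not define the regions):

```python
import numpy as np

def region_sharpness(gray, grid=4):
    # Edge response per claim 8: gradient magnitude of the grayscale image,
    # averaged over each cell of a grid x grid partition. Higher mean
    # gradient = sharper (better-focused) region.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    return [[mag[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid].mean()
             for j in range(grid)] for i in range(grid)]
```

Comparing these per-region scores across the multi-focus frames identifies, for each region, the focal distance at which it is sharpest, from which the refined depth-of-field information can be derived.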
9. An image blurring device, characterized in that the device comprises:
an image obtaining module, configured to obtain a first image shot by a main camera of a dual camera of a terminal for a target area, and to obtain a second image shot at the same moment by a secondary camera of the dual camera for the target area;
an initial depth-of-field obtaining module, configured to obtain initial depth-of-field information of the first image based on the first image and the second image;
a focal-length adjusting module, configured to adjust a focal distance of the secondary camera based on the initial depth-of-field information, and to obtain multiple frames of a third image shot by the secondary camera for the target area at each of multiple adjusted focal distances;
an image blurring module, configured to obtain target depth-of-field information of the first image based on the multiple frames of the third image, and to perform blurring processing on the first image based on the target depth-of-field information.
10. The device according to claim 9, wherein, when the first image comprises multiple frames, the device further comprises:
an information obtaining module, configured to obtain a first exposure duration and a first shooting frame number corresponding to the multiple frames of the first image;
a frame-number obtaining module, configured to adjust the first exposure duration to a second exposure duration, and to obtain a second shooting frame number of the third image based on the first exposure duration, the first shooting frame number, and the second exposure duration.
11. The device according to claim 10, wherein the frame-number obtaining module is specifically configured to:
calculate the product of the first exposure duration and the first shooting frame number, and take the quotient of the product and the second exposure duration as the second shooting frame number of the third image.
12. The device according to claim 10, wherein the focal-length adjusting module is specifically configured to:
obtain, at the second exposure duration and at each of the multiple adjusted focal distances, the third image of the second shooting frame number shot by the secondary camera for the target area.
13. The device according to claim 10, wherein the initial depth-of-field information comprises a target focal distance of the first image, and the focal-length adjusting module comprises:
a focal-length obtaining unit, configured to obtain the target focal distance of a first preset multiple and the target focal distance of a second preset multiple;
a focal-length inserting unit, configured to insert a preset number of focal distances between the target focal distance of the first preset multiple and the target focal distance of the second preset multiple, the preset number being the difference between the second shooting frame number and 2;
a focal-length setting unit, configured to take the target focal distance of the first preset multiple, the inserted preset number of focal distances, and the target focal distance of the second preset multiple, in sequence, as the multiple adjusted focal distances of the secondary camera.
14. The device according to claim 10, wherein the initial depth-of-field information comprises a maximum depth-of-field distance and a minimum depth-of-field distance, and the focal-length adjusting module comprises:
an image dividing unit, configured to divide the depth image between the maximum depth-of-field distance and the minimum depth-of-field distance into a predetermined number of segments, the predetermined number being equal to the second shooting frame number;
a focal-length setting unit, configured to take the distance between the secondary camera and the midpoint of each segment of the predetermined number of segments as the multiple adjusted focal distances of the secondary camera.
15. The device according to claim 9, wherein the initial depth-of-field obtaining module is specifically configured to:
obtain a phase difference between the first image and the second image, and calculate the initial depth-of-field information of the first image according to the phase difference.
16. The device according to claim 9, wherein the image blurring module comprises:
a region obtaining unit, configured to perform edge extraction processing on each of the multiple frames of the third image using a gradient algorithm, to obtain the image regions of the multiple frames of the third image;
a target depth-of-field obtaining unit, configured to calculate a sharpness value of each region in the multiple frames of the third image, and to obtain the target depth-of-field information of the first image according to the sharpness value of each region.
17. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method steps of any one of claims 1 to 8.
18. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program, the computer program being adapted to be loaded by the processor to execute the method steps of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910603273.8A CN110324532B (en) | 2019-07-05 | 2019-07-05 | Image blurring method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110324532A true CN110324532A (en) | 2019-10-11 |
CN110324532B CN110324532B (en) | 2021-06-18 |
Family
ID=68122756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910603273.8A Active CN110324532B (en) | 2019-07-05 | 2019-07-05 | Image blurring method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110324532B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101764925A (en) * | 2008-12-25 | 2010-06-30 | 华晶科技股份有限公司 | Simulation method for shallow field depth of digital image |
US20140232928A1 (en) * | 2011-10-28 | 2014-08-21 | Fujifilm Corporation | Imaging method |
CN104660900A (en) * | 2013-10-30 | 2015-05-27 | 株式会社摩如富 | Image Processing Device, Image Processing Method And Recording Medium |
CN108107571A (en) * | 2013-10-30 | 2018-06-01 | 株式会社摩如富 | Image processing apparatus and method and non-transitory computer readable recording medium |
CN104796621A (en) * | 2014-01-20 | 2015-07-22 | 奥林巴斯株式会社 | Imaging device and imaging method |
CN105657394A (en) * | 2014-11-14 | 2016-06-08 | 东莞宇龙通信科技有限公司 | Photographing method based on double cameras, photographing device and mobile terminal |
CN108055452A (en) * | 2017-11-01 | 2018-05-18 | 广东欧珀移动通信有限公司 | Image processing method, device and equipment |
CN108419009A (en) * | 2018-02-02 | 2018-08-17 | 成都西纬科技有限公司 | Image definition enhancing method and device |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113570650A (en) * | 2020-04-28 | 2021-10-29 | 合肥美亚光电技术股份有限公司 | Depth of field judgment method and device, electronic equipment and storage medium |
CN113570650B (en) * | 2020-04-28 | 2024-02-02 | 合肥美亚光电技术股份有限公司 | Depth of field judging method, device, electronic equipment and storage medium |
CN113852752A (en) * | 2020-06-28 | 2021-12-28 | 北京小米移动软件有限公司 | Photograph shooting method, photograph shooting device and storage medium |
CN113852752B (en) * | 2020-06-28 | 2024-03-12 | 北京小米移动软件有限公司 | Photo taking method, photo taking device and storage medium |
CN112712731A (en) * | 2020-12-21 | 2021-04-27 | 北京百度网讯科技有限公司 | Image processing method, device and system, road side equipment and cloud control platform |
CN112991245A (en) * | 2021-02-03 | 2021-06-18 | 无锡闻泰信息技术有限公司 | Double-shot blurring processing method and device, electronic equipment and readable storage medium |
CN112991245B (en) * | 2021-02-03 | 2024-01-19 | 无锡闻泰信息技术有限公司 | Dual-shot blurring processing method, device, electronic equipment and readable storage medium |
CN113012211A (en) * | 2021-03-30 | 2021-06-22 | 杭州海康机器人技术有限公司 | Image acquisition method, device, system, computer equipment and storage medium |
CN113473012A (en) * | 2021-06-30 | 2021-10-01 | 维沃移动通信(杭州)有限公司 | Virtualization processing method and device and electronic equipment |
CN113438388A (en) * | 2021-07-06 | 2021-09-24 | Oppo广东移动通信有限公司 | Processing method, camera assembly, electronic device, processing device and medium |
CN115134532A (en) * | 2022-07-26 | 2022-09-30 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110324532B (en) | 2021-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110324532A (en) | An image blurring method, device, storage medium and electronic equipment | |
Wadhwa et al. | Synthetic depth-of-field with a single-camera mobile phone | |
CN108898567B (en) | Image noise reduction method, device and system | |
JP7003238B2 (en) | Image processing methods, devices, and devices | |
CN111641778B (en) | Shooting method, device and equipment | |
CN108322646B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN103945118A (en) | Image blurring method, device and electronic equipment | |
KR101602394B1 (en) | Image Blur Based on 3D Depth Information | |
CN109889724B (en) | Image blurring method and device, electronic equipment and readable storage medium | |
CN111402135A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN109474780B (en) | Method and device for image processing | |
CN107948500A (en) | Image processing method and device | |
US10827107B2 (en) | Photographing method for terminal and terminal | |
CN107409166A (en) | Panning lens automatically generate | |
CN107835372A (en) | Imaging method, device, mobile terminal and storage medium based on dual camera | |
CN107948520A (en) | Image processing method and device | |
CN106233329A (en) | Generation and use of a 3D Radon image | |
CN107959778A (en) | Imaging method and device based on dual camera | |
CN110225330A (en) | System and method for multiple views noise reduction and high dynamic range | |
CN108024058A (en) | Image blurring processing method, device, mobile terminal and storage medium | |
CN110546943B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN110677621B (en) | Camera calling method and device, storage medium and electronic equipment | |
CN113313661A (en) | Image fusion method and device, electronic equipment and computer readable storage medium | |
KR20160140453A (en) | Method for obtaining a refocused image from 4d raw light field data | |
CN110266954A (en) | Image processing method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||