CN108282616A - Image processing method and apparatus, storage medium, and electronic device - Google Patents
- Publication number
- CN108282616A CN108282616A CN201810097896.8A CN201810097896A CN108282616A CN 108282616 A CN108282616 A CN 108282616A CN 201810097896 A CN201810097896 A CN 201810097896A CN 108282616 A CN108282616 A CN 108282616A
- Authority
- CN
- China
- Prior art keywords
- image
- group
- frame
- numerical value
- clarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 37
- 238000012545 processing Methods 0.000 claims abstract description 77
- 238000011946 reduction process Methods 0.000 claims abstract description 42
- 230000001360 synchronised effect Effects 0.000 claims abstract description 30
- 238000000034 method Methods 0.000 claims abstract description 18
- 230000009467 reduction Effects 0.000 claims description 39
- 238000010606 normalization Methods 0.000 claims description 32
- 238000004590 computer program Methods 0.000 claims description 11
- 238000013507 mapping Methods 0.000 claims description 7
- 230000008859 change Effects 0.000 claims description 2
- 238000003384 imaging method Methods 0.000 abstract description 19
- 230000000694 effects Effects 0.000 abstract description 12
- 238000010586 diagram Methods 0.000 description 13
- 230000006870 function Effects 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 239000000872 buffer Substances 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 4
- 239000000203 mixture Substances 0.000 description 4
- 210000003128 head Anatomy 0.000 description 3
- 230000006399 behavior Effects 0.000 description 2
- 238000005452 bending Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 239000003795 chemical substances by application Substances 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000003825 pressing Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
This application discloses an image processing method and apparatus, a storage medium, and an electronic device. The method is applied to a terminal that includes at least a first camera module and a second camera module, and comprises: acquiring a first group of images and a second group of images, where the first group of images are images captured by the first camera module and the second group of images are images captured by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being images captured synchronously; obtaining depth-of-field information from the first image and the second image; performing noise reduction on the first image according to the first group of images to obtain a target image; and performing preset processing on the target image according to the depth-of-field information. This embodiment can improve the imaging quality of the image.
Description
Technical field
The application belongs to the technical field of image processing, and in particular relates to an image processing method and apparatus, a storage medium, and an electronic device.
Background technology
With the continuous development of hardware technology, terminals are equipped with increasingly capable hardware. Currently, many terminals carry dual camera modules, which can substantially improve a terminal's photographic performance. For example, a dual camera module combining a color camera with a monochrome camera lets the terminal capture more detail when taking pictures, while a dual module combining two color cameras gives the terminal twice the light intake, and so on.
Summary of the invention
The embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve the imaging quality of an image.
The embodiments of the present application provide an image processing method applied to a terminal that includes at least a first camera module and a second camera module. The method includes:

acquiring a first group of images and a second group of images, where the first group of images are images captured by the first camera module and the second group of images are images captured by the second camera module;

determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being images captured synchronously;

obtaining depth-of-field information from the first image and the second image;

performing noise reduction on the first image according to the first group of images to obtain a target image;

performing preset processing on the target image according to the depth-of-field information.
The embodiments of the present application provide an image processing apparatus applied to a terminal that includes at least a first camera module and a second camera module. The apparatus includes:

a first acquisition module, configured to acquire a first group of images and a second group of images, where the first group of images are images captured by the first camera module and the second group of images are images captured by the second camera module;

a determining module, configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being images captured synchronously;

a second acquisition module, configured to obtain depth-of-field information from the first image and the second image;

a noise reduction module, configured to perform noise reduction on the first image according to the first group of images to obtain a target image;

a processing module, configured to perform preset processing on the target image according to the depth-of-field information.
The embodiments of the present application provide a storage medium storing a computer program that, when executed on a computer, causes the computer to perform the steps of the image processing method provided by the embodiments of the present application.

The embodiments of the present application also provide an electronic device including a memory and a processor, where the processor executes the steps of the image processing method provided by the embodiments of the present application by calling the computer program stored in the memory.
In this embodiment, the terminal can perform noise reduction on the first image using the first group of images, so the resulting target image contains little random noise. Moreover, the terminal obtains depth-of-field information from a first image and a second image that were captured synchronously, so the depth-of-field information obtained is more accurate. With less noise in the target image and more accurate depth-of-field information, the background-blurring effect applied to the target image is better; that is, the target image after background blurring has good imaging quality. Furthermore, since the step of performing multi-frame noise reduction on the first image using the first group of images can run in parallel with the step of obtaining depth-of-field information from the first image and the second image, this embodiment also improves image processing speed and effectively avoids the slow processing otherwise caused by multi-frame noise reduction.
Description of the drawings
The technical solutions and advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, taken in conjunction with the accompanying drawings.
Fig. 1 is the flow diagram of the processing method of image provided by the embodiments of the present application.
Fig. 2 is another flow diagram of the processing method of image provided by the embodiments of the present application.
Fig. 3 to Fig. 5 is the scene and processing flow schematic diagram of the processing method of image provided by the embodiments of the present application.
Fig. 6 is the structural schematic diagram of the processing unit of image provided by the embodiments of the present application.
Fig. 7 is the structural schematic diagram of mobile terminal provided by the embodiments of the present application.
Fig. 8 is the structural schematic diagram of electronic equipment provided by the embodiments of the present application.
Detailed description of the embodiments
Referring to the drawings, where identical reference numerals represent identical components, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the invention and should not be regarded as limiting other specific embodiments not detailed herein.

It can be understood that the execution subject of the embodiments of the present application may be a terminal device such as a smartphone or a tablet computer.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the image processing method provided by the embodiments of the present application. The image processing method may be applied to a terminal. The terminal may be any electronic device equipped with dual camera modules, such as a smartphone or a tablet computer. The flow of the image processing method may include:
In step S101, a first group of images and a second group of images are acquired, where the first group of images are images captured by the first camera module and the second group of images are images captured by the second camera module.

With the continuous development of hardware technology, terminals are equipped with increasingly capable hardware. Currently, many terminals carry dual camera modules, which can substantially improve a terminal's photographic performance. For example, a dual camera module combining a color camera with a monochrome camera lets the terminal capture more detail when taking pictures, while a dual module combining two color cameras gives the terminal twice the light intake, and so on. However, in the related art, the images produced by terminals carrying dual camera modules have poor imaging quality.

In step S101 of the embodiment of the present application, for example, the terminal may first acquire a first group of images captured by the first camera module of its dual camera modules and a second group of images captured by the second camera module. The first group of images contains multiple frames, and so does the second group of images.
In some implementations, after the user opens the camera application and before the shutter button is pressed, the dual camera modules of the terminal continuously capture images synchronously, and the captured images may be stored in buffer queues. The terminal can fetch images from the buffer queues and display them on the terminal screen for the user to preview.

In one implementation, the buffer queue may be a fixed-length queue. For example, the length of the buffer queue is 4 elements; that is, the buffer queue stores the 4 most recently captured frames of a camera module. For example, the first queue corresponding to the first camera module caches its 4 most recently captured frames, and the second queue corresponding to the second camera module caches the 4 frames it most recently captured in synchronization with the first camera module. Moreover, a later captured image overwrites an earlier one. For example, suppose the first queue caches frames A1, A2, A3, A4 in order of capture time. When the first camera module captures image A5, the terminal deletes image A1 from the first queue and inserts image A5, so that the first queue becomes A2, A3, A4, A5.
In one implementation, when the terminal captures images with its dual camera modules, the first camera module and the second camera module capture images synchronously. For example, when the first camera module captures image A1, the second camera module synchronously captures image B1. For example, if the first queue caches frames A1, A2, A3, A4 and the second queue caches frames B1, B2, B3, B4, then A1 and B1 are synchronously captured images, as are A2 and B2, A3 and B3, and A4 and B4.
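The fixed-length cache described above maps naturally onto a double-ended queue with a maximum length; the sketch below (variable and function names are illustrative, not from the patent) replays the A1–A5 example, where appending a fifth frame evicts the oldest.

```python
from collections import deque

# One fixed-length queue per camera module; each holds the 4 most
# recent frames, and a new frame overwrites the oldest one.
first_queue = deque(maxlen=4)
second_queue = deque(maxlen=4)

def capture(frame_a, frame_b):
    # Synchronously captured frames land at the same index in both
    # queues, so (first_queue[i], second_queue[i]) is a synchronized pair.
    first_queue.append(frame_a)
    second_queue.append(frame_b)

for i in range(1, 6):  # simulate capturing A1..A5 and B1..B5
    capture(f"A{i}", f"B{i}")

# first_queue is now A2, A3, A4, A5 -- A1 was evicted by A5.
```

Using deque(maxlen=4) gives the overwrite-the-oldest behavior for free: no explicit deletion step is needed when the fifth frame arrives.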
Of course, the camera modules of the terminal can also capture images after the user presses the shutter button.

Therefore, in some implementations, the first group of images may contain only images captured by the first camera module before the user presses the shutter button, only images captured by the first camera module after the user presses the shutter button, or images captured by the first camera module both before and after the shutter button is pressed.

For example, before the user presses the shutter button, the first camera module captures the 4 frames A1, A2, A3, A4; after the user presses the shutter button, the first camera module captures the 4 frames A5, A6, A7, A8. The first group of images may then be A1, A2, A3, A4, or A2, A3, A4, A5, or A3, A4, A5, A6, or A5, A6, A7, A8, and so on. In some implementations, the first group of images may be consecutive frames captured by the first camera, or non-consecutive frames, such as A2, A3, A5, A6. The same applies to the second group of images.
In step S102, a first image is determined from the first group of images, and a second image is determined from the second group of images, the first image and the second image being images captured synchronously.

For example, after obtaining the first group of images A1, A2, A3, A4 captured by the first camera module and the second group of images B1, B2, B3, B4 captured by the second camera module, the terminal may determine the first image from A1, A2, A3, A4 and then determine the second image from B1, B2, B3, B4. The second image is the image captured synchronously with the first image.

For example, if the terminal determines A2 from A1, A2, A3, A4 as the first image, the terminal can correspondingly determine B2 as the second image.
In step S103, depth-of-field information is obtained from the first image and the second image.

For example, after determining the first image and the second image, the terminal can obtain depth-of-field information from the first image and the second image. It can be understood that, since the first image and the second image are captured synchronously by the terminal's dual camera modules from different shooting positions (angles), depth-of-field information can be obtained from the first image and the second image.

It should be noted that the depth-of-field information is relative to the focused object in the image; it is the depth-of-field information obtained after the focused object has been determined.
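The description does not spell out how depth is recovered from the two views; the classic two-view triangulation relation depth = f·B/d (focal length times baseline over disparity) is one standard way to do it, sketched below under that assumption with illustrative parameter names.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Two-view triangulation: depth = f * B / d.

    focal_px     -- focal length in pixels (assumed known from calibration)
    baseline_mm  -- distance between the two camera modules
    disparity_px -- horizontal shift of the focused object between views
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or views unmatched")
    return focal_px * baseline_mm / disparity_px
```

For example, with a 1000-pixel focal length, a 12 mm baseline and a 24-pixel disparity, the focused object would lie about 500 mm from the cameras. A real implementation would first rectify the two images and match the focused object between them to measure the disparity.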
In step S104, noise reduction is performed on the first image according to the first group of images to obtain a target image.

For example, when the first group of images contains at least two frames, the terminal can perform noise reduction on the first image according to at least two frames in the first group of images to obtain the target image. For example, the terminal can denoise the first image using the other images in the first group. Say the first image is A2: the terminal can take the first image A2 as the base frame of the noise reduction and denoise A2 according to the three frames A1, A3, A4 in the first group. That is, the terminal can identify and reduce the random noise in the base frame A2 from the three frames A1, A3, A4, thereby obtaining a denoised target image.

In some implementations, an image noise reduction algorithm may also be used to denoise the first image; for example, image noise reduction algorithms may include a wavelet denoising algorithm, a smoothing filter algorithm, and the like.
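The text names wavelet denoising and smoothing filters but leaves the multi-frame step itself open; a minimal sketch, assuming simple per-pixel averaging of the base frame with its neighbor frames (which suppresses zero-mean random noise), might look as follows. The frame layout and function name are illustrative.

```python
def multi_frame_denoise(base, neighbors):
    """Average the base frame with its neighbor frames, pixel by pixel.

    Frames are equal-length lists of pixel intensities; averaging N
    aligned frames reduces zero-mean random noise roughly by sqrt(N).
    """
    frames = [base] + neighbors
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]
```

Taking A2 = [100, 102] as the base frame and A1, A3, A4 as the neighbors, small random deviations in the individual frames cancel out, and the output settles near the true pixel values. A production pipeline would also align the frames before averaging, since the camera may move between captures.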
In step S105, preset processing is performed on the target image according to the depth-of-field information.

For example, after obtaining the target image, the terminal can perform preset processing on the target image according to the acquired depth-of-field information.

In some implementations, the preset processing may be background blurring, image 3D application processing, and the like.

It can be understood that, in this embodiment, the terminal performs noise reduction on the first image according to the first group of images, so the resulting target image has little noise. Moreover, the terminal obtains depth-of-field information from a synchronously captured first image and second image, so the depth-of-field information obtained is more accurate. Therefore, when the terminal performs preset processing on the target image according to the depth-of-field information, the processed image has better imaging quality.
Referring to Fig. 2, Fig. 2 is another schematic flowchart of the image processing method provided by the embodiments of the present application. The flow may include:

In step S201, the terminal acquires a first group of images and a second group of images, where the first group of images are images captured by the first camera module and the second group of images are images captured by the second camera module.

For example, the terminal is equipped with dual camera modules, which include a first camera module and a second camera module. The first camera module and the second camera module can capture images synchronously. The first group of images contains multiple frames, and so does the second group of images. Using the dual camera modules, the terminal can rapidly capture multiple frames of the same subject under the same shooting scene.

As described earlier in this embodiment, for example, the first group of images captured by the first camera module is the 4 frames A1, A2, A3, A4, and the second group of images captured by the second camera module is the 4 frames B1, B2, B3, B4.
In step S202, the terminal obtains the clarity of each frame in the first group of images.

For example, after obtaining the first group of images A1, A2, A3, A4 captured by the first camera module, the terminal can obtain the clarity of images A1, A2, A3, A4.

For example, the clarity of an image takes a value in the range 0 to 100, and a larger value indicates a clearer image. For example, the clarity values of A1, A2, A3, A4 in the first group are 80, 83, 81, 79 in turn.
In step S203, if each frame of the first group of images contains a face, the terminal obtains the value of a preset parameter of each frame in the first group of images, where the value of the preset parameter indicates the eye size of the face in the image.

For example, after obtaining the clarity of images A1, A2, A3, A4, the terminal detects that each frame in the first group contains a face, so the terminal can further obtain the value of the preset parameter of each frame in the first group of images. The value of the preset parameter can be used to indicate the eye size of the face in the image.

In one implementation, the terminal can obtain the eye size of the face in an image through preset algorithms that output a value representing eye size, where a larger value indicates larger eyes.

Alternatively, the terminal can first identify the eye region of the face and the first target pixel count of the area occupied by the eyes, then calculate the ratio of the first target pixel count to the total pixel count of the image; a larger ratio indicates larger eyes. Or the terminal can calculate only the second pixel count occupied by the eyes in the height direction of the image, along with the total pixel count in the height direction, and then calculate the ratio of the second target pixel count to the total pixel count in the height direction; a larger ratio indicates larger eyes.

For example, the value of the preset parameter indicating eye size ranges from 0 to 50, and a larger value indicates larger eyes in the image. For example, the preset parameter values of A1, A2, A3, A4 in the first group are 40, 41, 41, 39 in turn.
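The two ratio-based schemes above are straightforward to express; the sketch below (function names are illustrative) assumes the eye pixels have already been segmented by some face-analysis step not covered here.

```python
def eye_area_ratio(eye_pixels: int, total_pixels: int) -> float:
    # First scheme: eye pixel count over the whole image; a larger
    # ratio means larger eyes relative to the frame.
    return eye_pixels / total_pixels

def eye_height_ratio(eye_height_px: int, image_height_px: int) -> float:
    # Second scheme: only the height direction is considered.
    return eye_height_px / image_height_px
```

Either ratio could then be rescaled onto the 0–50 preset-parameter range mentioned above before being fed into the weighting steps that follow.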
In step S204, the terminal obtains a first weight corresponding to clarity and a second weight corresponding to the preset parameter.

For example, after obtaining the clarity and preset parameter value of each frame in the first group of images, the terminal can obtain the first weight corresponding to clarity and the second weight corresponding to the preset parameter. It can be understood that the first weight and the second weight sum to 1.

In some implementations, the values of the first weight and the second weight can be set according to usage needs. For example, in a scene with high requirements on image clarity, the first weight corresponding to clarity can be set larger and the second weight corresponding to the preset parameter set smaller, e.g. a first weight of 0.7 and a second weight of 0.3, or a first weight of 0.6 and a second weight of 0.4, and so on. When a big-eye image is desired, the first weight corresponding to clarity can be set smaller and the second weight corresponding to the preset parameter set larger, e.g. a first weight of 0.3 and a second weight of 0.7, or a first weight of 0.4 and a second weight of 0.6, and so on.

In other implementations, the terminal can set the sizes of the first weight and the second weight according to the clarity differences between images. For example, if the terminal detects that the clarity differences between the frames of the first group are within a preset threshold range, i.e. the frames differ little in clarity, the terminal can set the first weight corresponding to clarity smaller and the second weight corresponding to the preset parameter larger, e.g. a first weight of 0.4 and a second weight of 0.6, and so on. If the terminal detects that the clarity differences between the frames of the first group are outside the preset threshold range, i.e. the frames differ considerably in clarity, the terminal can set the first weight corresponding to clarity larger and the second weight corresponding to the preset parameter smaller, e.g. a first weight of 0.6 and a second weight of 0.4, and so on.

Of course, in other implementations, the values of the first weight and the second weight can also be set by the user according to shooting needs.
In step S205, the terminal normalizes the clarity and preset parameter value of each frame in the first group of images, obtaining the normalized clarity and normalized preset parameter value of each frame.

In step S206, according to the first weight, the terminal weights the normalized clarity of each frame to obtain the weighted clarity of each frame; and according to the second weight, the terminal weights the normalized preset parameter value of each frame to obtain the weighted preset parameter value of each frame.

In step S207, the terminal obtains, for each frame in the first group of images, the sum of its weighted clarity and weighted preset parameter value.

For example, steps S205, S206 and S207 may proceed as follows:

Suppose the clarity values of A1, A2, A3, A4 in the first group are 80, 83, 81, 79 in turn, their preset parameter values are 40, 41, 41, 39 in turn, the first weight is 0.4, and the second weight is 0.6.

Then, for image A1, the terminal first normalizes its clarity and preset parameter value. For example, clarity 80 normalizes to 0.8 (80/100), and the preset parameter value 40 normalizes to 0.8 (40/50). The terminal then weights the normalized clarity 0.8 by the first weight 0.4, obtaining a weighted clarity of 0.32 (0.4*0.8). Meanwhile, the terminal weights the normalized preset parameter value 0.8 by the second weight 0.6, obtaining a weighted preset parameter value of 0.48 (0.6*0.8). The terminal can then calculate the sum of the weighted clarity 0.32 and the weighted preset parameter value 0.48 of image A1, namely 0.8.
For image A2, the terminal likewise first normalizes the clarity and preset-parameter value. The clarity 83 normalizes to 0.83 (83/100), and the preset-parameter value 41 normalizes to 0.82 (41/50). Weighting the normalized clarity 0.83 by the first weight 0.4 gives a weighted clarity of 0.332 (0.4 × 0.83), and weighting the normalized preset-parameter value 0.82 by the second weight 0.6 gives a weighted preset-parameter value of 0.492 (0.6 × 0.82). The sum for A2 is therefore 0.824 (0.332 + 0.492).
Similarly, for image A3 the terminal obtains a normalized clarity of 0.81 and a normalized preset-parameter value of 0.82. Weighting by the first weight 0.4 gives 0.324, and weighting by the second weight 0.6 gives 0.492, so the sum for A3 is 0.816.
For image A4, the normalized clarity is 0.79 and the normalized preset-parameter value is 0.78. Weighting by the first weight 0.4 gives 0.316, and weighting by the second weight 0.6 gives 0.468, so the sum for A4 is 0.784.
In step S208, the terminal determines the image with the largest sum in the first group of images as the first image, and determines a second image from the second group of images, the first image and the second image being images acquired synchronously.
For example, after obtaining the sum of the weighted clarity and the weighted preset-parameter value for each frame in the first group, the terminal can determine the image with the largest sum as the first image. Since the sums for A1, A2, A3, and A4 are 0.8, 0.824, 0.816, and 0.784 in turn, the terminal can determine A2 in the first group as the first image.
The terminal can then determine the second image from the second group. Because the second image is the image acquired synchronously with the first image, the terminal determines B2 in the second group as the second image.
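As a hedged illustration of steps S205 through S208, the selection logic above can be sketched as follows. The normalization denominators (100 for clarity, 50 for the preset parameter), the weights 0.4 and 0.6, and the function name `select_base_frame` are taken from or invented for this example; they are not specified by the source.

```python
# Sketch of steps S205-S208: normalize, weight, sum, and pick the best frame.
def select_base_frame(clarity, preset, w1=0.4, w2=0.6):
    # Normalize each frame's clarity (out of 100) and preset-parameter
    # value (out of 50), weight them, and sum the two terms per frame.
    scores = [w1 * c / 100 + w2 * p / 50 for c, p in zip(clarity, preset)]
    # The frame with the largest sum becomes the first image (base frame).
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores

best, scores = select_base_frame([80, 83, 81, 79], [40, 41, 41, 39])
print(best)                           # 1 -> frame A2
print([round(s, 3) for s in scores])  # [0.8, 0.824, 0.816, 0.784]
```

With the example values this reproduces the sums computed above and selects A2 as the base frame.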
In step S209, the terminal executes the following steps in parallel: obtaining depth-of-field information from the first image and the second image, and performing noise-reduction processing on the first image according to the first group of images to obtain a target image.
For example, after the first and second images are determined, the terminal can obtain depth-of-field information from them. It will be appreciated that the first and second images are images of the same scene captured by the terminal's dual camera modules from different positions (angles), so depth-of-field information can be derived from them. It should be noted that this depth-of-field information is for the in-focus subject in the image, that is, it is obtained after the in-focus subject has been determined.
Then, the terminal can perform noise-reduction processing on the first image according to the first group of images to obtain the target image. For example, the terminal can denoise the first image according to the other images in the first group. If the first image is A2, the terminal can take A2 as the base frame of the noise-reduction process and denoise it according to the three frames A1, A3, and A4 in the first group. That is, the terminal can identify and reduce the random noise in base frame A2 according to these three frames, obtaining the denoised target image.
The step of obtaining depth-of-field information from the first and second images and the step of denoising the first image according to the first group of images to obtain the target image can be executed in parallel. It should be noted that denoising the first image has no effect on obtaining depth-of-field information from the first and second images, which is why the noise-reduction step and the depth-of-field step can run in parallel.
For example, in one embodiment the terminal can use the central processing unit (CPU) to execute the step of obtaining depth-of-field information from the first and second images, while using the graphics processing unit (GPU) to execute the step of denoising the first image according to the first group of images to obtain the target image.
It will be appreciated that executing these two steps in parallel saves processing time and improves the efficiency of image processing. In some embodiments, obtaining depth-of-field information takes 800 ms and noise reduction takes 400 ms; by processing the two in parallel (for example, through multi-threading), 400 ms of processing time is saved and the terminal's image-capture speed is improved. Moreover, in some embodiments, during the 800 ms in which one thread obtains the depth-of-field information, the other thread can perform not only noise reduction (about 400 ms) but also processing such as face beautification (about 200 ms) and filter effects (about 100 ms), so that by the time the depth-of-field information is ready, more processing of the target image has already been completed, saving still more time and further improving the terminal's image-capture speed.
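As a sketch of this parallel arrangement, the two independent steps can be submitted to separate worker threads. The stub functions and their return values are placeholders for the patent's depth-of-field and noise-reduction computations, and the CPU/GPU division of labor is not modeled here.

```python
# Sketch of step S209: run depth-of-field estimation and multi-frame noise
# reduction concurrently, since neither step depends on the other's output.
from concurrent.futures import ThreadPoolExecutor

def get_depth_info(first_image, second_image):
    # Stand-in for stereo depth estimation from the synchronized pair (~800 ms).
    return {"focus_subject_depth": 1.5}

def denoise(first_image, group):
    # Stand-in for multi-frame noise reduction using the other frames (~400 ms).
    return first_image

with ThreadPoolExecutor(max_workers=2) as pool:
    depth_future = pool.submit(get_depth_info, "A2", "B2")
    target_future = pool.submit(denoise, "A2", ["A1", "A3", "A4"])
    depth_info = depth_future.result()
    target_image = target_future.result()

print(depth_info, target_image)
```

Because the two futures are independent, the overall latency is bounded by the slower step rather than by the sum of both.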
In other embodiments, besides obtaining depth-of-field information from the first image and the second image, when the acquisition interval between two frames is short enough or the differences between the captured frames are small enough, the terminal can also arbitrarily choose one frame from the first group and choose the frame acquired synchronously with it from the second group, then obtain depth-of-field information from these two frames. For example, in this embodiment the first image is A2 and the second image is B2; in other embodiments the terminal could arbitrarily choose a frame from A1, A3, A4, for example A4, then choose B4, the frame acquired synchronously with A4, from the second group, and obtain depth-of-field information from A4 and B4.
In addition, when the acquisition interval between two frames is short enough or the differences between the captured frames are small enough, the terminal can even arbitrarily choose one frame from the first group and one frame from the second group, and obtain depth-of-field information from that pair. For example, the terminal could choose A2 and B3 and obtain depth-of-field information from these two frames.
In one embodiment, the first group of images includes at least two frames, and in S209 the step in which the terminal performs noise-reduction processing on the first image according to the first group of images to obtain the target image may include the following steps:
the terminal aligns all images in the first group;
in the aligned images, the terminal determines multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image;
the terminal obtains the pixel value of each pixel in each group of mutually aligned pixels;
from these pixel values, the terminal obtains the mean pixel value of each group of mutually aligned pixels;
the terminal adjusts the pixel value of each target pixel in the first image to the corresponding mean pixel value, obtaining the target image.
For example, the first group of images includes A1, A2, A3, and A4, with A2 being the first image. The terminal can therefore determine A2 as the base frame of the noise-reduction process, and then use an image-alignment algorithm to align the four frames A1, A2, A3, A4.
After aligning the four frames, the terminal can treat each set of mutually aligned pixels as one associated group, thereby obtaining multiple groups of mutually aligned pixels. Within each group, the terminal can determine the pixel belonging to the first image as the target pixel. The terminal can then obtain the pixel value of each pixel in every group, compute each group's mean pixel value in turn, and adjust the pixel value of each target pixel in the first image to the mean of the group to which that target pixel belongs; the adjusted first image is the target image.
For example, image A1 has a pixel X1, image A2 a pixel X2, image A3 a pixel X3, and image A4 a pixel X4, and the image-alignment algorithm finds that X1, X2, X3, and X4 occupy the same aligned position across the four frames A1, A2, A3, A4; that is, X1, X2, X3, and X4 are aligned. Suppose the pixel value of X1 is 101, that of X2 is 102, that of X3 is 103, and that of X4 is 104; the average of these four values is 102.5. Having obtained the average 102.5, the terminal can adjust the value of pixel X2 in image A2 from 102 to 102.5, thereby denoising pixel X2. Similarly, after the values of all pixels in A2 that have aligned counterparts in A1, A3, and A4 have been adjusted to the corresponding averages, the resulting image is the denoised target image.
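Assuming the frames have already been aligned, the per-pixel averaging described above reduces to a mean over the frame stack. This NumPy sketch reproduces the X1 to X4 example; the one-pixel array shape stands in for full images.

```python
import numpy as np

# Pixel values at one aligned position across the four frames A1..A4;
# in practice `frames` would hold the full aligned images.
frames = np.array([[101.0], [102.0], [103.0], [104.0]])

# Replace the base-frame pixel with the mean over the aligned frames.
denoised = frames.mean(axis=0)
print(denoised[0])  # 102.5, the new value of pixel X2 in base frame A2
```

With full images, the same `mean(axis=0)` call denoises every aligned position of the base frame at once.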
In one embodiment, the terminal can also first determine the sharpest of the four frames A1, A2, A3, A4, assign different weights to the pixel values of the different frames, compute the weighted average of the pixel values, and adjust the pixels of base frame A2 according to that weighted average.
For example, pixel Z2 on image A2 is aligned with pixel Z1 on image A1, pixel Z3 on image A3, and pixel Z4 on image A4, where the pixel value of Z1 is 101, that of Z2 is 102, that of Z3 is 103, and that of Z4 is 104, and A2 is the sharpest of the four frames. When computing the weighted average, the terminal can assign Z2's pixel value a weight of 0.4 and the pixel values of Z1, Z3, and Z4 weights of 0.2 each; the weighted average is then 102.4, where 102.4 = 102 × 0.4 + (101 + 103 + 104) × 0.2. Having obtained the weighted average 102.4, the terminal can adjust Z2's value from 102 to 102.4, reducing the noise of that pixel.
In one embodiment, if for some aligned position the corresponding pixel values on images A1, A2, A3, and A4 differ greatly, the terminal can leave the pixel value at that position in A2 unadjusted. For example, pixel Y2 on image A2 is aligned with pixel Y1 on image A1, pixel Y3 on image A3, and pixel Y4 on image A4, but Y2's value is 100 while Y1's is 20, Y3's is 30, and Y4's is 35; that is, Y2's value is much larger than those of Y1, Y3, and Y4. In this case, the pixel value of Y2 can be left unadjusted.
In one embodiment, if the four frames A1, A2, A3, A4 cannot be aligned, the terminal can refrain from adjusting the pixel values of base frame A2 and use A2 directly as the target image for the subsequent background-blurring processing.
In step S210, the terminal performs background-blurring processing on the target image according to the depth-of-field information.
For example, after obtaining the target image, the terminal can blur the background of the target image according to the depth-of-field information obtained earlier.
It should be noted that in this embodiment the terminal can perform multi-frame noise reduction on the first image using the first group of images, so the resulting target image contains little random noise. Moreover, the terminal can obtain depth-of-field information from the synchronously acquired first and second images, so the depth-of-field information obtained is more accurate. With less noise in the target image and more accurate depth-of-field information, the background-blurring effect of this embodiment on the target image is also better; that is, the target image after background blurring has a good imaging effect.
Furthermore, since the step of multi-frame noise reduction using the first group of images can be executed in parallel with the step of obtaining depth-of-field information from the first and second images, this embodiment can also improve image-processing speed and effectively avoid the slowdown that multi-frame noise reduction would otherwise cause.
In one embodiment, in S209 the step in which the terminal performs noise-reduction processing on the first image according to the first group of images to obtain the target image may include the following steps:
the terminal performs noise-reduction processing on the first image according to the first group of images, obtaining a denoised image;
the terminal performs tone-mapping processing on the denoised image, obtaining the target image.
For example, if the first image is A2, the terminal can take A2 as the base frame of the noise-reduction process, identify and reduce the random noise in base frame A2 according to the three frames A1, A3, A4, and obtain the denoised image.
The terminal can then perform tone-mapping processing on the denoised image to obtain the target image.
It will be appreciated that applying tone mapping to the denoised image can improve the image's contrast, so that the target image has a higher dynamic range and a better imaging effect.
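The patent does not specify a tone-mapping operator; as one hedged illustration, a simple global Reinhard-style curve can compress highlights after denoising. The operator choice, the `white` constant, and the renormalization step are assumptions for this sketch, not taken from the source, and real pipelines often use local operators instead.

```python
# Illustrative global tone mapping (Reinhard-style x / (1 + x)) applied to
# the denoised image before it becomes the target image.
import numpy as np

def tone_map(image, white=255.0):
    x = image.astype(np.float64) / white  # scale to [0, 1]
    mapped = x / (1.0 + x)                # compress highlights
    mapped /= mapped.max()                # renormalize to use the full range
    return (mapped * white).round().astype(np.uint8)

denoised = np.array([[0, 64, 128], [192, 224, 255]], dtype=np.uint8)
print(tone_map(denoised))  # mid-tones are lifted, highlights compressed
```

The curve maps input value v to 510·v/(255+v) after renormalization, which brightens mid-tones relative to a linear response.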
In another embodiment, after the first group of images is obtained, the step in which the terminal determines the first image from the first group may also include the following step:
the terminal obtains the clarity of each frame image in the first group, and determines the image with the highest clarity as the first image.
For example, if the clarity values of A1, A2, A3, and A4 in the first group are 80, 83, 81, and 79 in turn, the terminal can directly determine image A2 as the first image. That is, the terminal can determine the first image from the first group according to the single dimension of clarity.
In another embodiment, after obtaining the first group of images, the terminal may also perform the following steps:
if every frame image of the first group contains a face, the terminal obtains the preset-parameter value of each frame image in the first group, the preset-parameter value indicating the eye size of the face in the image;
the image with the largest preset-parameter value in the first group is determined as the first image.
For example, after obtaining the first group of images, the terminal detects that every frame in the first group contains a face, so the terminal can obtain the preset-parameter value of each frame, where the value indicates the eye size of the face in the image; the terminal can then directly determine the image with the largest preset-parameter value in the first group as the first image.
For example, if the preset-parameter values of A1, A2, A3, and A4 in the first group are 40, 41, 42, and 39 in turn, the terminal can directly determine A3 as the first image. It will be appreciated that A3 is the frame in the first group in which the eyes are largest. That is, the terminal can determine the first image from the first group according to the single dimension of eye size.
In one embodiment, besides determining the first image from the first group according to the dimensions of image clarity and eye size, the terminal can also add the dimension of the smile degree of the face when determining the first image. For example, the terminal can combine image clarity with the smile degree of the face to determine the first image, or combine eye size with the smile degree, or combine image clarity, eye size, and smile degree together to determine the first image, and so on.
In some embodiments, the smile degree of a face can be detected through image recognition of regions such as the teeth and the corners of the mouth. For example, the terminal can identify the mouth-corner region in the image and its curvature; the greater the curvature, the greater the smile degree can be considered to be, and so on.
Please refer to Fig. 3 to Fig. 5, which are schematic diagrams of the scenario and the processing flow of the image processing method provided by the embodiments of the present application.
For example, as shown in Fig. 3, a dual camera module 10 is installed in the terminal, comprising a first camera module 11 and a second camera module 12. For example, the first camera module 11 can be the primary camera and the second camera module 12 the secondary camera. In one embodiment, the two cameras of the dual camera module can be arranged side by side horizontally (as shown in Fig. 3); in another embodiment, they can be arranged side by side vertically. When the terminal acquires images with the dual camera module 10, the first camera module 11 and the second camera module 12 can acquire images synchronously.
For example, the user opens the camera application and prepares to take a photo; the terminal interface then enters the image-preview interface, and the image for the user to preview is displayed on the terminal's screen.
When the terminal acquires images with the dual camera module, the first camera module and the second camera module can acquire images synchronously.
Afterwards, the user taps the shutter button, as shown in Fig. 4. In this embodiment, upon detecting that the user has tapped the shutter button, the terminal can obtain from the buffer queue the 4 frames most recently acquired by the first camera module 11 and the 4 frames most recently acquired by the second camera module 12 before the button was tapped. For example, the 4 most recent frames from the first camera module (the first group of images) are A1, A2, A3, A4 in turn, and the 4 frames most recently acquired synchronously by the second camera module (the second group of images) are B1, B2, B3, B4 in turn. It will be appreciated that A1 and B1 are synchronously acquired images, as are A2 and B2, A3 and B3, and A4 and B4.
Then, the terminal can obtain the clarity and the preset-parameter value of each frame image in the first group, where the preset-parameter value indicates the eye size of the face in the image. For example, clarity ranges from 0 to 100, with larger values indicating a sharper image; the clarity values of A1, A2, A3, and A4 in the first group are 80, 83, 81, and 79 in turn. The preset-parameter value ranges from 0 to 50, with larger values indicating larger eyes; the preset-parameter values of A1, A2, A3, and A4 in the first group are 40, 41, 41, and 39 in turn.
Afterwards, the terminal can obtain the first weight corresponding to clarity and the second weight corresponding to the preset parameter. For example, the first weight is 0.4 and the second weight is 0.6.
Then, for each frame image in the first group, the terminal can normalize its clarity and preset-parameter value, obtaining each frame's normalized clarity and normalized preset-parameter value. The terminal can then weight each frame's normalized clarity by the first weight, obtaining the weighted clarity of each frame, and weight each frame's normalized preset-parameter value by the second weight, obtaining the weighted preset-parameter value of each frame. Finally, the terminal can obtain, for each frame, the sum of its weighted clarity and weighted preset-parameter value.
For example, for image A1, the terminal can first normalize its clarity and preset-parameter value: the clarity 80 normalizes to 0.8 (80/100), and the preset-parameter value 40 normalizes to 0.8 (40/50). The terminal can then weight the normalized clarity 0.8 by the first weight 0.4, giving a weighted clarity of 0.32 (0.4 × 0.8), and weight the normalized preset-parameter value 0.8 by the second weight 0.6, giving a weighted preset-parameter value of 0.48 (0.6 × 0.8). The terminal can then compute the sum of A1's weighted clarity 0.32 and weighted preset-parameter value 0.48, which is 0.8.
Similarly, for image A2 the weighted clarity is 0.332 and the weighted preset-parameter value is 0.492, summing to 0.824; for image A3 the weighted clarity is 0.324 and the weighted preset-parameter value is 0.492, summing to 0.816; for image A4 the weighted clarity is 0.316 and the weighted preset-parameter value is 0.468, summing to 0.784.
Having obtained the sums for the four frames A1, A2, A3, A4, the terminal can determine the image with the largest sum as the first image, which serves as the base frame for noise reduction. It will be appreciated that the first image is the frame in the first group with relatively large eyes and relatively high clarity. For example, since A2's sum is the largest, A2 is determined as the first image. The terminal can then determine image B2, captured by the second camera module, as the second image.
Then, the terminal can use the CPU to obtain depth-of-field information from the first image A2 and the second image B2. Meanwhile, the terminal can use the GPU to perform noise-reduction processing on the first image A2 according to A1, A3, and A4 in the first group, obtaining the denoised A2 image, which is determined as the target image. The terminal's step of computing the depth-of-field information and its step of denoising A2 can be executed in parallel to improve processing speed.
Afterwards, the terminal can perform background-blurring processing on the target image according to the obtained depth-of-field information, thereby obtaining the output image, which the terminal can then store in the photo album. The entire processing flow can be as shown in Fig. 5.
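The complete flow of Figs. 3 to 5 can be summarized in one sketch. Every function here is a hedged placeholder for the corresponding step (frame scoring, stereo depth, multi-frame denoising, bokeh rendering), and the string return values merely trace the data flow rather than model a real camera API.

```python
# Pipeline sketch of Figs. 3-5: buffer the latest frames from both cameras,
# pick the base frame, run depth estimation and denoising in parallel,
# then blur the background of the denoised target image.
from concurrent.futures import ThreadPoolExecutor

def pick_first_image(group, scores):
    # Step S208: the frame with the largest weighted score is the base frame.
    return max(range(len(group)), key=scores.__getitem__)

def get_depth_info(a, b):            # placeholder for stereo depth (CPU)
    return "depth(%s,%s)" % (a, b)

def denoise(base, others):           # placeholder for multi-frame denoise (GPU)
    return "denoised(%s)" % base

def blur_background(target, depth):  # placeholder for bokeh rendering
    return "bokeh(%s|%s)" % (target, depth)

first_group, second_group = ["A1", "A2", "A3", "A4"], ["B1", "B2", "B3", "B4"]
scores = [0.8, 0.824, 0.816, 0.784]               # weighted sums per frame
i = pick_first_image(first_group, scores)          # -> 1, i.e. A2
first, second = first_group[i], second_group[i]    # synchronized pair A2/B2

with ThreadPoolExecutor(max_workers=2) as pool:
    depth = pool.submit(get_depth_info, first, second)
    target = pool.submit(denoise, first, first_group[:i] + first_group[i+1:])
    output = blur_background(target.result(), depth.result())
print(output)  # bokeh(denoised(A2)|depth(A2,B2))
```

The synchronized pairing (A2 with B2) falls out of using the same index into both buffered groups.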
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of the image processing apparatus provided by the embodiments of the present application. The image processing apparatus 300 may include: a first acquisition module 301, a determination module 302, a second acquisition module 303, a noise reduction module 304, and a processing module 305.
The first acquisition module 301 is configured to obtain a first group of images and a second group of images, the first group being images acquired by the first camera module and the second group being images acquired by the second camera module.
For example, the first acquisition module 301 can first obtain the first group of images acquired by the first camera module of the dual camera module in the terminal and the second group of images acquired by the second camera module of the dual camera module. Both the first group and the second group include multiple frame images.
The determination module 302 is configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being images acquired synchronously.
For example, after the first acquisition module 301 obtains the first group of images A1, A2, A3, A4 acquired by the first camera module and the second group of images B1, B2, B3, B4 acquired by the second camera module, the determination module 302 can determine the first image from A1, A2, A3, A4 and then determine the second image from B1, B2, B3, B4, where the second image can be the image acquired synchronously with the first image.
For example, if the determination module 302 determines A2 among A1, A2, A3, A4 as the first image, the terminal can correspondingly determine B2 as the second image.
The second acquisition module 303 is configured to obtain depth-of-field information from the first image and the second image.
For example, after the determination module 302 determines the first and second images, the second acquisition module 303 can obtain depth-of-field information from them. It will be appreciated that since the first and second images are acquired synchronously by the dual camera modules on the terminal from different shooting positions (angles), depth-of-field information can be obtained from the first and second images.
It should be noted that the depth-of-field information is for the in-focus subject in the image, that is, it is obtained after the in-focus subject has been determined.
The noise reduction module 304 is configured to perform noise-reduction processing on the first image according to the first group of images, obtaining the target image.
For example, the noise reduction module 304 can perform noise-reduction processing on the first image according to the first group of images to obtain the target image, for instance by denoising the first image according to the other images in the first group. If the first image is A2, the terminal can take A2 as the base frame of the noise-reduction process and denoise it according to the three frames A1, A3, A4 in the first group. That is, the noise reduction module 304 can identify and reduce the random noise in base frame A2 according to these three frames, obtaining the denoised target image.
The processing module 305 is configured to perform preset processing on the target image according to the depth-of-field information.
For example, after the target image is obtained, the processing module 305 can perform preset processing on it according to the obtained depth-of-field information. In some embodiments, the preset processing can be, for example, background blurring, 3D image applications, and the like.
In one embodiment, the processing module 305 can be configured to perform background-blurring processing on the target image.
In one embodiment, the determination module 302 can be configured to: obtain the clarity of each frame image in the first group; and determine the image with the highest clarity as the first image.
In one embodiment, the determination module 302 can be configured to: if every frame image of the first group contains a face, obtain the preset-parameter value of each frame image in the first group, the preset-parameter value indicating the eye size of the face in the image; and determine the image with the largest preset-parameter value as the first image.
In one embodiment, the determination module 302 can be configured to: obtain the clarity of each frame image in the first group; if every frame image of the first group contains a face, obtain the preset-parameter value of each frame image in the first group, the preset-parameter value indicating the eye size of the face in the image; and determine the first image from the first group according to the clarity and preset-parameter value of each frame image.
In one embodiment, the determination module 302 can be configured to: obtain a first weight corresponding to clarity and a second weight corresponding to the preset parameter; weight the clarity of each frame image by the first weight to obtain each frame's weighted clarity, and weight the preset-parameter value of each frame image by the second weight to obtain each frame's weighted preset-parameter value; obtain, for each frame image, the sum of its weighted clarity and weighted preset-parameter value; and determine the image in the first group with the largest sum as the first image.
In one embodiment, the determination module 302 can be configured to: normalize the clarity and preset-parameter value of each frame image, obtaining each frame's normalized clarity and normalized preset-parameter value; weight each frame's normalized clarity by the first weight to obtain its weighted clarity; and weight each frame's normalized preset-parameter value by the second weight to obtain its weighted preset-parameter value.
In one embodiment, the noise reduction module 304 can be used for:According to first group of image to described
One image carries out noise reduction process, obtains the image after noise reduction;Tone mapping processing is carried out to the image after the noise reduction, is obtained
The target image.
In one embodiment, the first group of images includes at least two frames of images, and the noise reduction module 304 may further be configured to: align all the images in the first group of images; in the aligned images, determine multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image; obtain the pixel value of each pixel in each group of mutually aligned pixels; obtain, according to these pixel values, the mean pixel value of each group of mutually aligned pixels; and adjust the pixel value of the target pixel in the first image to the mean pixel value, to obtain the target image.
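The multi-frame averaging just described can be sketched as follows. This is an illustrative sketch only, assuming the frames are already mutually aligned (real alignment would need image registration, which the text leaves unspecified); `multi_frame_denoise` is a hypothetical helper name, not a function from the patent.

```python
import numpy as np

def multi_frame_denoise(aligned_frames):
    """Treat each (row, col) position as one group of mutually aligned
    pixels, compute the mean pixel value of each group, and write that
    mean back into the first image's pixel positions."""
    stack = np.stack([f.astype(np.float64) for f in aligned_frames])
    mean = stack.mean(axis=0)  # per-group pixel value mean
    # The target pixels of the first image are replaced by the means.
    return np.clip(mean, 0, 255).astype(np.uint8)

# Three already-aligned frames of the same scene with different noise.
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (90, 100, 110)]
denoised = multi_frame_denoise(frames)
```

Averaging across frames suppresses zero-mean sensor noise roughly in proportion to the square root of the number of frames, which is why the whole group of images, rather than the single first image, is used.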
As for the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the related method embodiments and will not be elaborated here.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to execute the steps of the image processing method provided in this embodiment.
An embodiment of the present application further provides an electronic device including a memory and a processor, where the processor executes the steps of the image processing method provided in this embodiment by calling the computer program stored in the memory. For example, the electronic device may be a mobile terminal such as a tablet computer or a smartphone. Referring to Fig. 7, Fig. 7 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present application.
The mobile terminal 400 may include components such as a camera module 401, a memory 402, and a processor 403. Those skilled in the art will appreciate that the mobile terminal structure shown in Fig. 7 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or adopt a different arrangement of components. The camera module 401 may be a dual camera module installed on the mobile terminal, and includes at least a first camera module and a second camera module. When the mobile terminal acquires images using the dual camera module, the first camera module and the second camera module can acquire images synchronously.
The memory 402 may be used to store application programs and data. The application programs stored in the memory 402 contain executable code and may constitute various functional modules. The processor 403 runs the application programs stored in the memory 402 to perform various functional applications and data processing. The processor 403 is the control center of the mobile terminal: it connects the various parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes its data by running or executing the application programs stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the mobile terminal as a whole.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402, to realize the following steps: obtaining a first group of images and a second group of images, the first group of images being images acquired by the first camera module and the second group of images being images acquired by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images; obtaining depth-of-field information according to the first image and the second image; performing noise reduction processing on the first image according to the first group of images, to obtain a target image; and performing preset processing on the target image according to the depth-of-field information.
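A minimal sketch of the five steps above, with hypothetical stand-in callables for the concrete operators (depth computation, denoising, and preset processing are passed in, since the text does not fix their implementations); using the sharpest frame as the synchronization key follows one of the selection rules the embodiments describe.

```python
import numpy as np

def run_pipeline(group1, group2, denoise, get_depth, preset_process):
    """Sketch of the claimed flow: pick a synchronized first/second
    image pair, compute depth-of-field information from the pair,
    denoise the first image within its own group, then apply the
    depth-guided preset processing to the target image."""
    # Pick the sharpest frame of group1 (simple gradient-energy measure).
    sharpness = [np.abs(np.diff(f.astype(float), axis=1)).sum() for f in group1]
    idx = int(np.argmax(sharpness))
    first, second = group1[idx], group2[idx]  # synchronously acquired pair
    depth = get_depth(first, second)
    target = denoise(first, group1)
    return preset_process(target, depth)

group = [np.zeros((2, 2)), np.array([[0.0, 10.0], [0.0, 10.0]])]
out = run_pipeline(group, group,
                   denoise=lambda f, g: f,
                   get_depth=lambda a, b: np.zeros_like(a),
                   preset_process=lambda t, d: t)
```

Because depth computation and denoising take different inputs (the image pair versus the first group), the two middle steps have no data dependency on each other, which is what makes the parallel variant described later possible.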
An embodiment of the present invention further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units that define an image signal processing (ISP) pipeline. The image processing circuit may include at least: a camera, an image signal processor (ISP), a control logic device, an image memory, and a display. The camera may include at least one or more lenses and an image sensor.
The image sensor may include a color filter array (such as a Bayer filter array). The image sensor can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the image signal processor.
The image signal processor can process the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The image signal processor can perform one or more image processing operations on the raw image data and collect statistical information about the image data; the image processing operations may be performed at the same or different bit-depth precision. The raw image data can be stored in the image memory after being processed by the image signal processor, and the image signal processor can also receive image data from the image memory.
The image memory can be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving image data from the image memory, the image signal processor can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory for additional processing before being displayed. The image signal processor can also receive processing data from the image memory and perform image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display for viewing by the user and/or for further processing by a graphics engine or a GPU (Graphics Processing Unit). In addition, the output of the image signal processor can also be sent to the image memory, and the display can read image data from the image memory. In one embodiment, the image memory can be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor can be sent to the control logic device. For example, the statistical data may include image-sensor statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens shading correction.
The control logic device may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines can determine the control parameters of the camera and the ISP control parameters according to the received statistical data. For example, the control parameters of the camera may include flash control parameters, lens control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing).
Referring to Fig. 8, Fig. 8 is a structural schematic diagram of the image processing circuit in this embodiment. As shown in Fig. 8, for ease of explanation, only the aspects of the image processing technique related to the embodiment of the present invention are shown.
The image processing circuit may include: a first camera 510, a second camera 520, a first image signal processor 530, a second image signal processor 540, a control logic device 550, an image memory 560, and a display 570. The first camera 510 may include one or more first lenses 511 and a first image sensor 512; the second camera 520 may include one or more second lenses 521 and a second image sensor 522.
The first image acquired by the first camera 510 is transmitted to the first image signal processor 530 for processing. After processing the first image, the first image signal processor 530 can send the statistical data of the first image (such as image brightness, image contrast, and image color) to the control logic device 550. The control logic device 550 can determine the control parameters of the first camera 510 according to the statistical data, so that the first camera 510 can perform operations such as auto-focus and auto-exposure according to the control parameters. After being processed by the first image signal processor 530, the first image can be stored in the image memory 560, and the first image signal processor 530 can also read and process images stored in the image memory 560. In addition, after being processed by the first image signal processor 530, the first image can be sent directly to the display 570 for display; the display 570 can also read and display images from the image memory 560.
The second image acquired by the second camera 520 is transmitted to the second image signal processor 540 for processing. After processing the second image, the second image signal processor 540 can send the statistical data of the second image (such as image brightness, image contrast, and image color) to the control logic device 550. The control logic device 550 can determine the control parameters of the second camera 520 according to the statistical data, so that the second camera 520 can perform operations such as auto-focus and auto-exposure according to the control parameters. After being processed by the second image signal processor 540, the second image can be stored in the image memory 560, and the second image signal processor 540 can also read and process images stored in the image memory 560. In addition, after being processed by the second image signal processor 540, the second image can be sent directly to the display 570 for display; the display 570 can also read and display images from the image memory 560.
In other embodiments, the first image signal processor and the second image signal processor may also be combined into a unified image signal processor that processes the data of the first image sensor and the second image sensor respectively.
In addition, although not shown in the figure, the electronic device may also include a CPU and a power supply module. The CPU is connected to the control logic device, the first image signal processor, the second image signal processor, the image memory, and the display, and implements global control. The power supply module supplies power to the various modules.
In general, for a mobile phone with a dual camera module, both cameras of the dual camera module work under certain shooting modes. In this case, the CPU controls the power supply module to power the first camera and the second camera; the image sensor in the first camera and the image sensor in the second camera are powered on, so that image acquisition and conversion can be realized. Under other shooting modes, only one camera of the dual camera module may work, for example only the telephoto camera. In this case, the CPU controls the power supply module to power the image sensor of the corresponding camera. In the embodiments of the present application, because depth-of-field calculation and blurring processing are to be performed, the two camera modules need to work simultaneously.
In addition, the mounting distance of the dual camera module in the terminal can be determined according to the size of the terminal and the desired shooting effect. In some embodiments, in order to achieve a high degree of overlap between the subjects captured by the first camera module and the second camera module, the two camera modules may be mounted as close together as possible, for example within 10 mm.
The following are the steps of realizing the image processing method provided in this embodiment with the image processing circuit of Fig. 8:
Obtaining a first group of images and a second group of images, the first group of images being images acquired by the first camera module and the second group of images being images acquired by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images; obtaining depth-of-field information according to the first image and the second image; performing noise reduction processing on the first image according to the first group of images, to obtain a target image; and performing preset processing on the target image according to the depth-of-field information.
In one embodiment, when executing the step of performing preset processing on the target image, the electronic device can execute: performing background blurring processing on the target image.
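Background blurring with a depth map can be sketched as below. The threshold, the 3x3 box-blur kernel, and the convention that larger depth values mean farther pixels are all assumptions made for illustration; the text only states that the target image is blurred according to the depth-of-field information.

```python
import numpy as np

def background_blur(image, depth, threshold):
    """Blur pixels whose depth exceeds the threshold (background)
    with a 3x3 box filter; keep nearer pixels (the subject) sharp."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur built from the nine shifted copies of the image.
    blurred = sum(padded[r:r + h, c:c + w]
                  for r in range(3) for c in range(3)) / 9.0
    return np.where(depth > threshold, blurred, img)

image = np.zeros((4, 4))
image[1, 3] = 90.0                 # a bright spot in the background half
depth = np.zeros((4, 4))
depth[:, 2:] = 5.0                 # right half is "far" (background)
result = background_blur(image, depth, threshold=1.0)
```

Only the background region is smoothed; the subject region passes through unchanged, which is the visual effect the embodiment aims at.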
In one embodiment, the first group of images includes at least two frames of images, and when executing the step of determining the first image from the first group of images, the electronic device can execute: obtaining the clarity of each frame image in the first group of images; and determining the image with the highest clarity among the frame images as the first image.
In one embodiment, the first group of images includes at least two frames of images, and when executing the step of determining the first image from the first group of images, the electronic device can execute: if each frame image of the first group of images includes a face, obtaining the value of the preset parameter of each frame image in the first group of images, the value of the preset parameter indicating the eye size of the face in the image; and determining the image with the largest preset-parameter value among the frame images as the first image.
In one embodiment, the first group of images includes at least two frames of images, and when executing the step of determining the first image from the first group of images, the electronic device can execute: obtaining the clarity of each frame image in the first group of images; if each frame image of the first group of images includes a face, obtaining the value of the preset parameter of each frame image in the first group of images, the value of the preset parameter indicating the eye size of the face in the image; and determining the first image from the first group of images according to the clarity and the preset-parameter value of each frame image.
In one embodiment, when executing the step of determining the first image from the first group of images according to the clarity and the preset-parameter value of each frame image, the electronic device can execute: obtaining a first weight corresponding to the clarity and a second weight corresponding to the preset parameter; weighting the clarity of each frame image according to the first weight, to obtain the weighted clarity of each frame image, and weighting the preset-parameter value of each frame image according to the second weight, to obtain the weighted preset-parameter value of each frame image; obtaining, for each frame image, the sum of the weighted clarity and the weighted preset-parameter value; and determining the image with the largest sum in the first group of images as the first image.
In one embodiment, when executing the step of weighting the clarity of each frame image according to the first weight to obtain the weighted clarity of each frame image, and weighting the preset-parameter value of each frame image according to the second weight to obtain the weighted preset-parameter value of each frame image, the electronic device can execute: normalizing the clarity and the preset-parameter value of each frame image respectively, to obtain the normalized clarity and the normalized preset-parameter value of each frame image; weighting the normalized clarity of each frame image according to the first weight, to obtain the weighted clarity of each frame image; and weighting the normalized preset-parameter value of each frame image according to the second weight, to obtain the weighted preset-parameter value of each frame image.
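The selection rule of this embodiment (normalize both metrics, weight each, pick the frame with the largest sum) can be sketched as follows; min-max normalization is an assumption, since the text does not fix a normalization formula.

```python
def pick_first_image(clarity, eye_size, first_weight, second_weight):
    """Return the index of the frame whose weighted sum of normalized
    clarity and normalized preset parameter (eye size) is largest."""
    def minmax(values):
        lo, hi = min(values), max(values)
        return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]
    norm_clarity = minmax(clarity)
    norm_eyes = minmax(eye_size)
    scores = [first_weight * c + second_weight * e
              for c, e in zip(norm_clarity, norm_eyes)]
    return max(range(len(scores)), key=scores.__getitem__)

# Frame 1 is sharpest, but frame 2 has the widest-open eyes.
best = pick_first_image([0.2, 0.9, 0.5], [30, 28, 40],
                        first_weight=0.6, second_weight=0.4)
```

With these weights the third frame wins: its lower sharpness is outweighed by the fully open eyes, which is exactly the trade-off the two weights are meant to control.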
In one embodiment, the step of obtaining the depth-of-field information according to the first image and the second image and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel by the electronic device.
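Running depth computation and denoising in parallel, as described here, can be sketched with a thread pool; the stand-in workloads and the choice of `ThreadPoolExecutor` are illustrative assumptions, since the text does not specify the concurrency mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_depth(first, second):
    # Stand-in: per-pixel difference as a disparity-like quantity.
    return [a - b for a, b in zip(first, second)]

def denoise(first, group):
    # Stand-in: replace each pixel by the mean across the group.
    return [sum(col) / len(col) for col in zip(*group)]

first, second = [5, 7, 9], [1, 2, 3]
group = [first, [3, 5, 7], [7, 9, 11]]

# Both steps depend only on the already-selected images, so they can
# be launched concurrently and joined before the preset processing.
with ThreadPoolExecutor(max_workers=2) as pool:
    depth_future = pool.submit(compute_depth, first, second)
    target_future = pool.submit(denoise, first, group)
    depth, target = depth_future.result(), target_future.result()
```

Overlapping the two steps shortens the end-to-end latency, which matters on a mobile terminal where both operations are relatively expensive.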
In one embodiment, when executing the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image, the electronic device can execute: performing noise reduction processing on the first image according to the first group of images, to obtain a denoised image; and performing tone mapping processing on the denoised image, to obtain the target image.
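The second step here, tone mapping the denoised image, can be illustrated with a simple global curve v / (1 + v); the specific tone-mapping operator is an assumption, as the text only names the step.

```python
def tone_map(pixels, max_out=255.0):
    """Apply the global curve v / (1 + v) to intensities normalized
    to [0, 1], then rescale to the output range. The curve compresses
    highlights while leaving dark values nearly linear."""
    mapped = []
    for p in pixels:
        v = p / max_out
        mapped.append(max_out * v / (1.0 + v))
    return mapped

mapped = tone_map([0.0, 63.75, 255.0])
```

A global curve like this is cheap enough to run after multi-frame denoising on a mobile terminal; more elaborate local operators would follow the same interface.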
In one embodiment, the first group of images includes at least two frames of images, and when executing the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image, the electronic device can execute: aligning all the images in the first group of images; in the aligned images, determining multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image; obtaining the pixel value of each pixel in each group of mutually aligned pixels; obtaining, according to these pixel values, the mean pixel value of each group of mutually aligned pixels; and adjusting the pixel value of the target pixel in the first image to the mean pixel value, to obtain the target image.
The above embodiments each emphasize different aspects; for parts not described in detail in a given embodiment, reference may be made to the detailed description of the image processing method above, which will not be repeated here.
The image processing device provided by the embodiments of the present application and the image processing method in the foregoing embodiments belong to the same concept. Any of the methods provided in the image processing method embodiments can be run on the image processing device; for the specific implementation process, refer to the image processing method embodiments, which will not be repeated here.
It should be noted that, for the image processing method of the embodiments of the present application, those of ordinary skill in the art will appreciate that all or part of the flow of the image processing method of the embodiments of the present application can be completed by a computer program controlling relevant hardware. The computer program can be stored in a computer-readable storage medium, for example stored in a memory, and executed by at least one processor, and the execution process may include the flow of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
For the image processing device of the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be realized either in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image processing method, device, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as limiting the present invention.
Claims (13)
1. An image processing method applied to a terminal, characterized in that the terminal includes at least a first camera module and a second camera module, and the method includes:
obtaining a first group of images and a second group of images, the first group of images being images acquired by the first camera module and the second group of images being images acquired by the second camera module;
determining a first image from the first group of images, and determining a second image from the second group of images, the first image and the second image being synchronously acquired images;
obtaining depth-of-field information according to the first image and the second image;
performing noise reduction processing on the first image according to the first group of images, to obtain a target image; and
performing preset processing on the target image according to the depth-of-field information.
2. The image processing method according to claim 1, characterized in that the step of performing preset processing on the target image includes:
performing background blurring processing on the target image.
3. The image processing method according to claim 1 or 2, characterized in that the first group of images includes at least two frames of images;
the step of determining the first image from the first group of images includes:
obtaining the clarity of each frame image in the first group of images; and
determining the image with the highest clarity among the frame images as the first image.
4. The image processing method according to claim 1 or 2, characterized in that the first group of images includes at least two frames of images;
the step of determining the first image from the first group of images includes:
if each frame image of the first group of images includes a face, obtaining the value of the preset parameter of each frame image in the first group of images, the value of the preset parameter indicating the eye size of the face in the image; and
determining the image with the largest preset-parameter value among the frame images as the first image.
5. The image processing method according to claim 1 or 2, characterized in that the first group of images includes at least two frames of images;
the step of determining the first image from the first group of images includes:
obtaining the clarity of each frame image in the first group of images;
if each frame image of the first group of images includes a face, obtaining the value of the preset parameter of each frame image in the first group of images, the value of the preset parameter indicating the eye size of the face in the image; and
determining the first image from the first group of images according to the clarity and the preset-parameter value of each frame image.
6. The image processing method according to claim 5, characterized in that the step of determining the first image from the first group of images according to the clarity and the preset-parameter value of each frame image includes:
obtaining a first weight corresponding to the clarity and a second weight corresponding to the preset parameter;
weighting the clarity of each frame image according to the first weight, to obtain the weighted clarity of each frame image, and weighting the preset-parameter value of each frame image according to the second weight, to obtain the weighted preset-parameter value of each frame image;
obtaining, for each frame image, the sum of the weighted clarity and the weighted preset-parameter value; and
determining the image with the largest sum in the first group of images as the first image.
7. The image processing method according to claim 6, characterized in that the step of weighting the clarity of each frame image according to the first weight to obtain the weighted clarity of each frame image, and weighting the preset-parameter value of each frame image according to the second weight to obtain the weighted preset-parameter value of each frame image includes:
normalizing the clarity and the preset-parameter value of each frame image respectively, to obtain the normalized clarity and the normalized preset-parameter value of each frame image;
weighting the normalized clarity of each frame image according to the first weight, to obtain the weighted clarity of each frame image; and
weighting the normalized preset-parameter value of each frame image according to the second weight, to obtain the weighted preset-parameter value of each frame image.
8. The image processing method according to claim 1 or 2, characterized in that:
the step of obtaining the depth-of-field information according to the first image and the second image and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel.
9. The image processing method according to claim 1, characterized in that the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image includes:
performing noise reduction processing on the first image according to the first group of images, to obtain a denoised image; and
performing tone mapping processing on the denoised image, to obtain the target image.
10. The image processing method according to claim 1, characterized in that the first group of images includes at least two frames of images;
the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image includes:
aligning all the images in the first group of images;
in the aligned images, determining multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image;
obtaining the pixel value of each pixel in each group of mutually aligned pixels;
obtaining, according to these pixel values, the mean pixel value of each group of mutually aligned pixels; and
adjusting the pixel value of the target pixel in the first image to the mean pixel value, to obtain the target image.
11. An image processing device applied to a terminal, characterized in that the terminal includes at least a first camera module and a second camera module, and the device includes:
a first acquisition module, configured to obtain a first group of images and a second group of images, the first group of images being images acquired by the first camera module and the second group of images being images acquired by the second camera module;
a determining module, configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images;
a second acquisition module, configured to obtain depth-of-field information according to the first image and the second image;
a noise reduction module, configured to perform noise reduction processing on the first image according to the first group of images, to obtain a target image; and
a processing module, configured to perform preset processing on the target image according to the depth-of-field information.
12. A storage medium on which a computer program is stored, characterized in that, when the computer program is executed on a computer, the computer is caused to execute the method according to any one of claims 1 to 10.
13. An electronic device including a memory and a processor, characterized in that the processor is configured to execute the method according to any one of claims 1 to 10 by calling the computer program stored in the memory.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097896.8A CN108282616B (en) | 2018-01-31 | 2018-01-31 | Processing method, device, storage medium and the electronic equipment of image |
PCT/CN2018/122872 WO2019148997A1 (en) | 2018-01-31 | 2018-12-21 | Image processing method and device, storage medium, and electronic apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108282616A true CN108282616A (en) | 2018-07-13 |
CN108282616B CN108282616B (en) | 2019-10-25 |
Family
ID=62807210
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810097896.8A Active CN108282616B (en) | 2018-01-31 | 2018-01-31 | Processing method, device, storage medium and the electronic equipment of image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108282616B (en) |
WO (1) | WO2019148997A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109862262A (en) * | 2019-01-02 | 2019-06-07 | 上海闻泰电子科技有限公司 | Image weakening method, device, terminal and storage medium |
WO2019148997A1 (en) * | 2018-01-31 | 2019-08-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium, and electronic apparatus |
CN116701675A (en) * | 2022-02-25 | 2023-09-05 | 荣耀终端有限公司 | Image data processing method and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070189750A1 (en) * | 2006-02-16 | 2007-08-16 | Sony Corporation | Method of and apparatus for simultaneously capturing and generating multiple blurred images |
CN104780313A (en) * | 2015-03-26 | 2015-07-15 | Guangdong OPPO Mobile Communications Co., Ltd. | Image processing method and mobile terminal |
CN105827964A (en) * | 2016-03-24 | 2016-08-03 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal |
CN106878605A (en) * | 2015-12-10 | 2017-06-20 | Beijing Qihoo Technology Co., Ltd. | Electronic-equipment-based image generation method and electronic equipment |
CN107613199A (en) * | 2016-06-02 | 2018-01-19 | Guangdong OPPO Mobile Communications Co., Ltd. | Blurred photograph generation method, device and mobile terminal |
CN107635093A (en) * | 2017-09-18 | 2018-01-26 | Vivo Mobile Communication Co., Ltd. | Image processing method, mobile terminal and computer-readable recording medium |
CN108024054A (en) * | 2017-11-01 | 2018-05-11 | Guangdong OPPO Mobile Communications Co., Ltd. | Image processing method, device and equipment |
CN108055452A (en) * | 2017-11-01 | 2018-05-18 | Guangdong OPPO Mobile Communications Co., Ltd. | Image processing method, device and equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108282616B (en) * | 2018-01-31 | 2019-10-25 | OPPO Guangdong Mobile Communications Co., Ltd. | Image processing method, device, storage medium and electronic equipment |
2018
- 2018-01-31 CN CN201810097896.8A patent/CN108282616B/en active Active
- 2018-12-21 WO PCT/CN2018/122872 patent/WO2019148997A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN108282616B (en) | 2019-10-25 |
WO2019148997A1 (en) | 2019-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11563897B2 (en) | Image processing method and apparatus which determines an image processing mode based on status information of the terminal device and photographing scene information | |
CN105100615B (en) | Image preview method, device and terminal | |
CN110198417A (en) | Image processing method, device, storage medium and electronic equipment | |
EP3067746B1 (en) | Photographing method for dual-camera device and dual-camera device | |
WO2019237992A1 (en) | Photographing method and device, terminal and computer readable storage medium | |
CN107454343B (en) | Photographing method, photographing device and terminal | |
WO2021047345A1 (en) | Image noise reduction method and apparatus, and storage medium and electronic device | |
CN109993722B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN105827935B (en) | Terminal screenshot method and terminal | |
CN107948500A (en) | Image processing method and device | |
KR20150099302A (en) | Electronic device and control method of the same | |
CN108055452A (en) | Image processing method, device and equipment | |
CN108024054A (en) | Image processing method, device and equipment | |
CN110213502A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108520493A (en) | Image replacement processing method, device, storage medium and electronic equipment | |
CN110445986A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110012227B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN108282616B (en) | Image processing method, device, storage medium and electronic equipment | |
CN110198418A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110266954A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110198419A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108401110A (en) | Image acquisition method, device, storage medium and electronic equipment | |
CN108574803B (en) | Image selection method and device, storage medium and electronic equipment | |
CN110290325A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108520036B (en) | Image selection method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
|
GR01 | Patent grant | ||