CN108282616B - Image processing method and device, storage medium, and electronic equipment - Google Patents
Image processing method and device, storage medium, and electronic equipment Download PDF Info
- Publication number
- CN108282616B CN108282616B CN201810097896.8A CN201810097896A CN108282616B CN 108282616 B CN108282616 B CN 108282616B CN 201810097896 A CN201810097896 A CN 201810097896A CN 108282616 B CN108282616 B CN 108282616B
- Authority
- CN
- China
- Prior art keywords
- image
- group
- numerical value
- frame
- clarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
This application discloses an image processing method and device, a storage medium, and electronic equipment. The method is applied to a terminal, the terminal including at least a first camera module and a second camera module, and comprises: obtaining a first group of images and a second group of images, the first group of images being acquired by the first camera module and the second group of images being acquired by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images; obtaining depth-of-field information according to the first image and the second image; performing noise reduction on the first image according to the first group of images to obtain a target image; and performing preset processing on the target image according to the depth-of-field information. The embodiments can improve the imaging effect of images.
Description
Technical field
The application belongs to the technical field of image processing, and in particular relates to an image processing method and device, a storage medium, and electronic equipment.
Background technique
With the continuous development of hardware technology, the hardware installed in terminals has become increasingly capable. Currently, many terminals are equipped with dual camera modules. With dual camera modules, a terminal's photographing capability can be improved considerably. For example, a dual camera module composed of a color camera paired with a monochrome camera enables the terminal to capture more detail when taking photos, while a dual camera module composed of two color cameras gives the terminal twice the light intake, and so on.
Summary of the invention
The embodiments of the present application provide an image processing method and device, a storage medium, and electronic equipment, which can improve the imaging effect of images.
The embodiments of the present application provide an image processing method, applied to a terminal, the terminal including at least a first camera module and a second camera module, the method comprising:
obtaining a first group of images and a second group of images, the first group of images being acquired by the first camera module and the second group of images being acquired by the second camera module;
determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images;
obtaining depth-of-field information according to the first image and the second image;
performing noise reduction on the first image according to the first group of images to obtain a target image;
performing preset processing on the target image according to the depth-of-field information.
The embodiments of the present application provide an image processing device, applied to a terminal, the terminal including at least a first camera module and a second camera module, the device comprising:
a first obtaining module, configured to obtain a first group of images and a second group of images, the first group of images being acquired by the first camera module and the second group of images being acquired by the second camera module;
a determining module, configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously acquired images;
a second obtaining module, configured to obtain depth-of-field information according to the first image and the second image;
a noise reduction module, configured to perform noise reduction on the first image according to the first group of images to obtain a target image;
a processing module, configured to perform preset processing on the target image according to the depth-of-field information.
The embodiments of the present application provide a storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to perform the steps of the image processing method provided by the embodiments of the present application.
The embodiments of the present application further provide electronic equipment comprising a memory and a processor, the processor executing the steps of the image processing method provided by the embodiments of the present application by invoking the computer program stored in the memory.
In the embodiments, the terminal can perform noise reduction on the first image using the first group of images, so the resulting target image has little random noise. Moreover, the terminal obtains depth-of-field information from the first image and the second image, which were acquired synchronously, so the depth-of-field information the terminal obtains is more accurate. With less noise in the target image and more accurate depth-of-field information, background blurring applied to the target image produces a better result; that is, the background-blurred target image has a good imaging effect. Furthermore, because the step of performing multi-frame noise reduction on the first image using the first group of images can run in parallel with the step of obtaining depth-of-field information from the first image and the second image, the embodiments can also improve image processing speed and effectively avoid the slowdown that multi-frame noise reduction would otherwise cause.
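Since the noise-reduction step depends only on the first group of images and the depth step only on the synchronously acquired pair, the two can be dispatched concurrently. A minimal sketch with toy per-pixel data, assuming Python's standard thread pool; the averaging "denoise" and absolute-difference "depth" below are illustrative stand-ins, not the patent's actual algorithms:

```python
from concurrent.futures import ThreadPoolExecutor

def multi_frame_denoise(base, others):
    # Stand-in for multi-frame noise reduction: average each pixel of the
    # base frame with the corresponding pixels of the other frames.
    return [sum(px) / len(px) for px in zip(base, *others)]

def estimate_depth(first, second):
    # Stand-in for deriving depth-of-field info from the stereo pair.
    return [abs(a - b) for a, b in zip(first, second)]

def process(first_group, second_image):
    base = first_group[0]
    with ThreadPoolExecutor(max_workers=2) as pool:
        # The two independent steps run in parallel, as the text describes.
        denoise_job = pool.submit(multi_frame_denoise, base, first_group[1:])
        depth_job = pool.submit(estimate_depth, base, second_image)
        return denoise_job.result(), depth_job.result()

target, depth = process([[10, 20], [12, 18], [8, 22]], [9, 21])
print(target)  # [10.0, 20.0]
print(depth)   # [1, 1]
```

Because neither job touches the other's inputs, no locking is needed; the caller simply joins both results.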
Detailed description of the invention
The technical solution of the present invention and its advantages will become apparent from the following detailed description of specific embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the image processing method provided by the embodiments of the present application.
Fig. 2 is another schematic flowchart of the image processing method provided by the embodiments of the present application.
Fig. 3 to Fig. 5 are schematic diagrams of scenes and processing flows of the image processing method provided by the embodiments of the present application.
Fig. 6 is a schematic structural diagram of the image processing device provided by the embodiments of the present application.
Fig. 7 is a schematic structural diagram of the mobile terminal provided by the embodiments of the present application.
Fig. 8 is a schematic structural diagram of the electronic equipment provided by the embodiments of the present application.
Specific embodiment
Referring to the drawings, where identical reference numerals represent identical components, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present invention and should not be regarded as limiting other specific embodiments not detailed herein.
It can be understood that the executing subject of the embodiments of the present application may be a terminal device such as a smartphone or a tablet computer.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the image processing method provided by the embodiments of the present application. The image processing method can be applied to a terminal. The terminal can be any electronic equipment equipped with dual camera modules, such as a smartphone or a tablet computer. The flow of the image processing method may include:
In step S101, a first group of images and a second group of images are obtained; the first group of images is acquired by the first camera module, and the second group of images is acquired by the second camera module.
With the continuous development of hardware technology, the hardware installed in terminals has become increasingly capable. Currently, many terminals are equipped with dual camera modules. With dual camera modules, a terminal's photographing capability can be improved considerably. For example, a dual camera module composed of a color camera paired with a monochrome camera enables the terminal to capture more detail when taking photos, while a dual camera module composed of two color cameras gives the terminal twice the light intake, and so on. However, in the related art, the images produced by terminals equipped with dual camera modules have relatively poor imaging effects.
In step S101 of the embodiments of the present application, for example, the terminal may first obtain the first group of images acquired by the first camera module of its dual camera modules and the second group of images acquired by the second camera module of its dual camera modules. The first group of images contains multiple frames, as does the second group of images.
In some embodiments, after the user opens the camera application and before pressing the shutter button, the dual camera modules of the terminal can continuously and synchronously acquire images, and these acquired images may be stored in buffer queues. The terminal can obtain images from the buffer queues and display them on the terminal screen for the user to preview.
In one embodiment, the buffer queue can be a fixed-length queue. For example, the length of the buffer queue is 4 elements; that is, the buffer queue stores the 4 frames most recently acquired by the camera module. For example, the first queue corresponding to the first camera module caches its 4 most recently acquired frames, and the second queue corresponding to the second camera module caches the 4 frames it most recently acquired in synchronization with the first camera module. Moreover, newly acquired images overwrite the earliest acquired images. For example, suppose frames A1, A2, A3, A4 are cached in the first queue in order of acquisition time; when the first camera module acquires image A5, the terminal deletes image A1 from the first queue and inserts image A5, so that the first queue becomes A2, A3, A4, A5.
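The fixed-length, overwrite-on-insert buffer queue described above behaves like a bounded FIFO. A minimal sketch using Python's `collections.deque`, with the frame labels from the example:

```python
from collections import deque

# A buffer queue of length 4: appending a 5th frame evicts the oldest.
first_queue = deque(maxlen=4)
for frame in ["A1", "A2", "A3", "A4"]:
    first_queue.append(frame)
print(list(first_queue))  # ['A1', 'A2', 'A3', 'A4']

# The first camera module acquires A5: A1 is overwritten automatically.
first_queue.append("A5")
print(list(first_queue))  # ['A2', 'A3', 'A4', 'A5']
```

With `maxlen` set, the deque discards the opposite end on overflow, so no explicit delete-then-insert is needed.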
In one embodiment, when the terminal acquires images using the dual camera modules, the first camera module and the second camera module can acquire images synchronously. For example, when the first camera module acquires image A1, the second camera module can synchronously acquire image B1. For example, if the 4 frames A1, A2, A3, A4 are cached in the first queue and the 4 frames B1, B2, B3, B4 are cached in the second queue, then A1 and B1 are synchronously acquired images, A2 and B2 are synchronously acquired images, A3 and B3 are synchronously acquired images, and A4 and B4 are synchronously acquired images.
Of course, the camera modules of the terminal can also acquire images after the user presses the shutter button.
Therefore, in some embodiments, the first group of images may contain only images acquired by the first camera module before the user presses the shutter button, only images acquired by the first camera module after the user presses the shutter button, or images acquired by the first camera module both before and after the user presses the shutter button.
For example, suppose the first camera module acquires the 4 frames A1, A2, A3, A4 before the user presses the shutter button and the 4 frames A5, A6, A7, A8 after the user presses the shutter button. Then the first group of images can be A1, A2, A3, A4; or A2, A3, A4, A5; or A3, A4, A5, A6; or A5, A6, A7, A8; and so on. In some embodiments, the first group of images can be consecutive frames acquired by the first camera, or non-consecutive frames, such as A2, A3, A5, A6. The same applies to the second group of images.
In step S102, the first image is determined from the first group of images, and the second image is determined from the second group of images; the first image and the second image are synchronously acquired images.
For example, after obtaining the first group of images A1, A2, A3, A4 acquired by the first camera module and the second group of images B1, B2, B3, B4 acquired by the second camera module, the terminal can determine the first image from A1, A2, A3, A4 and then determine the second image from B1, B2, B3, B4, where the second image is the image acquired synchronously with the first image.
For example, if the terminal determines A2 among A1, A2, A3, A4 as the first image, the terminal can correspondingly determine B2 as the second image.
In step S103, depth-of-field information is obtained according to the first image and the second image.
For example, after determining the first image and the second image, the terminal can obtain depth-of-field information according to the first image and the second image. It can be understood that, because the first image and the second image were synchronously acquired by the terminal's dual camera modules from different shooting positions (angles), depth-of-field information can be obtained from the first image and the second image.
It should be noted that the depth-of-field information is relative to the focused subject in the image; that is, it is the depth-of-field information obtained after the focused subject has been determined.
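The patent does not spell out how depth is recovered from the pair; in a standard rectified two-view setup it follows from triangulation as Z = f·B/d, where f is the focal length in pixels, B the baseline between the two camera modules, and d the pixel disparity. A hypothetical sketch (the function name and parameter values are illustrative, not from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the depth of one matched point from a rectified stereo pair.

    focal_px: focal length in pixels; baseline_m: distance between the two
    camera modules in meters; disparity_px: horizontal shift of the matched
    point between the first image and the second image.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# E.g. f = 1000 px, a 2 cm baseline, and a 40 px disparity give 0.5 m.
print(depth_from_disparity(1000, 0.02, 40))  # 0.5
```

Nearby points shift more between the two views than distant ones, which is why larger disparity maps to smaller depth.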
In step S104, noise reduction is performed on the first image according to the first group of images to obtain a target image.
For example, when the first group of images contains at least two frames, the terminal can perform noise reduction on the first image according to at least two frames in the first group of images, thereby obtaining the target image. For example, the terminal can perform noise reduction on the first image according to the images in the first group other than the first image. For example, if the first image is A2, the terminal can use the first image A2 as the base frame for noise reduction and denoise A2 according to the other 3 frames A1, A3, A4 in the first group; that is, the terminal can identify and reduce the random noise in the base frame A2 based on the 3 frames A1, A3, A4, thereby obtaining the denoised target image.
In some embodiments, an image noise reduction algorithm can also be applied to the first image; for example, image noise reduction algorithms may include wavelet denoising, smoothing filters, and so on.
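One common way to realize the multi-frame step is to average the base frame with the reference frames, since random noise tends to cancel while stable detail survives. A simplified grayscale sketch under the assumption that the frames are already aligned (real pipelines would register A1, A3, A4 to A2 first):

```python
def multi_frame_denoise(base, references):
    """Reduce random noise in `base` by averaging with reference frames.

    Each frame is a flat list of grayscale pixel values; all frames are
    assumed to be aligned (e.g. A2 as the base frame, A1/A3/A4 as refs).
    """
    frames = [base] + references
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

a2 = [100, 104, 98]  # base frame carrying some random noise
refs = [[102, 100, 100], [98, 102, 102], [100, 102, 100]]
print(multi_frame_denoise(a2, refs))  # [100.0, 102.0, 100.0]
```

The averaged output is smoother than any single frame: independent noise of variance σ² drops to roughly σ²/n after averaging n frames.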
In step S105, preset processing is performed on the target image according to the depth-of-field information.
For example, after obtaining the target image, the terminal can perform preset processing on the target image according to the obtained depth-of-field information.
In some embodiments, the preset processing can be background blurring, 3D image applications, and so on.
It can be understood that, in this embodiment, the terminal can perform noise reduction on the first image according to the first group of images, so the resulting target image has little noise. Moreover, the terminal obtains depth-of-field information from the synchronously acquired first image and second image, so the depth-of-field information the terminal obtains is more accurate. Therefore, by performing preset processing on the target image according to the depth-of-field information, the terminal can make the processed image have a better imaging effect.
Referring to Fig. 2, Fig. 2 is another schematic flowchart of the image processing method provided by the embodiments of the present application. The flow may include:
In step S201, the terminal obtains a first group of images and a second group of images; the first group of images is acquired by the first camera module, and the second group of images is acquired by the second camera module.
For example, the terminal is equipped with dual camera modules, which include a first camera module and a second camera module. The first camera module and the second camera module can acquire images synchronously. The first group of images contains multiple frames, as does the second group of images. Using the dual camera modules, the terminal can rapidly acquire multiple frames of roughly the same subject in the same shooting scene.
As described earlier in this embodiment, for example, the first group of images acquired by the first camera module is the 4 frames A1, A2, A3, A4, and the second group of images acquired by the second camera module is the 4 frames B1, B2, B3, B4.
In step S202, the terminal obtains the clarity of each frame in the first group of images.
For example, after obtaining the first group of images A1, A2, A3, A4 acquired by the first camera module, the terminal can obtain the clarity of images A1, A2, A3, A4.
For example, the clarity value of an image ranges from 0 to 100, with a larger value indicating a clearer image. For example, the clarity values of A1, A2, A3, A4 in the first group are 80, 83, 81, 79, respectively.
In step S203, if each frame of the first group of images contains a face, the terminal obtains the value of a preset parameter for each frame in the first group of images; the value of the preset parameter indicates the eye size of the face in the image.
For example, after obtaining the clarity of images A1, A2, A3, A4, if the terminal detects that each frame in the first group contains a face, the terminal can further obtain the value of the preset parameter for each frame in the first group, where the value of the preset parameter can be used to indicate the eye size of the face in the image.
In one embodiment, the terminal can obtain the eye size of the face in an image through certain preset algorithms; these algorithms can output a value representing eye size, where a larger value can indicate larger eyes.
Alternatively, the terminal can first identify the eye region of the face and a first target pixel count for the eye region, then calculate the ratio of the first target pixel count to the total number of pixels in the image; a larger ratio indicates larger eyes. Or the terminal can count only a second target pixel count occupied by the eyes along the image height direction and the total pixel count along the image height direction, then calculate the ratio of the second target pixel count to the total pixel count in the height direction; a larger ratio indicates larger eyes.
For example, the value of the preset parameter indicating eye size ranges from 0 to 50, with a larger value indicating larger eyes in the image. For example, the preset-parameter values of A1, A2, A3, A4 in the first group are 40, 41, 41, 39, respectively.
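The height-ratio variant above can be sketched directly; mapping the ratio onto the 0 to 50 range is an assumption made here so the output matches the example values:

```python
def eye_size_value(eye_height_px, image_height_px, max_value=50):
    """Map the eyes' share of the image height to a 0..max_value score.

    eye_height_px: pixels spanned by the eyes along the height direction;
    image_height_px: total pixels along the image height direction.
    """
    ratio = eye_height_px / image_height_px  # larger ratio => larger eyes
    return ratio * max_value

# Eyes spanning 80% of a 100-px-tall face crop score 40, like frame A1.
print(eye_size_value(80, 100))  # 40.0
```

A per-image-area variant would divide the eye-region pixel count by the total pixel count instead; the scaling step is the same.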
In step S204, the terminal obtains a first weight corresponding to clarity and a second weight corresponding to the preset parameter.
For example, after obtaining the clarity and the preset-parameter value for each frame in the first group of images, the terminal can obtain the first weight corresponding to clarity and the second weight corresponding to the preset parameter. It can be understood that the first weight and the second weight sum to 1.
In some embodiments, the values of the first weight and the second weight can be set according to usage needs. For example, in scenarios with higher requirements for image clarity, the first weight corresponding to clarity can be set larger and the second weight corresponding to the preset parameter set smaller: for example, the first weight is 0.7 and the second weight is 0.3, or the first weight is 0.6 and the second weight is 0.4, and so on. When images with large eyes are desired, the first weight corresponding to clarity can be set smaller and the second weight corresponding to the preset parameter set larger: for example, the first weight is 0.3 and the second weight is 0.7, or the first weight is 0.4 and the second weight is 0.6, and so on.
In other embodiments, the terminal can set the sizes of the first weight and the second weight according to the clarity differences between images. For example, if the terminal detects that the clarity differences between the frames in the first group are within a preset threshold range, i.e., the frames differ little in clarity, the terminal can set the first weight corresponding to clarity smaller and the second weight corresponding to the preset parameter larger: for example, the first weight is 0.4 and the second weight is 0.6, and so on. If the terminal detects that the clarity differences between the frames in the first group are outside the preset threshold range, i.e., the frames differ considerably in clarity, the terminal can set the first weight larger and the second weight smaller: for example, the first weight is 0.6 and the second weight is 0.4, and so on.
Of course, in still other embodiments, the values of the first weight and the second weight can also be set by the user according to shooting needs.
In step S205, the terminal normalizes the clarity and the preset-parameter value of each frame in the first group of images, obtaining each frame's normalized clarity and normalized preset-parameter value.
In step S206, the terminal weights each frame's normalized clarity by the first weight, obtaining each frame's weighted clarity, and weights each frame's normalized preset-parameter value by the second weight, obtaining each frame's weighted preset-parameter value.
In step S207, the terminal obtains, for each frame in the first group of images, the sum of its weighted clarity and its weighted preset-parameter value.
For example, steps S205, S206, and S207 may proceed as follows.
Suppose the clarity values of A1, A2, A3, A4 in the first group are 80, 83, 81, 79, and their preset-parameter values are 40, 41, 41, 39. The first weight is 0.4 and the second weight is 0.6.
Then, for image A1, the terminal can first normalize its clarity and preset-parameter value: clarity 80 normalizes to 0.8 (80/100), and preset-parameter value 40 normalizes to 0.8 (40/50). The terminal can then weight the normalized clarity 0.8 by the first weight 0.4, obtaining a weighted clarity of 0.32 (0.4*0.8), and weight the normalized preset-parameter value 0.8 by the second weight 0.6, obtaining a weighted value of 0.48 (0.6*0.8). The terminal can then compute the sum of A1's weighted clarity 0.32 and weighted preset-parameter value 0.48, which is 0.8.
For image A2, the terminal can likewise first normalize its clarity and preset-parameter value: clarity 83 normalizes to 0.83 (83/100), and preset-parameter value 41 normalizes to 0.82 (41/50). The terminal can then weight the normalized clarity 0.83 by the first weight 0.4, obtaining a weighted clarity of 0.332 (0.4*0.83), and weight the normalized preset-parameter value 0.82 by the second weight 0.6, obtaining a weighted value of 0.492 (0.6*0.82). The terminal can then compute the sum of A2's weighted clarity 0.332 and weighted preset-parameter value 0.492, which is 0.824.
Similarly, the terminal computes for image A3 a normalized clarity of 0.81 and a normalized preset-parameter value of 0.82. Weighted by the first weight 0.4, the clarity is 0.324; weighted by the second weight 0.6, the preset-parameter value is 0.492. The sum of the weighted clarity 0.324 and the weighted preset-parameter value 0.492 of image A3 is therefore 0.816.
For image A4, the terminal computes a normalized clarity of 0.79 and a normalized preset-parameter value of 0.78. Weighted by the first weight 0.4, the clarity is 0.316; weighted by the second weight 0.6, the preset-parameter value is 0.468. The sum of the weighted clarity 0.316 and the weighted preset-parameter value 0.468 of image A4 is therefore 0.784.
In step S208, the terminal determines the image with the largest sum in the first group of images as the first image, and determines the second image from the second group of images, the first image and the second image being images captured synchronously.
For example, after obtaining, for each frame image in the first group of images, the sum of the weighted clarity and the weighted numerical value of the preset parameter, the terminal determines the image with the largest sum as the first image.
Since the sums for A1, A2, A3 and A4 are 0.8, 0.824, 0.816 and 0.784 in sequence, the terminal determines A2 in the first group of images as the first image.
The terminal then determines the second image from the second group of images. The second image is the image captured synchronously with the first image, so the terminal determines B2 in the second group of images as the second image.
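As a sketch, the selection in steps S205 to S208 can be expressed as follows. This is a minimal Python illustration of the weighted-sum scoring; the function and variable names are our own, and the weights and normalization ranges (0-100 for clarity, 0-50 for the preset parameter) follow the worked example above.

```python
def select_base_frame(clarities, eye_sizes,
                      w_clarity=0.4, w_eye=0.6,
                      clarity_max=100.0, eye_max=50.0):
    """Score each frame by the sum of its weighted normalized clarity
    and weighted normalized preset-parameter (eye-size) value, and
    return the index of the highest-scoring frame plus all scores."""
    scores = [w_clarity * (c / clarity_max) + w_eye * (e / eye_max)
              for c, e in zip(clarities, eye_sizes)]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, scores

# Values from the example: clarities and eye-size values of A1..A4.
best, scores = select_base_frame([80, 83, 81, 79], [40, 41, 41, 39])
# best == 1 (A2); scores are approximately [0.8, 0.824, 0.816, 0.784]
```

Since A2 scores highest (0.824), index 1 is returned, matching the determination of A2 as the first image.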
In step S209, the terminal executes the following steps in parallel: obtaining depth-of-field information according to the first image and the second image, and performing noise reduction processing on the first image according to the first group of images to obtain a target image.
For example, after determining the first image and the second image, the terminal can obtain depth-of-field information according to them. It can be understood that the first image and the second image are images of the same object captured by the dual camera modules of the terminal from different positions (angles), so depth-of-field information can be obtained from them. It should be noted that the depth-of-field information is defined with respect to the focusing object in the image; it is the depth-of-field information obtained after the focusing object has been determined.
The terminal can then perform noise reduction processing on the first image according to the first group of images to obtain the target image. For example, the terminal can denoise the first image according to the other images in the first group. If the first image is A2, the terminal can take A2 as the base frame of the noise reduction processing and denoise it according to the three frames A1, A3 and A4 in the first group of images. That is, the terminal can identify and reduce the random noise in the base frame A2 according to the three frames A1, A3 and A4, thereby obtaining the denoised target image.
The step of obtaining the depth-of-field information according to the first image and the second image and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image can be executed in parallel. It should be noted that performing noise reduction processing on the first image does not affect obtaining the depth-of-field information from the first image and the second image, which is why the two steps can run in parallel.
For example, in one embodiment, the terminal can use a central processing unit (Central Processing Unit, CPU) to execute the step of obtaining the depth-of-field information from the first image and the second image, while using a graphics processing unit (Graphics Processing Unit, GPU) to execute the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image.
It can be understood that executing the two steps in parallel saves processing time for the terminal and improves the efficiency of image processing. In some embodiments, obtaining the depth-of-field information takes 800 ms and the noise reduction takes 400 ms; by processing them in parallel (for example, via multithreading), 400 ms of processing time can be saved, speeding up imaging on the terminal. In addition, in some embodiments, during the 800 ms in which one thread obtains the depth-of-field information, the other thread can, besides the noise reduction (about 400 ms), also perform beautification processing (about 200 ms), filter processing (about 100 ms) and the like. Thus, by the time the depth-of-field information is obtained, the target image has already undergone more processing, saving further processing time and accelerating imaging on the terminal.
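The parallel execution described above can be sketched with a thread pool. The stand-in functions below only mark where real depth estimation and denoising would go; mapping one task to the CPU and the other to the GPU, as the text suggests, is outside this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_depth_info(first_image, second_image):
    # Stand-in for stereo depth estimation from the synchronized pair.
    return ("depth", first_image, second_image)

def multiframe_denoise(base_frame, other_frames):
    # Stand-in for multi-frame noise reduction on the base frame.
    return base_frame

def process_capture(first_group, base_idx, second_image):
    """Run depth estimation and noise reduction concurrently, mirroring
    step S209: neither step depends on the other's result."""
    base = first_group[base_idx]
    others = first_group[:base_idx] + first_group[base_idx + 1:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        depth_future = pool.submit(compute_depth_info, base, second_image)
        target_future = pool.submit(multiframe_denoise, base, others)
        return depth_future.result(), target_future.result()

depth, target = process_capture(["A1", "A2", "A3", "A4"], 1, "B2")
# depth == ("depth", "A2", "B2"); target == "A2"
```

Because both tasks take only the already-captured frames as input, submitting them together lets the slower one (depth, 800 ms in the example above) overlap the faster one.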
In other embodiments, besides obtaining the depth-of-field information from the first image and the second image, when the capture interval between two frames is short enough, or the differences between the captured frames are small enough, the terminal can pick an arbitrary frame from the first group of images and the synchronously captured frame from the second group of images, and obtain the depth-of-field information from those two frames. For example, in this embodiment the first image is A2 and the second image is B2; in other embodiments the terminal could instead pick any frame among A1, A3 and A4, say A4, pick the image B4 captured synchronously with A4 from the second group, and obtain the depth-of-field information from A4 and B4.
In addition, when the capture interval between two frames is short enough, or the differences between the captured frames are small enough, the terminal can even pick an arbitrary frame from the first group and an arbitrary frame from the second group, and obtain the depth-of-field information from that pair. For example, the terminal picks A2 and B3 and obtains the depth-of-field information from those two frames.
In one embodiment, the first group of images includes at least two frames, and the step in S209 in which the terminal performs noise reduction processing on the first image according to the first group of images to obtain the target image may include the following steps:
The terminal aligns all images in the first group of images;
In the aligned images, the terminal determines multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image;
The terminal obtains the pixel value of each pixel in each group of mutually aligned pixels;
According to these pixel values, the terminal obtains the mean pixel value of each group of mutually aligned pixels;
The terminal adjusts the pixel value of each target pixel in the first image to the corresponding mean pixel value, thereby obtaining the target image.
For example, the first group of images includes A1, A2, A3 and A4, with A2 being the first image. The terminal determines A2 as the base frame of the noise reduction processing, and then uses an image alignment algorithm to align the four frames A1, A2, A3 and A4.
After aligning the four frames, the terminal treats each set of mutually aligned pixels as one group of associated pixels, obtaining multiple groups of mutually aligned pixels. Within each group, the terminal determines the pixel belonging to the first image as the target pixel. The terminal then obtains the pixel value of every pixel in each group, and from these the mean pixel value of each group. Finally, the terminal adjusts the pixel value of every target pixel in the first image to the mean pixel value of its group; the adjusted first image is the target image.
For example, suppose image A1 has a pixel X1, image A2 a pixel X2, image A3 a pixel X3 and image A4 a pixel X4, and the image alignment algorithm shows that X1, X2, X3 and X4 occupy the same aligned position in the four frames A1, A2, A3 and A4, i.e. they are mutually aligned. If the pixel value of X1 is 101, that of X2 is 102, that of X3 is 103 and that of X4 is 104, the mean of the four pixel values is 102.5. Having obtained the mean 102.5, the terminal adjusts the pixel value of X2 in image A2 from 102 to 102.5, denoising that pixel. Similarly, once the value of every pixel in A2 that has aligned counterparts in A1, A3 and A4 has been adjusted to the corresponding mean, the resulting image is the denoised target image.
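The mean-based adjustment can be sketched as follows. Nested lists stand in for images that are assumed to be already aligned; a real pipeline would operate on arrays, likely on the GPU as suggested earlier.

```python
def mean_denoise(frames, base_idx):
    """Replace every pixel of the base frame by the mean of the
    mutually aligned pixels across all frames. Alignment is assumed,
    so position (r, c) of each frame forms one aligned group."""
    n = len(frames)
    base = frames[base_idx]
    return [[sum(f[r][c] for f in frames) / n
             for c in range(len(base[0]))]
            for r in range(len(base))]

# The X1..X4 example: values 101, 102, 103, 104 at one aligned
# position average to 102.5, which replaces 102 in base frame A2.
target = mean_denoise([[[101]], [[102]], [[103]], [[104]]], base_idx=1)
# target == [[102.5]]
```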
In one embodiment, the terminal can also determine, among the four frames A1, A2, A3 and A4, the frame with the highest clarity, assign different weights to the pixel values of different frames, compute a weighted mean of the pixel values, and adjust the pixel values of the base frame A2 according to that weighted mean.
For example, suppose pixel Z2 in image A2 is aligned with pixel Z1 in A1, pixel Z3 in A3 and pixel Z4 in A4, with pixel values of 101 for Z1, 102 for Z2, 103 for Z3 and 104 for Z4, and suppose Z2 lies in the sharpest of the four frames. When computing the weighted mean, the terminal can assign the pixel value of Z2 a weight of 0.4 and the pixel values of Z1, Z3 and Z4 a weight of 0.2 each, giving a weighted mean of 102.4, since 102.4 = 102*0.4 + (101+103+104)*0.2. Having obtained the weighted mean 102.4, the terminal adjusts the pixel value of Z2 from 102 to 102.4, reducing the noise of that pixel.
In one embodiment, if for some aligned position the corresponding pixel values on images A1, A2, A3 and A4 differ greatly, the terminal may leave the pixel value at that position on A2 unadjusted. For example, pixel Y2 in image A2 is aligned with pixel Y1 in A1, pixel Y3 in A3 and pixel Y4 in A4, but the pixel value of Y2 is 100 while the pixel values of Y1, Y3 and Y4 are 20, 30 and 35, i.e. the value of Y2 is much larger than the others. In such a case, the pixel value of Y2 may be left unadjusted.
In one embodiment, if the four frames A1, A2, A3 and A4 cannot be aligned, the terminal may leave every pixel value of the base frame A2 unadjusted and directly adopt the base frame A2 as the target image for the subsequent background blurring processing.
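The weighted variant and the outlier check can be sketched for a single group of aligned pixels. The threshold `max_diff` is a hypothetical value of our own, since the text only says the base value is "much larger" than the others, and the function name is likewise illustrative.

```python
def weighted_denoise_pixel(values, base_idx, sharpest_idx,
                           w_sharp=0.4, w_other=0.2, max_diff=50):
    """Weighted mean over one group of aligned pixel values, giving
    the sharpest frame the larger weight (0.4 vs. 0.2, as in the Z
    example). If the base-frame value differs from every other value
    by more than max_diff, keep it unchanged (the Y example)."""
    base_val = values[base_idx]
    others = [v for i, v in enumerate(values) if i != base_idx]
    if all(abs(base_val - v) > max_diff for v in others):
        return base_val  # likely real detail rather than noise
    return sum((w_sharp if i == sharpest_idx else w_other) * v
               for i, v in enumerate(values))

# Z example: 102.4 = 102*0.4 + (101 + 103 + 104)*0.2
# Y example: 100 stays unchanged, far from 20, 30 and 35
```

Note the weights sum to 1 (0.4 + 3*0.2) for a group of four frames, so a uniform region keeps its level.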
In step S210, the terminal performs background blurring processing on the target image according to the depth-of-field information.
For example, after obtaining the target image, the terminal can blur its background according to the depth-of-field information obtained earlier.
It should be noted that in this embodiment the terminal applies multi-frame noise reduction to the first image using the first group of images, so the resulting target image contains little random noise. Moreover, the terminal obtains the depth-of-field information from the synchronously captured first and second images, so the obtained depth-of-field information is more accurate. With less noise in the target image and more accurate depth-of-field information, the background blurring in this embodiment produces a better effect, and the background-blurred target image has good imaging quality.
Furthermore, since the step of applying multi-frame noise reduction to the first image using the first group of images can run in parallel with the step of obtaining the depth-of-field information from the first and second images, this embodiment also improves image processing speed, effectively avoiding the slowdown that multi-frame noise reduction would otherwise cause.
In one embodiment, the step in S209 in which the terminal performs noise reduction processing on the first image according to the first group of images to obtain the target image may include the following steps:
The terminal performs noise reduction processing on the first image according to the first group of images, obtaining a denoised image;
The terminal performs tone mapping processing on the denoised image, obtaining the target image.
For example, if the first image is A2, the terminal can take A2 as the base frame of the noise reduction processing, identify and reduce the random noise in the base frame A2 according to the three frames A1, A3 and A4, and thereby obtain the denoised image. The terminal can then apply tone mapping processing (Tone Mapping) to the denoised image to obtain the target image.
It can be understood that applying tone mapping to the denoised image improves its contrast, so that the target image has a higher dynamic range and a better imaging effect.
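The text does not say which tone-mapping operator is used. As one common illustrative choice, a Reinhard-style global operator compresses highlights while leaving shadows nearly unchanged; the operator itself is an assumption of this sketch, not something the embodiment specifies.

```python
def reinhard_tone_map(pixel, white=255.0):
    """Reinhard-style global tone mapping of one pixel value in
    [0, white]: apply l / (1 + l) to the normalized value l and
    rescale back to the original range."""
    l = pixel / white
    return white * l / (1.0 + l)

# Dark values pass almost unchanged, bright values are compressed:
# reinhard_tone_map(0) == 0.0, reinhard_tone_map(255) == 127.5
```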
In another embodiment, after obtaining the first group of images, the step in which the terminal determines the first image from the first group of images may also include the following step:
The terminal obtains the clarity of each frame image in the first group of images, and determines the image with the highest clarity as the first image.
For example, if the clarities of A1, A2, A3 and A4 in the first group of images are 80, 83, 81 and 79 in sequence, the terminal can directly determine image A2 as the first image. That is, the terminal can determine the first image from the first group of images according to the single dimension of clarity.
In yet another embodiment, after obtaining the first group of images, the method may also include the following steps:
If each frame image of the first group of images contains a face, the terminal obtains the numerical value of the preset parameter of each frame image in the first group of images, the numerical value of the preset parameter indicating the eye size of the face in the image;
The image in the first group with the largest numerical value of the preset parameter is determined as the first image.
For example, if, after obtaining the first group of images, the terminal detects that every frame in the group contains a face, the terminal can obtain the numerical value of the preset parameter of each frame, which indicates the eye size of the face in the image, and then directly determine the image with the largest value as the first image.
For example, if the numerical values of the preset parameters of A1, A2, A3 and A4 in the first group of images are 40, 41, 42 and 39 in sequence, the terminal can directly determine A3 as the first image. It can be understood that image A3 is the frame in the first group in which the eyes are largest. That is, the terminal can determine the first image from the first group of images according to the single dimension of eye size.
In one embodiment, besides determining the first image from the first group of images according to the dimensions of image clarity and eye size, the terminal can also add the dimension of how much the face is smiling. For example, the terminal can combine image clarity with the smile degree of the face to determine the first image, or combine eye size with the smile degree of the face, or combine image clarity, eye size and smile degree together to determine the first image, and so on.
In some embodiments, the smile degree of a face can be detected through image recognition of parts such as the teeth and the corners of the mouth. For example, the terminal can identify the mouth corners in the image and measure their curvature; the greater the curvature, the greater the smile degree is considered to be, and so on.
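Extending the weighted-sum selection to three dimensions could look like the sketch below. The smile range (0-100) and the weights are assumptions of ours; the text only says the dimensions can be combined, not how they are weighted.

```python
def combined_score(clarity, eye_size, smile,
                   weights=(0.4, 0.3, 0.3),
                   ranges=(100.0, 50.0, 100.0)):
    """Weighted sum of the normalized clarity, eye-size and
    smile-degree values of one frame; the frame with the highest
    score would be chosen as the first image."""
    return sum(w * (v / r)
               for w, v, r in zip(weights,
                                  (clarity, eye_size, smile),
                                  ranges))

# e.g. clarity 80/100, eyes 40/50, smile 50/100:
# 0.4*0.8 + 0.3*0.8 + 0.3*0.5 == 0.71
```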
Please refer to Fig. 3 to Fig. 5, which are schematic diagrams of the scene and processing flow of the image processing method provided by the embodiments of the present application.
For example, as shown in Fig. 3, a terminal is equipped with dual camera modules 10, which include a first camera module 11 and a second camera module 12. For example, the first camera module 11 may be the primary camera and the second camera module 12 the secondary camera. In one embodiment, the two cameras of the dual camera modules can be arranged side by side horizontally (as shown in Fig. 3); in another embodiment, they can be arranged side by side vertically. When the terminal captures images using the dual camera modules 10, the first camera module 11 and the second camera module 12 capture images synchronously.
For example, a user opens the camera application and prepares to take a photo; the terminal interface then enters the image preview interface, and the image for the user to preview is shown on the display screen of the terminal.
While the terminal captures images using the dual camera modules, the first camera module and the second camera module capture images synchronously.
Later, the user taps the shutter button, as shown in Fig. 4. In this embodiment, upon detecting that the user has tapped the shutter button, the terminal fetches from a buffer queue the 4 frames most recently captured by the first camera module 11 and the 4 frames most recently captured by the second camera module 12 before the tap. For example, the 4 frames most recently captured by the first camera module (the first group of images) are A1, A2, A3 and A4 in sequence, and the 4 frames most recently captured synchronously by the second camera module (the second group of images) are B1, B2, B3 and B4 in sequence. It can be understood that A1 and B1 are captured synchronously, as are A2 and B2, A3 and B3, and A4 and B4.
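The buffer-queue behaviour can be sketched with a bounded deque per camera module; the class and method names here are our own, not part of the embodiment.

```python
from collections import deque

class FrameBuffer:
    """Keeps the most recent frames from one camera module. When the
    shutter is tapped, latest() yields the frames captured just before
    the tap, as described for the buffer queue above."""
    def __init__(self, size=4):
        self._frames = deque(maxlen=size)

    def push(self, frame):
        self._frames.append(frame)

    def latest(self):
        return list(self._frames)

first = FrameBuffer(size=4)
for frame in ["A0", "A1", "A2", "A3", "A4"]:
    first.push(frame)  # preview frames arrive continuously
# first.latest() == ["A1", "A2", "A3", "A4"]  (oldest frame dropped)
```

A second `FrameBuffer` would hold the synchronized frames B1..B4 from the other camera module.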
The terminal can then obtain the clarity of each frame image in the first group of images and the numerical value of its preset parameter, where the numerical value of the preset parameter indicates the eye size of the face in the image. For example, clarity ranges from 0 to 100, with larger values meaning a sharper image, and the clarities of A1, A2, A3 and A4 in the first group of images are 80, 83, 81 and 79 in sequence. The numerical value of the preset parameter ranges from 0 to 50, with larger values meaning larger eyes in the image, and the numerical values of the preset parameters of A1, A2, A3 and A4 in the first group of images are 40, 41, 41 and 39 in sequence.
The terminal then obtains the first weight corresponding to clarity and the second weight corresponding to the preset parameter; for example, the first weight is 0.4 and the second weight is 0.6.
Next, for each frame image in the first group of images, the terminal normalizes the clarity and the numerical value of the preset parameter, obtaining each frame's normalized clarity and normalized preset-parameter value. The terminal then weights each frame's normalized clarity by the first weight, obtaining the weighted clarity of each frame, and weights each frame's normalized preset-parameter value by the second weight, obtaining the weighted preset-parameter value of each frame. Finally, the terminal obtains, for each frame, the sum of the weighted clarity and the weighted preset-parameter value.
For example, for image A1, the terminal first normalizes its clarity and the numerical value of its preset parameter: the clarity 80 normalizes to 0.8 (80/100), and the numerical value 40 of the preset parameter normalizes to 0.8 (40/50). Weighting the normalized clarity 0.8 by the first weight 0.4 gives a weighted clarity of 0.32 (0.4*0.8); weighting the normalized preset-parameter value 0.8 by the second weight 0.6 gives a weighted value of 0.48 (0.6*0.8). The terminal then computes the sum of the weighted clarity 0.32 and the weighted preset-parameter value 0.48 of image A1, namely 0.8.
Similarly, for image A2 the weighted clarity is 0.332 and the weighted preset-parameter value is 0.492, so their sum is 0.824. For image A3 the weighted clarity is 0.324 and the weighted preset-parameter value is 0.492, so their sum is 0.816. For image A4 the weighted clarity is 0.316 and the weighted preset-parameter value is 0.468, so their sum is 0.784.
Having obtained the sums for the four frames A1, A2, A3 and A4, the terminal determines the image with the largest sum as the first image, which will serve as the base frame of the noise reduction. It can be understood that the first image is an image in the first group in which the eyes are relatively large and the clarity is relatively high. Since the sum for A2 is the largest, A2 is determined as the first image, and the terminal then determines the image B2 captured by the second camera module as the second image.
The terminal can then use the CPU to obtain the depth-of-field information from the first image A2 and the second image B2, while using the GPU to perform noise reduction processing on the first image A2 according to A1, A3 and A4 in the first group of images, obtaining a denoised A2 image and determining it as the target image. The step of computing the depth-of-field information and the step of denoising the A2 image can be executed in parallel to improve processing speed.
Finally, the terminal can perform background blurring processing on the target image according to the obtained depth-of-field information, producing the output image, which the terminal can then store in the photo album. The whole processing flow is shown in Fig. 5.
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of the image processing apparatus provided by the embodiments of the present application. The image processing apparatus 300 may include: a first obtaining module 301, a determining module 302, a second obtaining module 303, a noise reduction module 304 and a processing module 305.
The first obtaining module 301 is configured to obtain the first group of images and the second group of images, the first group of images being the images captured by the first camera module and the second group of images being the images captured by the second camera module.
For example, the first obtaining module 301 can first obtain the first group of images captured by the first camera module of the dual camera modules in the terminal and the second group of images captured by the second camera module of the dual camera modules. Both the first group of images and the second group of images include multiple frame images.
The determining module 302 is configured to determine the first image from the first group of images and the second image from the second group of images, the first image and the second image being images captured synchronously.
For example, after the first obtaining module 301 obtains the first group of images A1, A2, A3 and A4 captured by the first camera module and the second group of images B1, B2, B3 and B4 captured by the second camera module, the determining module 302 can determine the first image from among A1, A2, A3 and A4 and then determine the second image from among B1, B2, B3 and B4, the second image being the image captured synchronously with the first image.
For example, if the determining module 302 determines A2 among A1, A2, A3 and A4 as the first image, the terminal can correspondingly determine B2 as the second image.
The second obtaining module 303 is configured to obtain the depth-of-field information according to the first image and the second image.
For example, after the determining module 302 determines the first image and the second image, the second obtaining module 303 can obtain the depth-of-field information from them. It can be understood that, since the first image and the second image are captured synchronously by the dual camera modules on the terminal from different shooting positions (angles), the depth-of-field information can be obtained from the first image and the second image.
It should be noted that the depth-of-field information is defined with respect to the focusing object in the image; it is the depth-of-field information obtained after the focusing object has been determined.
The noise reduction module 304 is configured to perform noise reduction processing on the first image according to the first group of images, obtaining the target image.
For example, the noise reduction module 304 can perform noise reduction processing on the first image according to the first group of images to obtain the target image, for instance by denoising the first image according to the other images in the first group. If the first image is A2, the terminal can take A2 as the base frame of the noise reduction processing and denoise it according to the three frames A1, A3 and A4 in the first group of images. That is, the noise reduction module 304 can identify and reduce the random noise in the base frame A2 according to the three frames A1, A3 and A4, thereby obtaining the denoised target image.
The processing module 305 is configured to perform preset processing on the target image according to the depth-of-field information.
For example, after the target image is obtained, the processing module 305 can perform preset processing on it according to the obtained depth-of-field information. In some embodiments, the preset processing can be, for example, background blurring, 3D image applications and the like.
In one embodiment, the processing module 305 can be used to perform background blurring processing on the target image.
In one embodiment, the determining module 302 can be used to: obtain the clarity of each frame image in the first group of images; and determine the image with the highest clarity as the first image.
In one embodiment, the determining module 302 can be used to: if each frame image of the first group of images contains a face, obtain the numerical value of the preset parameter of each frame image in the first group of images, the numerical value of the preset parameter indicating the eye size of the face in the image; and determine the image with the largest numerical value of the preset parameter as the first image.
In one embodiment, the determining module 302 can be used to: obtain the clarity of each frame image in the first group of images; if each frame image of the first group of images contains a face, obtain the numerical value of the preset parameter of each frame image in the first group of images, the numerical value of the preset parameter indicating the eye size of the face in the image; and determine the first image from the first group of images according to the clarity and the preset-parameter value of each frame image.
In one embodiment, the determining module 302 can be used to: obtain the first weight corresponding to clarity and the second weight corresponding to the preset parameter; weight the clarity of each frame image by the first weight, obtaining the weighted clarity of each frame image, and weight the preset-parameter value of each frame image by the second weight, obtaining the weighted preset-parameter value of each frame image; obtain, for each frame image, the sum of the weighted clarity and the weighted preset-parameter value; and determine the image with the largest sum in the first group of images as the first image.
In one embodiment, the determining module 302 can be used to: normalize the clarity and the preset-parameter value of each frame image, obtaining the normalized clarity and the normalized preset-parameter value of each frame image; weight the normalized clarity of each frame image by the first weight, obtaining the weighted clarity of each frame image; and weight the normalized preset-parameter value of each frame image by the second weight, obtaining the weighted preset-parameter value of each frame image.
In one embodiment, the noise reduction module 304 can be used to: perform noise reduction processing on the first image according to the first group of images, obtaining a denoised image; and perform tone mapping processing on the denoised image, obtaining the target image.
In one embodiment, the first group of images includes at least two frames, and the noise reduction module 304 can be used to: align all images in the first group of images; in the aligned images, determine multiple groups of mutually aligned pixels and, within each group of mutually aligned pixels, the target pixel belonging to the first image; obtain the pixel value of each pixel in each group of mutually aligned pixels; according to these pixel values, obtain the mean pixel value of each group of mutually aligned pixels; and adjust the pixel value of each target pixel in the first image to the corresponding mean pixel value, obtaining the target image.
Regarding the device in this embodiment, the specific manner in which each module performs its operations has been described in detail in the related method embodiments and will not be repeated here.
An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed on a computer, causes the computer to perform the steps of the image processing method provided in this embodiment.
An embodiment of the present application also provides an electronic device including a memory and a processor, where the processor calls the computer program stored in the memory to perform the steps of the image processing method provided in this embodiment.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smartphone. Referring to Fig. 7, Fig. 7 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present application.
The mobile terminal 400 may include components such as a camera module 401, a memory 402, and a processor 403. Those skilled in the art will appreciate that the mobile terminal structure shown in Fig. 7 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The camera module 401 may be a dual camera module installed on the mobile terminal, and includes at least a first camera module and a second camera module. When the mobile terminal collects images using the dual camera module, the first camera module and the second camera module can collect images synchronously.
The memory 402 may be used to store application programs and data. The application programs stored in the memory 402 contain executable code and may form various functional modules. The processor 403 runs the application programs stored in the memory 402, thereby performing various functional applications and data processing.
The processor 403 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the application programs stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the mobile terminal as a whole.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402 to implement the following steps:
obtaining a first group of images and a second group of images, the first group of images being collected by the first camera module and the second group of images being collected by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images; obtaining depth-of-field information according to the first image and the second image; performing noise reduction processing on the first image according to the first group of images to obtain a target image; and performing preset processing on the target image according to the depth-of-field information.
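The steps above can be sketched end to end. Everything in this sketch is a stand-in of our own devising (per-frame variance as a crude clarity proxy, the absolute inter-view difference as a depth proxy), not the patent's actual algorithms:

```python
import numpy as np

def process(first_group, second_group):
    # Determine the first image (variance as a crude clarity stand-in).
    idx = max(range(len(first_group)), key=lambda i: first_group[i].var())
    first, second = first_group[idx], second_group[idx]  # synchronous pair
    # Depth-of-field proxy from the two synchronously collected views.
    depth = np.abs(first - second)
    # Multi-frame noise reduction over the first group (simple mean).
    target = np.mean(np.stack(first_group), axis=0)
    # The "preset processing" (e.g. background blur) would use depth here.
    return target, depth
```

A real implementation would rectify the two views and compute disparity-based depth rather than a raw difference; the sketch only mirrors the data flow of the five steps.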
An embodiment of the present invention also provides an electronic device containing an image processing circuit. The image processing circuit may be implemented using hardware and/or software components, and may include various processing units defining an Image Signal Processing (ISP) pipeline. The image processing circuit may include at least a camera, an image signal processor (ISP processor), control logic, an image memory, and a display. The camera may include at least one or more lenses and an image sensor.
The image sensor may include a color filter array (such as a Bayer filter). The image sensor may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data to be processed by the image signal processor.
The image signal processor may process the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision. The raw image data may be stored in the image memory after being processed by the image signal processor, and the image signal processor may also receive image data from the image memory.
The image memory may be a part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving image data from the image memory, the image signal processor may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory for additional processing before being displayed. The image signal processor may also receive processing data from the image memory and process that data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the image signal processor may also be sent to the image memory, and the display may read image data from the image memory. In one embodiment, the image memory may be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor may be sent to the control logic. For example, the statistical data may include image sensor statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens shading correction.
The control logic may include a processor and/or microcontroller executing one or more routines (such as firmware). The one or more routines may determine camera control parameters and ISP control parameters based on the received statistical data. For example, the camera control parameters may include flash control parameters, lens control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (for example, during RGB processing).
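As an illustration of how such ISP control parameters are applied (a generic sketch, not this patent's pipeline), white-balance gains scale each channel and the color correction matrix then mixes the channels per pixel:

```python
import numpy as np

def apply_awb_and_ccm(rgb, gains, ccm):
    """rgb: H x W x 3 image in [0, 1]; gains: per-channel
    white-balance gains; ccm: 3 x 3 color correction matrix
    applied to each pixel's RGB vector."""
    balanced = rgb * np.asarray(gains, dtype=np.float64)
    corrected = balanced @ np.asarray(ccm, dtype=np.float64).T
    return np.clip(corrected, 0.0, 1.0)
```

With an identity matrix the CCM is a no-op and only the gains act, which is a convenient sanity check when tuning either stage in isolation.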
Referring to Fig. 8, Fig. 8 is a structural schematic diagram of the image processing circuit in this embodiment. As shown in Fig. 8, for ease of illustration, only the aspects of the image processing technology related to the embodiment of the present invention are shown.
The image processing circuit may include: a first camera 510, a second camera 520, a first image signal processor 530, a second image signal processor 540, control logic 550, an image memory 560, and a display 570. The first camera 510 may include one or more first lenses 511 and a first image sensor 512. The second camera 520 may include one or more second lenses 521 and a second image sensor 522.
The first image collected by the first camera 510 is transmitted to the first image signal processor 530 for processing. After processing the first image, the first image signal processor 530 may send the statistical data of the first image (such as image brightness, image contrast, and image color) to the control logic 550. The control logic 550 may determine the control parameters of the first camera 510 according to the statistical data, so that the first camera 510 can perform operations such as auto focus and auto exposure according to the control parameters. The first image may be stored in the image memory 560 after being processed by the first image signal processor 530. The first image signal processor 530 may also read and process the images stored in the image memory 560. In addition, the first image may be sent directly to the display 570 after being processed by the first image signal processor 530. The display 570 may also read and display the images in the image memory 560.
The second image collected by the second camera 520 is transmitted to the second image signal processor 540 for processing. After processing the second image, the second image signal processor 540 may send the statistical data of the second image (such as image brightness, image contrast, and image color) to the control logic 550. The control logic 550 may determine the control parameters of the second camera 520 according to the statistical data, so that the second camera 520 can perform operations such as auto focus and auto exposure according to the control parameters. The second image may be stored in the image memory 560 after being processed by the second image signal processor 540. The second image signal processor 540 may also read and process the images stored in the image memory 560. In addition, the second image may be sent directly to the display 570 after being processed by the second image signal processor 540. The display 570 may also read and display the images in the image memory 560.
In other embodiments, the first image signal processor and the second image signal processor may also be combined into a unified image signal processor that processes the data of the first image sensor and the second image sensor respectively.
In addition, although not shown in the figure, the electronic device may also include a CPU and a power supply module. The CPU is connected to the control logic, the first image signal processor, the second image signal processor, the image memory, and the display, and implements global control. The power supply module supplies power to the various modules.
In general, in a mobile phone with a dual camera module, both cameras work in certain photographing modes. In this case, the CPU controls the power supply module to power the first camera and the second camera; the image sensors in the first camera and the second camera are powered on, so that image collection and conversion can be performed. In other photographing modes, only one camera of the dual camera module may work; for example, only the telephoto camera works. In this case, the CPU controls the power supply module to power the image sensor of the corresponding camera. In the embodiments of the present application, depth-of-field calculation and blurring processing are to be performed, so the two camera modules need to work at the same time.
In addition, the mounting distance of the dual camera module in the terminal may be determined according to the size of the terminal and the desired shooting effect. In some embodiments, in order to achieve a high degree of overlap between the objects captured by the first camera module and the second camera module, the two camera modules may be mounted as close together as possible, for example within 10 mm.
The following are the steps of the image processing method provided in this embodiment as implemented with the image processing circuit in Fig. 8:
obtaining a first group of images and a second group of images, the first group of images being collected by the first camera module and the second group of images being collected by the second camera module; determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images; obtaining depth-of-field information according to the first image and the second image; performing noise reduction processing on the first image according to the first group of images to obtain a target image; and performing preset processing on the target image according to the depth-of-field information.
In one embodiment, when the electronic device performs the step of performing preset processing on the target image, it may perform: background blurring processing on the target image.
In one embodiment, the first group of images includes at least two frames. When the electronic device performs the step of determining the first image from the first group of images, it may perform: obtaining the clarity of each frame in the first group of images, and determining the frame with the highest clarity as the first image.
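The patent does not fix a clarity measure; one common proxy is the variance of a discrete Laplacian response, sketched here with names of our own choosing:

```python
import numpy as np

def clarity(gray):
    """Sharpness score: variance of a 4-neighbour Laplacian
    (wrap-around borders for simplicity)."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return float(lap.var())

def pick_sharpest(frames):
    """Return the index of the frame with the highest clarity."""
    return max(range(len(frames)), key=lambda i: clarity(frames[i]))
```

Blurring suppresses high-frequency detail, so a defocused frame yields a flatter Laplacian response and a lower score than a sharp one.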
In one embodiment, the first group of images includes at least two frames. When the electronic device performs the step of determining the first image from the first group of images, it may perform: if each frame of the first group of images contains a face, obtaining the numerical value of a preset parameter for each frame in the first group, the value of the preset parameter indicating the eye size of the face in the image; and determining the frame with the largest preset parameter value as the first image.
In one embodiment, the first group of images includes at least two frames. When the electronic device performs the step of determining the first image from the first group of images, it may perform: obtaining the clarity of each frame in the first group of images; if each frame of the first group contains a face, obtaining the numerical value of the preset parameter for each frame, the value indicating the eye size of the face in the image; and determining the first image from the first group of images according to the clarity and the preset parameter value of each frame.
In one embodiment, when the electronic device performs the step of determining the first image from the first group of images according to the clarity and preset parameter value of each frame, it may perform: obtaining a first weight corresponding to clarity and a second weight corresponding to the preset parameter; weighting the clarity of each frame according to the first weight to obtain the weighted clarity of each frame, and weighting the preset parameter value of each frame according to the second weight to obtain the weighted preset parameter value of each frame; obtaining the sum of the weighted clarity and the weighted preset parameter value of each frame; and determining the image in the first group of images with the largest sum as the first image.
In one embodiment, when the electronic device performs the step of weighting the clarity of each frame according to the first weight to obtain the weighted clarity of each frame, and weighting the preset parameter value of each frame according to the second weight to obtain the weighted preset parameter value of each frame, it may perform: normalizing the clarity and the preset parameter value of each frame respectively, obtaining the normalized clarity and the normalized preset parameter value of each frame; weighting the normalized clarity of each frame according to the first weight, obtaining the weighted clarity of each frame; and weighting the normalized preset parameter value of each frame according to the second weight, obtaining the weighted preset parameter value of each frame.
In one embodiment, the step of obtaining depth-of-field information according to the first image and the second image and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel by the electronic device.
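This parallelism can be sketched with two worker threads. The names and the depth/denoise routines are placeholders of our own, standing in for the real computations:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def depth_and_denoise(first, second, first_group):
    """Run the depth computation and the multi-frame noise reduction
    concurrently, then collect both results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        depth_job = pool.submit(lambda: np.abs(first - second))
        denoise_job = pool.submit(
            lambda: np.mean(np.stack(first_group), axis=0))
        return depth_job.result(), denoise_job.result()
```

Since the two steps share only read-only inputs, running them concurrently hides the latency of whichever finishes first.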
In one embodiment, when the electronic device performs the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image, it may perform: performing noise reduction processing on the first image according to the first group of images to obtain a noise-reduced image, and performing tone mapping processing on the noise-reduced image to obtain the target image.
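The patent leaves the tone-mapping operator unspecified; one minimal global example is the Reinhard-style curve x/(1+x), which compresses highlights while leaving shadows nearly linear:

```python
import numpy as np

def tone_map(linear):
    """Global Reinhard-style operator: maps [0, inf) into [0, 1)."""
    linear = np.asarray(linear, dtype=np.float64)
    return linear / (1.0 + linear)
```

Applied after multi-frame averaging, this brings the denoised image into a displayable range without clipping bright regions.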
In one embodiment, the first group of images includes at least two frames. When the electronic device performs the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image, it may perform: aligning all images in the first group of images; in the aligned images, determining multiple groups of mutually aligned pixels and, in each group of mutually aligned pixels, the target pixel belonging to the first image; obtaining the pixel value of each pixel in each group of mutually aligned pixels; computing, from these pixel values, the mean pixel value of each group of mutually aligned pixels; and adjusting the pixel value of the target pixel in the first image to the mean pixel value, obtaining the target image.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a particular embodiment, refer to the detailed description of the image processing method above; they will not be repeated here.
The image processing device provided by the embodiments of the present application belongs to the same concept as the image processing method in the foregoing embodiments. Any method provided in the image processing method embodiments can be run on the image processing device; the specific implementation process is detailed in the image processing method embodiments and will not be repeated here.
It should be noted that, for the image processing method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the process of implementing the method can be completed by a computer program controlling the relevant hardware. The computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; during execution, it may include the process of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), or the like.
For the image processing device of the embodiments of the present application, the functional modules may be integrated in one processing chip, each module may exist alone physically, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image processing method, device, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the invention; the description of the above embodiments is only intended to help understand the method and its core concept. Meanwhile, those skilled in the art will make changes to the specific implementation and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as limiting the invention.
Claims (10)
1. An image processing method, applied to a terminal, wherein the terminal includes at least a first camera module and a second camera module, the method comprising:
obtaining a first group of images and a second group of images, the first group of images being collected by the first camera module and the second group of images being collected by the second camera module;
determining a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images;
obtaining depth-of-field information according to the first image and the second image;
performing noise reduction processing on the first image according to the first group of images, to obtain a target image;
performing preset processing on the target image according to the depth-of-field information;
wherein the first group of images includes at least two frames;
and the step of determining the first image from the first group of images comprises:
obtaining the clarity of each frame in the first group of images, and determining the frame with the highest clarity as the first image;
or, if each frame of the first group of images contains a face, obtaining the numerical value of a preset parameter for each frame in the first group of images, the value of the preset parameter indicating the eye size of the face in the image, and determining the frame with the largest preset parameter value as the first image;
or, obtaining the clarity of each frame in the first group of images; if each frame of the first group of images contains a face, obtaining the numerical value of the preset parameter for each frame, the value indicating the eye size of the face in the image; and determining the first image from the first group of images according to the clarity and the preset parameter value of each frame.
2. The image processing method according to claim 1, wherein the step of performing preset processing on the target image comprises:
performing background blurring processing on the target image.
3. The image processing method according to claim 1, wherein the step of determining the first image from the first group of images according to the clarity and preset parameter value of each frame comprises:
obtaining a first weight corresponding to clarity and a second weight corresponding to the preset parameter;
weighting the clarity of each frame according to the first weight to obtain the weighted clarity of each frame, and weighting the preset parameter value of each frame according to the second weight to obtain the weighted preset parameter value of each frame;
obtaining the sum of the weighted clarity and the weighted preset parameter value of each frame;
determining the image in the first group of images with the largest sum as the first image.
4. The image processing method according to claim 3, wherein the step of weighting the clarity of each frame according to the first weight to obtain the weighted clarity of each frame, and weighting the preset parameter value of each frame according to the second weight to obtain the weighted preset parameter value of each frame, comprises:
normalizing the clarity and the preset parameter value of each frame respectively, to obtain the normalized clarity and the normalized preset parameter value of each frame;
weighting the normalized clarity of each frame according to the first weight, to obtain the weighted clarity of each frame;
weighting the normalized preset parameter value of each frame according to the second weight, to obtain the weighted preset parameter value of each frame.
5. The image processing method according to claim 1 or 2, wherein:
the step of obtaining depth-of-field information according to the first image and the second image and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image are executed in parallel.
6. The image processing method according to claim 1, wherein the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image comprises:
performing noise reduction processing on the first image according to the first group of images, to obtain a noise-reduced image;
performing tone mapping processing on the noise-reduced image, to obtain the target image.
7. The image processing method according to claim 1, wherein the first group of images includes at least two frames;
and the step of performing noise reduction processing on the first image according to the first group of images to obtain the target image comprises:
aligning all images in the first group of images;
in the aligned images, determining multiple groups of mutually aligned pixels and, in each group of mutually aligned pixels, the target pixel belonging to the first image;
obtaining the pixel value of each pixel in each group of mutually aligned pixels;
obtaining, from these pixel values, the mean pixel value of each group of mutually aligned pixels;
adjusting the pixel value of the target pixel in the first image to the mean pixel value, to obtain the target image.
8. An image processing device, applied to a terminal, wherein the terminal includes at least a first camera module and a second camera module, the device comprising:
a first obtaining module, configured to obtain a first group of images and a second group of images, the first group of images being collected by the first camera module and the second group of images being collected by the second camera module;
a determining module, configured to determine a first image from the first group of images and a second image from the second group of images, the first image and the second image being synchronously collected images;
a second obtaining module, configured to obtain depth-of-field information according to the first image and the second image;
a noise reduction module, configured to perform noise reduction processing on the first image according to the first group of images, to obtain a target image;
a processing module, configured to perform preset processing on the target image according to the depth-of-field information;
wherein the first group of images includes at least two frames, and the determining module is configured to:
obtain the clarity of each frame in the first group of images, and determine the frame with the highest clarity as the first image;
or, if each frame of the first group of images contains a face, obtain the numerical value of a preset parameter for each frame in the first group of images, the value of the preset parameter indicating the eye size of the face in the image, and determine the frame with the largest preset parameter value as the first image;
or, obtain the clarity of each frame in the first group of images; if each frame of the first group of images contains a face, obtain the numerical value of the preset parameter for each frame, the value indicating the eye size of the face in the image; and determine the first image from the first group of images according to the clarity and the preset parameter value of each frame.
9. A storage medium having a computer program stored thereon, wherein, when the computer program is executed on a computer, the computer is caused to perform the method according to any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, wherein the processor calls the computer program stored in the memory to perform the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097896.8A CN108282616B (en) | 2018-01-31 | 2018-01-31 | Processing method, device, storage medium and the electronic equipment of image |
PCT/CN2018/122872 WO2019148997A1 (en) | 2018-01-31 | 2018-12-21 | Image processing method and device, storage medium, and electronic apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097896.8A CN108282616B (en) | 2018-01-31 | 2018-01-31 | Processing method, device, storage medium and the electronic equipment of image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108282616A CN108282616A (en) | 2018-07-13 |
CN108282616B true CN108282616B (en) | 2019-10-25 |
Family
ID=62807210
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810097896.8A Active CN108282616B (en) | 2018-01-31 | 2018-01-31 | Processing method, device, storage medium and the electronic equipment of image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108282616B (en) |
WO (1) | WO2019148997A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108282616B (en) * | 2018-01-31 | 2019-10-25 | Oppo广东移动通信有限公司 | Processing method, device, storage medium and the electronic equipment of image |
CN109862262A (en) * | 2019-01-02 | 2019-06-07 | 上海闻泰电子科技有限公司 | Image weakening method, device, terminal and storage medium |
CN116701675A (en) * | 2022-02-25 | 2023-09-05 | 荣耀终端有限公司 | Image data processing method and electronic equipment |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070189750A1 (en) * | 2006-02-16 | 2007-08-16 | Sony Corporation | Method of and apparatus for simultaneously capturing and generating multiple blurred images |
CN104780313A (en) * | 2015-03-26 | 2015-07-15 | 广东欧珀移动通信有限公司 | Image processing method and mobile terminal |
CN106878605B (en) * | 2015-12-10 | 2021-01-29 | 北京奇虎科技有限公司 | Image generation method based on electronic equipment and electronic equipment |
CN105827964B (en) * | 2016-03-24 | 2019-05-17 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN107613199B (en) * | 2016-06-02 | 2020-03-13 | Oppo广东移动通信有限公司 | Blurred photo generation method and device and mobile terminal |
CN107635093A (en) * | 2017-09-18 | 2018-01-26 | 维沃移动通信有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
CN108024054B (en) * | 2017-11-01 | 2021-07-13 | Oppo广东移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN108055452B (en) * | 2017-11-01 | 2020-09-18 | Oppo广东移动通信有限公司 | Image processing method, device and equipment |
CN108282616B (en) * | 2018-01-31 | 2019-10-25 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
2018
- 2018-01-31 CN CN201810097896.8A patent/CN108282616B/en active Active
- 2018-12-21 WO PCT/CN2018/122872 patent/WO2019148997A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2019148997A1 (en) | 2019-08-08 |
CN108282616A (en) | 2018-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110198417A (en) | Image processing method, device, storage medium and electronic equipment | |
WO2019085618A1 (en) | Image-processing method, apparatus and device | |
WO2019148978A1 (en) | Image processing method and apparatus, storage medium and electronic device | |
CN108391059A (en) | Image processing method and apparatus | |
WO2019105305A1 (en) | Image brightness processing method, computer readable storage medium and electronic device | |
CN107948514B (en) | Image blurring processing method, device, mobile device and computer storage medium | |
CN109993722B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN110213502A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108282616B (en) | Image processing method, device, storage medium and electronic equipment | |
CN108024054A (en) | Image processing method, device and equipment | |
CN110445986A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108520493A (en) | Image replacement processing method, device, storage medium and electronic equipment | |
CN108401110B (en) | Image acquisition method and device, storage medium and electronic equipment | |
CN110198418A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110012227B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
WO2019029573A1 (en) | Image blurring method, computer-readable storage medium and computer device | |
CN110266954A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108574803B (en) | Image selection method and device, storage medium and electronic equipment | |
CN110290325A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110198419A (en) | Image processing method, device, storage medium and electronic equipment | |
CN107948511B (en) | Image brightness processing method, device, storage medium and image brightness processing equipment | |
CN108259769B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN108520036B (en) | Image selection method and device, storage medium and electronic equipment | |
CN110278386A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108307114A (en) | Image processing method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province; Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province; Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
GR01 | Patent grant | ||