CN108574803A - Image selection method and apparatus, storage medium, and electronic device - Google Patents
Image selection method and apparatus, storage medium, and electronic device
- Publication number
- CN108574803A CN108574803A CN201810277025.4A CN201810277025A CN108574803A CN 108574803 A CN108574803 A CN 108574803A CN 201810277025 A CN201810277025 A CN 201810277025A CN 108574803 A CN108574803 A CN 108574803A
- Authority
- CN
- China
- Prior art keywords
- image
- user
- pending
- target
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
This application discloses an image selection method and apparatus, a storage medium, and an electronic device. The method includes: obtaining multiple frames of pending images that contain faces; performing expression recognition on the facial image of each user in the pending images to obtain an expression recognition result for each user; and selecting a base image from the pending images according to each user's expression recognition result. This embodiment can improve a terminal's flexibility when selecting, from multiple frames, the image to be processed.
Description
Technical field
This application belongs to the field of image technology, and in particular relates to an image selection method and apparatus, a storage medium, and an electronic device.
Background art
Taking photos is a basic function of a terminal. With continuous progress in hardware such as camera modules and in image processing algorithms, the shooting capability of terminals keeps growing, and users take photos with their terminals more and more frequently, for example to shoot portraits. In the related art, a terminal can capture multiple frames of images and then select from them the image to be processed. However, when making that selection from multiple frames, the terminal's flexibility is poor.
Summary of the invention
The embodiments of this application provide an image selection method and apparatus, a storage medium, and an electronic device, which can improve a terminal's flexibility when selecting, from multiple frames, the image to be processed.
The embodiments of this application provide an image selection method, including:
obtaining multiple frames of pending images that contain faces;
performing expression recognition on the facial image of each user in the pending images to obtain an expression recognition result for each user;
selecting a base image from the pending images according to each user's expression recognition result.
The embodiments of this application provide an image selection apparatus, including:
an acquisition module for obtaining multiple frames of pending images that contain faces;
a recognition module for performing expression recognition on the facial image of each user in the pending images to obtain an expression recognition result for each user;
a selection module for selecting a base image from the pending images according to each user's expression recognition result.
The embodiments of this application provide a storage medium storing a computer program which, when executed on a computer, causes the computer to execute the steps of the image selection method provided by the embodiments of this application.
The embodiments of this application also provide an electronic device including a memory and a processor, the processor executing the steps of the image selection method provided by the embodiments of this application by invoking the computer program stored in the memory.
In this embodiment, when a base image needs to be selected from multiple frames of pending images, the terminal can first perform expression recognition on the facial image of each user in the pending images, and then determine the base image from the pending images according to each user's expression recognition result. That is, this embodiment selects the base image from the pending images according to the users' expressions. It can therefore improve the terminal's flexibility when selecting, from multiple frames, the image to be processed, and can also improve the imaging quality of the photos the terminal shoots.
Description of the drawings
The technical solution and advantages of the present invention will become apparent from the following detailed description of specific embodiments, taken in conjunction with the accompanying drawings.
Fig. 1 is a flow diagram of the image selection method provided by the embodiments of this application.
Fig. 2 is another flow diagram of the image selection method provided by the embodiments of this application.
Fig. 3 to Fig. 5 are schematic diagrams of scenarios of the image selection method provided by the embodiments of this application.
Fig. 6 is a structural diagram of the image selection apparatus provided by the embodiments of this application.
Fig. 7 is another structural diagram of the image selection apparatus provided by the embodiments of this application.
Fig. 8 is a structural diagram of the mobile terminal provided by the embodiments of this application.
Fig. 9 is another structural diagram of the mobile terminal provided by the embodiments of this application.
Detailed description of embodiments
Referring to the drawings, in which like reference numerals represent like elements, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the invention and should not be regarded as limiting other specific embodiments not detailed herein.
It will be understood that the executing body of the embodiments of this application may be a terminal device such as a smartphone or a tablet computer.
Referring to Fig. 1, a flow diagram of the image selection method provided by the embodiments of this application, the flow may include the following steps.
In step S101, multiple frames of pending images containing faces are obtained.
Taking photos is a basic function of a terminal. With continuous progress in camera hardware and image processing algorithms, the shooting capability of terminals keeps growing, and users take photos more and more frequently, for example to shoot portraits. However, in the related art, the imaging quality of the images a terminal captures is often poor.
In this step, the terminal first obtains multiple frames of pending images that contain faces. For example, the terminal obtains six frames of pending images A, B, C, D, E, and F.
In step S102, expression recognition is performed on the facial image of each user in the pending images, obtaining an expression recognition result for each user.
For example, after obtaining the six frames A, B, C, D, E, and F, the terminal can perform expression recognition on the facial image of each user in these frames to obtain each user's expression recognition result.
If the six frames are single-person images of the same user 1 captured by the terminal in quick succession, the terminal can perform expression recognition on user 1's facial image in each pending frame to obtain user 1's expression recognition result.
If instead the six frames are group photos captured by the terminal in quick succession, say of users 1, 2, 3, and 4, the terminal can first perform expression recognition on user 1's facial image in the six frames, and then in turn on the facial images of users 2, 3, and 4, thereby obtaining the expression recognition results of all four users in the six frames.
In one embodiment, the terminal can recognize the expression in a facial image as follows: first, determine expression key points in the facial image, such as the eyes, eyebrows, mouth, and cheeks; then, extract local patches from the facial image around these key points; finally, feed the extracted patches into a trained expression-recognition model for classification.
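The keypoint-then-patch-then-classifier pipeline just described can be sketched as follows. This is an illustrative sketch only: the 8×8 patch size, the keypoint names, and the brightness-threshold "classifier" standing in for the trained expression-recognition model are assumptions, not details from the patent.

```python
import numpy as np

def extract_patches(face_img, keypoints, size=8):
    """Crop a small square patch around each expression keypoint
    (eyes, eyebrows, mouth, cheeks) from the face image."""
    patches = {}
    for name, (x, y) in keypoints.items():
        half = size // 2
        patches[name] = face_img[max(0, y - half):y + half,
                                 max(0, x - half):x + half]
    return patches

def recognize_expression(patches):
    """Stand-in for the trained model: threshold the mean brightness
    of the mouth patch. A real system would run all patches through
    a trained classifier instead."""
    mouth = patches["mouth"]
    return "smile" if mouth.mean() > 128 else "neutral"

face = np.zeros((64, 64), dtype=np.uint8)
face[40:48, 24:40] = 255          # bright "mouth" region in the toy image
keypoints = {"left_eye": (20, 24), "right_eye": (44, 24), "mouth": (32, 44)}
patches = extract_patches(face, keypoints)
print(recognize_expression(patches))  # smile
```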
In step S103, a base image is selected from the pending images according to each user's expression recognition result.
For example, after obtaining the expression recognition result of every user in the pending images, the terminal can select one of the pending frames as the base image according to those results.
It will be understood that in this embodiment, when a base image needs to be selected from multiple frames of pending images, the terminal can first perform expression recognition on the facial image of each user in the pending images, and then determine the base image according to each user's expression recognition result. That is, the base image is selected from the pending images according to the users' expressions, which improves the terminal's flexibility when selecting, from multiple frames, the image to be processed.
Referring to Fig. 2, another flow diagram of the image selection method provided by the embodiments of this application, the flow may include the following steps.
In step S201, the terminal obtains multiple frames of pending images containing faces.
In one embodiment, the images captured by the terminal's camera are saved into a fixed-length buffer queue, so that when images are needed the terminal can fetch them from the buffer.
For example, in this embodiment, a user shoots a portrait with the terminal's camera. The most recently captured frames are kept in the buffer queue; after the user presses the shutter button, the terminal obtains from the buffer multiple frames that contain faces. These frames are the pending images.
Suppose the terminal obtains six frames of pending images H, I, J, K, L, and M, all of which are group photos of users 1, 2, 3, and 4.
In step S202, the terminal performs expression recognition on the facial image of each user in the pending images, obtaining an expression recognition result for each user.
For example, after obtaining the group photos H, I, J, K, L, and M of users 1, 2, 3, and 4, the terminal can perform expression recognition on the facial image of each user in the six frames: first on user 1's facial image in the six frames, then in turn on the facial images of users 2, 3, and 4, thereby obtaining the expression recognition results of all four users in H, I, J, K, L, and M.
After obtaining the expression recognition result of each user in the pending images, the terminal can judge, from those results, whether each user's expression changes across the pending images.
If the expression recognition results show that no user's expression changes across the pending images, the flow proceeds to step S203.
If the expression recognition results show that some user's expression does change across the pending images, the flow proceeds to step S206.
In step S203, if it is determined from the expression recognition results that no user's expression changes across the pending images, the terminal obtains each user's eye value in each pending image, the eye value being a number representing eye size.
For example, based on user 1's expression recognition result, the terminal judges that user 1's expression does not change across the six pending frames H, I, J, K, L, and M, i.e. user 1's expression stays consistent, or varies only minimally. Likewise, the terminal judges from their expression recognition results that the expressions of users 2, 3, and 4 do not change across the six frames. In this case, the terminal can obtain each user's eye value in each pending image. The eye value is a number representing eye size; for example, it may represent the overall size of the eyes, or the vertical height of the eye opening.
In step S204, according to each user's eye value in each pending image, the terminal determines each user's target facial image, the target facial image being the image corresponding to the maximum of that user's eye values.
For example, user 1's eye values in the pending images H, I, J, K, L, and M are 70, 72, 75, 80, 78, and 79 respectively; user 2's are 80, 80, 81, 82, 85, and 82; user 3's are 80, 82, 82, 50, 30, and 0; and user 4's are 82, 83, 84, 88, 85, and 81. From user 3's values it can be seen that user 3's eyes shrink steadily from frame J onward, which can be taken as a blink; in frame M, user 3's eyes are closed (eye value 0).
After obtaining each user's eye value in each pending frame, the terminal can determine each user's target facial image from the pending images, the target facial image being the facial image corresponding to the maximum of that user's eye values.
For example, user 1's eye value of 80 in pending image K is the maximum of user 1's eye values across the pending images, so the terminal can determine user 1's facial image in K as user 1's target facial image. Similarly, the terminal can determine user 2's facial image in L as user 2's target facial image, user 3's facial image in I or J as user 3's target facial image, and user 4's facial image in K as user 4's target facial image.
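Using the eye values from the example above, picking each user's target facial image is a simple per-user argmax over frames. The dictionary layout and `u1`–`u4` names are illustrative; note that for user 3 the maximum 82 occurs in both I and J, and `index` resolves the tie to the earlier frame.

```python
# per-frame eye-size values for frames H..M (from the example above)
eye_values = {
    "u1": [70, 72, 75, 80, 78, 79],
    "u2": [80, 80, 81, 82, 85, 82],
    "u3": [80, 82, 82, 50, 30, 0],
    "u4": [82, 83, 84, 88, 85, 81],
}
frames = ["H", "I", "J", "K", "L", "M"]

def target_frame(values):
    # frame holding the user's widest-open eyes; ties go to the earliest frame
    return frames[values.index(max(values))]

targets = {user: target_frame(v) for user, v in eye_values.items()}
print(targets)  # {'u1': 'K', 'u2': 'L', 'u3': 'I', 'u4': 'K'}
```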
In step S205, the terminal selects as the base image the pending image containing the largest number of target facial images.
For example, after determining each user's target facial image, the terminal can select as the base image the pending image that contains the most target facial images. Pending images I and J each contain one target facial image (user 3's), pending image K contains two (users 1's and 4's), pending image L contains one (user 2's), and the remaining pending images contain none. Therefore, the terminal can select pending image K as the base image.
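Counting target facial images per frame and taking the most frequent frame reproduces the selection of K as the base image. The mapping below reuses the illustrative output of the argmax step; the data layout is an assumption.

```python
from collections import Counter

# frame chosen as each user's target facial image (from the eye-value step)
targets = {"u1": "K", "u2": "L", "u3": "I", "u4": "K"}

# count how many target facial images each frame contains
counts = Counter(targets.values())
base = counts.most_common(1)[0][0]
print(base)  # K  (two target faces: users 1 and 4)
```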
In step S206, if it is determined from the expression recognition results that the pending images contain the facial image of a user whose expression changes, the terminal determines that user as a target user.
In step S207, for each target user, the terminal determines the facial image whose expression meets a preset condition as that target user's target facial image.
In step S208, for each non-target user, the terminal obtains that user's eye value in each pending image, the eye value being a number representing eye size, and determines the image corresponding to the maximum eye value as the non-target user's target facial image.
In step S209, the terminal selects as the base image the pending image containing the largest number of target facial images.
For example, steps S206 to S209 may proceed as follows.
From the expression recognition results, the terminal determines that the pending images contain the facial image of a user whose expression changes, i.e. it judges that some user's expression changes across the pending images. Suppose the pending images are O, P, Q, R, S, and T, and from the expression recognition results the terminal determines that user 3's expression in the six frames goes from not smiling to smiling. User 3's eye values in O, P, Q, R, S, and T are 82, 80, 60, 50, 30, and 0 respectively. That is, from O to T, user 3's eyes become progressively smaller while user 3's expression goes from no smile to a smile. User 3 is a girl whose eyes crinkle into crescents when she smiles, and it is precisely this crinkling that makes her eye values shrink. In this case, the terminal can determine user 3, whose expression changes, as a target user.
For a target user, the terminal can determine the facial image whose expression meets a preset condition as that user's target facial image. For example, the preset condition may be that the user's smile is at its fullest.
Since the terminal detects that some of user 3's facial images contain a smile, it can determine, among those smiling facial images, the one in which user 3's smile is fullest as user 3's target facial image. For example, if the terminal detects that user 3's smile is fullest in pending image T, it can determine user 3's facial image in T as user 3's target facial image.
For each non-target user (i.e. a user whose expression does not change), the terminal can obtain that user's eye value in each pending image, the eye value being a number representing eye size, and then determine the facial image corresponding to the maximum of those eye values as the non-target user's target facial image.
For example, the expressions of users 1, 2, and 4 do not change across the pending images, or change only minimally, so the terminal determines them as non-target users. User 1's eye values in O, P, Q, R, S, and T are 70, 71, 72, 72, 72, and 73 respectively; user 2's are 80, 81, 81, 83, 82, and 82; and user 4's are 82, 83, 83, 84, 85, and 84.
The terminal can therefore determine user 1's facial image in pending image T as user 1's target facial image, user 2's facial image in R as user 2's target facial image, and user 4's facial image in S as user 4's target facial image.
Then, the terminal can select as the base image the pending image containing the largest number of target facial images. Pending image T contains two target facial images (users 1's and 3's), while pending images R and S each contain one (user 2's and user 4's respectively). Therefore, the terminal can determine pending image T as the base image.
In step S210, the terminal determines facial images to be replaced from the base image, a facial image to be replaced being the facial image of a non-target user.
In step S211, from the pending images, the terminal obtains the target facial image used to replace each facial image to be replaced, each facial image to be replaced and its corresponding target facial image being facial images of the same user.
In step S212, the terminal performs image replacement on each facial image to be replaced using the corresponding target facial image, obtaining a base image that has undergone image replacement.
For example, steps S210 to S212 may proceed as follows.
After determining the base image, the terminal can determine the facial images to be replaced from it, namely the non-target facial images in the base image.
For example, in base image T, the facial images of users 2 and 4 are not their respective target facial images, so the terminal can determine user 2's and user 4's facial images in T as facial images to be replaced.
Then, from the pending images other than the base image, the terminal can obtain the target facial image used to replace each facial image to be replaced. It will be understood that each facial image to be replaced and its corresponding target facial image are facial images of the same user.
For example, user 2's facial image in pending image R is user 2's target facial image, and user 4's facial image in pending image S is user 4's target facial image, so the terminal can obtain user 2's facial image from R and user 4's facial image from S.
After obtaining the target facial image for each facial image to be replaced, the terminal can perform image replacement on each facial image to be replaced using the corresponding target facial image, obtaining a base image that has undergone image replacement.
For example, the terminal can replace user 2's facial image in base image T with user 2's target facial image from pending image R, and replace user 4's facial image in T with user 4's target facial image from pending image S, obtaining a base image T that has undergone image replacement.
It will be understood that in the base image T obtained after image replacement, each user's facial image is that user's target facial image. That is, after the replacement, the facial images of users 1, 2, and 4 in T are the facial images corresponding to the maxima of their respective eye values, and user 3's facial image is the one in which user 3's smile is fullest.
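A minimal sketch of the replacement step: copy the region holding the target facial image from the donor frame into the base frame. A real system would align the faces and blend the region edges; the rectangular box, the toy arrays, and the grayscale representation are all illustrative assumptions.

```python
import numpy as np

def replace_face(base, donor, box):
    """Paste the face region `box` (y0, y1, x0, x1) from the donor
    frame over the same region of the base frame."""
    y0, y1, x0, x1 = box
    out = base.copy()
    out[y0:y1, x0:x1] = donor[y0:y1, x0:x1]
    return out

base = np.zeros((4, 4), dtype=np.uint8)        # stand-in for base image T
donor = np.full((4, 4), 9, dtype=np.uint8)     # stand-in for donor frame R
patched = replace_face(base, donor, (1, 3, 1, 3))
print(int(patched.sum()))  # 4 pixels replaced with value 9 -> 36
```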
In one embodiment, the terminal can also quantify each user's expression as a number. For example, a positive expression is assigned a positive value and a negative expression a negative value; for positive expressions, larger values are assigned the more pronounced the expression, so a faint smile receives a smaller positive value while a broad smile receives a larger one. In this way, every user's expression is represented by an expression value.
The terminal can then combine the expression value and the eye value to determine a user's target facial image from the pending images. For example, for the facial images of a user whose expression changes, the terminal can assign the expression value a larger weight and the eye value a smaller one, say 85% for the expression value and 15% for the eye value.
Taking user 3's facial images as an example, user 3's eye values in the pending images O, P, Q, R, S, and T are 82, 80, 60, 50, 30, and 0 respectively, and user 3's expression values in the same frames are 10, 20, 30, 40, 50, and 60. User 3's combined score in pending image O is then 20.8 (82 × 15% + 10 × 85%); similarly, the combined scores in P, Q, R, S, and T are 29, 34.5, 41.5, 47, and 51. Since the combined score in pending image T is the largest, the terminal can determine user 3's facial image in T as user 3's target facial image. The terminal then again determines the pending image containing the largest number of target facial images as the base image.
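The weighted combination can be checked numerically; the 15%/85% weights and the per-frame values below are those of the example above, and the list layout is illustrative.

```python
eye = [82, 80, 60, 50, 30, 0]     # user 3's eye values, frames O..T
expr = [10, 20, 30, 40, 50, 60]   # user 3's expression values, same frames
frames = ["O", "P", "Q", "R", "S", "T"]

# expression changed, so expression value gets the larger weight
scores = [0.15 * e + 0.85 * x for e, x in zip(eye, expr)]
print([round(s, 1) for s in scores])      # [20.8, 29.0, 34.5, 41.5, 47.0, 51.0]
print(frames[scores.index(max(scores))])  # T
```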
In one embodiment, before the step of obtaining multiple frames of pending images containing faces, the method may further include the following step: when capturing images containing faces, the terminal determines a target frame count according to at least two captured frames.
The step of obtaining multiple frames of pending images containing faces may then include: from the captured frames, the terminal obtains a number of pending images equal to the target frame count.
For example, after the camera preview interface is entered, if the terminal detects that it is capturing images containing faces, it can determine a target frame count according to at least two captured frames containing faces. In one embodiment, the target frame count can be greater than or equal to 2.
For example, when the terminal has captured four frames containing faces, it can detect whether the position of the face shifts across the four frames. If there is no shift, or only a very small one, the facial image can be considered stable, i.e. the user has not shaken or turned their head to any large extent. If there is a shift, the facial image can be considered unstable, i.e. the user has shaken or turned their head by a larger amplitude.
In one embodiment, whether the face in the images shifts can be detected as follows. After obtaining the four captured frames, the terminal can construct a coordinate system and place each frame into it in the same way. The terminal can then obtain the coordinates of the facial feature points of each frame in this coordinate system, and compare whether the coordinates of the same facial feature point are identical across the frames. If they are identical, the facial image can be considered not to have shifted; if they differ, it can be considered to have shifted. When a shift is detected, the terminal can obtain the specific shift value: if it lies within a preset range, the shift of the facial image can be considered small; if it lies outside the preset range, the shift can be considered large.
In one embodiment, if the facial image shifts, the target frame count can be set to 4 frames; if it does not shift, the target frame count can be set to 6 or 8 frames.
After the user presses the shutter button, the terminal can obtain, from the most recently captured frames, a number of pending images equal to the target frame count.
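The shift check and the resulting target frame count can be sketched as below. The 5-pixel threshold, the single "nose" keypoint, and the choice of 6 (rather than 8) frames for the stable case are illustrative assumptions; the patent only says the shift value is compared against a preset range.

```python
def max_displacement(frames_keypoints):
    """Largest movement of any facial feature point across the frames,
    measured as Euclidean distance from its position in the first frame."""
    base = frames_keypoints[0]
    worst = 0.0
    for kp in frames_keypoints[1:]:
        for name, (x, y) in kp.items():
            bx, by = base[name]
            worst = max(worst, ((x - bx) ** 2 + (y - by) ** 2) ** 0.5)
    return worst

def target_frame_count(frames_keypoints, threshold=5.0):
    # fewer frames when the face moved (frames combine poorly),
    # more when it held still; 8 could be chosen in bright light
    return 4 if max_displacement(frames_keypoints) > threshold else 6

still = [{"nose": (32, 32)}, {"nose": (33, 32)}, {"nose": (32, 33)}]
moving = [{"nose": (32, 32)}, {"nose": (45, 40)}]
print(target_frame_count(still), target_frame_count(moving))  # 6 4
```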
In one embodiment, after the step of obtaining the base image that has undergone image replacement, the method may further include the following step: according to the pending images, the terminal performs image noise reduction on the base image that has undergone image replacement.
For example, after obtaining the base image that has undergone image replacement, the terminal can obtain a group of consecutively captured frames that includes the base image, and perform multi-frame noise reduction on the replaced base image according to this group of frames.
For example, since the base image is image T, the terminal can obtain the pending images Q, R, and S, and perform multi-frame noise reduction on the replaced base image T according to Q, R, and S.
In one embodiment, when performing multi-frame noise reduction, the terminal can first align images Q, R, S, and T and obtain the pixel values of each group of aligned pixels. If the pixel values within a group of aligned pixels differ little, the terminal can compute the group's mean pixel value and replace the value of the corresponding pixel in image T with that mean. If the pixel values within a group differ greatly, the pixel value in image T is left unadjusted.
For example, pixel P1 in image Q, pixel P2 in image R, pixel P3 in image S, and pixel P4 in image T form one group of mutually aligned pixels, with pixel values 101, 102, 103, and 104 respectively. The mean pixel value of the group is then 102.5, so the terminal can adjust the value of pixel P4 in image T from 104 to 102.5, thereby denoising that pixel. If instead the pixel values of P1, P2, P3, and P4 are 80, 83, 90, and 103, their values differ greatly, so the terminal does not adjust P4, whose value remains 103 unchanged.
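The per-pixel rule reproduces the two numeric examples above; the spread threshold of 5 is an illustrative stand-in for "the pixel values differ little", and the convention that the last value belongs to the base frame is an assumption.

```python
def denoise_pixel(values, max_spread=5):
    """Average one group of aligned pixels across frames, but keep the
    base frame's value when the frames disagree too much (likely motion
    or real detail rather than noise). The last value is the base frame's."""
    base = values[-1]
    if max(values) - min(values) <= max_spread:
        return sum(values) / len(values)
    return base

print(denoise_pixel([101, 102, 103, 104]))  # 102.5 (values agree: averaged)
print(denoise_pixel([80, 83, 90, 103]))     # 103   (values differ: kept)
```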
Fig. 3 to Fig. 5 is please referred to, Fig. 3 to Fig. 5 is the scene signal of the choosing method of image provided by the embodiments of the present application
Figure.
In this embodiment, after the camera's preview interface is entered, if the terminal detects that it is capturing facial images, the terminal may collect the current environment parameter and determine a target frame count according to at least two captured facial image frames. The environment parameter may be the ambient light brightness.
If, according to the at least two captured facial image frames, the terminal determines that the face in the image has not shifted (or has shifted very little) and the environment is currently bright, the terminal may set the target frame count to 8 frames. If the face has not shifted (or has shifted very little) and the environment is currently dim, the terminal may set the target frame count to 6 frames. If the terminal determines that the face in the image has shifted, it may set the target frame count to 4 frames.
The terminal may save captured images into a buffer queue. The buffer queue may be a fixed-length queue; for example, it may hold the 10 most recently captured frames.
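The frame-count decision and the fixed-length buffer queue can be sketched as below, using the embodiment's example values (8, 6, or 4 frames; a 10-frame queue). The function and variable names are hypothetical, not the patent's actual implementation.

```python
from collections import deque

def determine_target_frames(face_displaced: bool, bright_environment: bool) -> int:
    """Map the detected conditions to a capture count, per the embodiment:
    stable face + bright -> 8, stable face + dim -> 6, displaced face -> 4."""
    if face_displaced:
        return 4
    return 8 if bright_environment else 6

# A fixed-length queue keeps only the 10 newest captured frames.
buffer_queue = deque(maxlen=10)
for frame_id in range(15):       # 15 frames arrive; the first 5 are evicted
    buffer_queue.append(frame_id)

print(determine_target_frames(False, False))  # 6
print(list(buffer_queue))                     # [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
```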
For example, five users — Jia, Yi, Bing, Ding, and Wu — are traveling outdoors and prepare to take photos at a scenic spot. First, Jia uses the terminal to photograph Yi, as shown in Fig. 3. For example, after the camera's preview interface is entered, the terminal captures one frame every 50 milliseconds according to the currently collected environment parameters. Before Jia presses the camera's shutter button, the terminal may first obtain the 4 most recently captured frames from the buffer queue; it can be understood that these 4 frames contain Yi's facial image. The terminal may then detect whether the position of Yi's facial image in the picture shifts across these 4 frames. If there is no shift, or the shift is very small, Yi's facial image may be considered stable, i.e., Yi has not shaken or rotated the head to any large extent. If a shift occurs, Yi's facial image may be considered unstable, i.e., Yi has shaken or rotated the head with a large amplitude. For example, in this embodiment, the terminal detects that the position of Yi's facial image in the picture does not shift across the above 4 frames.
Next, the terminal may obtain the current ambient light brightness and, according to it, judge whether the environment is currently dim. For example, the terminal judges that it is currently in a dim-light environment.
The terminal may then determine a target frame count according to the information obtained above — the position of Yi's facial image in the picture has not shifted, and the environment is currently dim. For example, the determined target frame count is 6 frames.
Thereafter, once Jia presses the shutter button, the terminal may obtain 6 captured frames containing Yi. For example, the terminal may obtain from the buffer queue the 6 most recently captured frames containing Yi; in chronological order these 6 frames are A, B, C, D, E, and F. It can be understood that frames A through F are the pending images obtained by the terminal.
After obtaining the 6 frames, the terminal may perform facial expression recognition on them and detect the eye value of the facial image in each frame, the eye value being a numerical measure of eye size. For example, Yi's eye values in images A, B, C, D, E, and F are 80, 82, 83, 84, 85, and 84 respectively. For example, the terminal detects that Yi's expression does not change across these 6 frames.
Having determined that Yi's expression does not change in the pending images, and since these 6 frames are individual shots of Yi, the terminal may determine the frame with the largest eye value as the base image; that is, image E is determined as the base image.
After image E is determined as the base image, the terminal may perform multi-frame noise reduction on image E according to images C, D, and F. After the noise reduction, the terminal may store the denoised image E in the photo album as a photo. It can be understood that image E is the captured photo in which Yi's eyes are at their widest.
Later, Wu takes a group photo of Jia, Yi, Bing, and Ding. For example, after the camera's preview interface is entered, the terminal detects that the positions of the facial images of Jia, Yi, Bing, and Ding in the picture do not shift across 4 captured frames, and that the environment is currently dim. Based on this, the terminal determines that the target frame count is 6 frames.
Thereafter, once Wu presses the shutter button, the terminal may obtain 6 captured frames containing Jia, Yi, Bing, and Ding, as shown in Fig. 4. For example, the terminal may obtain from the buffer queue the 6 most recently captured frames containing the four users; in chronological order these 6 frames are O, P, Q, R, S, and T. It can be understood that these 6 frames are the pending images.
The terminal may then perform expression recognition on the facial image of each user in these 6 pending frames to obtain each user's expression recognition result. After obtaining the expression recognition result of each user in the pending images, the terminal may judge, according to each user's result, whether that user's expression changes across the pending images.
For example, according to the expression recognition results, the terminal determines that Bing's expression in the 6 frames changes from no smile to smiling. Bing's eye values in pending images O, P, Q, R, S, and T are 82, 80, 60, 50, 30, and 0 respectively. That is, from image O to image T, Bing's eyes gradually narrow while Bing's expression goes from not smiling to smiling. In this case, the terminal may determine Bing, whose expression has changed, as a target user. Bing's facial images from image O to image T are shown in Fig. 5.
Since the terminal detects that Bing's facial images include an image with a smiling expression, the terminal may determine the smiling facial image in which Bing's smile is brightest as Bing's target facial image. For example, the terminal detects that Bing's smile is brightest in pending image T, so the terminal may determine Bing's facial image in image T as Bing's target facial image.
In addition, the expressions of Jia, Yi, and Ding in the pending images do not change, or change only minimally, so the terminal determines Jia, Yi, and Ding as non-target users. The terminal obtains Jia's eye values in pending images O, P, Q, R, S, and T as 70, 71, 72, 72, 72, and 73; Yi's eye values in pending images O, P, Q, R, S, and T as 80, 81, 81, 83, 82, and 82; and Ding's eye values in pending images O, P, Q, R, S, and T as 82, 83, 83, 84, 85, and 84.
Accordingly, the terminal may determine Jia's facial image in pending image T as Jia's target facial image, Yi's facial image in pending image R as Yi's target facial image, and Ding's facial image in pending image S as Ding's target facial image.
The terminal may then choose the pending image containing the largest number of target facial images as the base image. For example, pending image T contains two target facial images (those of Jia and Bing), while pending image R contains one target facial image (Yi's) and pending image S contains one target facial image (Ding's). Therefore, the terminal may determine pending image T as the base image.
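The base-image choice for the group photo — count each frame's target facial images and keep the frame with the most — can be sketched as follows; the function and variable names are hypothetical.

```python
from collections import Counter

def choose_base_by_target_faces(target_face_frame):
    """`target_face_frame` maps each user to the frame holding that user's
    target facial image; the frame containing the most target faces wins."""
    counts = Counter(target_face_frame.values())
    return counts.most_common(1)[0][0]

# From the example: Jia's and Bing's target faces are in T, Yi's in R, Ding's in S.
targets = {"Jia": "T", "Bing": "T", "Yi": "R", "Ding": "S"}
print(choose_base_by_target_faces(targets))  # T
```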
In base image T, since the facial images of Yi and Ding are not their respective target facial images, the terminal may determine the facial images of Yi and Ding in base image T as facial images to be replaced.
For example, Yi's facial image in pending image R is Yi's target facial image, and Ding's facial image in pending image S is Ding's target facial image, so the terminal may obtain Yi's facial image from pending image R and Ding's facial image from pending image S.
The terminal may then replace Yi's to-be-replaced facial image in base image T with Yi's target facial image from pending image R, and replace Ding's to-be-replaced facial image in base image T with Ding's target facial image from pending image S, thereby obtaining a base image T that has undergone image replacement processing.
It can be understood that in the image-replaced base image T, each user's facial image is that user's target facial image. For example, after the image replacement processing, the facial images of Jia, Yi, and Ding in base image T are the images corresponding to the maximum of their respective eye values, and Bing's facial image is the one in which Bing's smile is brightest.
Later, the terminal may obtain pending images Q, R, and S and perform multi-frame noise reduction on the image-replaced base image T according to images Q, R, and S. The terminal may then store the denoised image T in the photo album as a photo.
It can be understood that in the photo stored in the album, each of Jia, Yi, Bing, and Ding is either in the widest-eyed state or in the optimal expression (smiling) state. The present embodiment can therefore improve the imaging effect of photos taken by the terminal and improve the user experience.
Referring to Fig. 6, Fig. 6 is a structural schematic diagram of the image selection apparatus provided by the embodiments of the present application. The image selection apparatus 300 may include an acquisition module 301, an identification module 302, and a selection module 303.
The acquisition module 301 is configured to obtain multiple frames of pending images containing faces.
The identification module 302 is configured to perform expression recognition on the facial image of each user in the pending images, obtaining the expression recognition result of each user.
The selection module 303 is configured to choose a base image from the pending images according to the expression recognition result of each user.
In one embodiment, the selection module 303 may be configured to:
if it is determined according to the expression recognition result of each user that no user's expression changes in the pending images, obtain the eye value of each user in each pending image, the eye value being a numerical measure of eye size; and
choose a base image from the pending images according to the eye value of each user in each pending image.
In one embodiment, the selection module 303 may be configured to:
determine the target facial image of each user according to the eye value of each user in each pending image, the target facial image being the image corresponding to the maximum of the user's eye values; and
choose the pending image containing the largest number of target facial images as the base image.
In one embodiment, the selection module 303 may be configured to:
if it is determined according to the expression recognition result of each user that the pending images contain the facial image of a user whose expression changes, determine the user whose expression changes as a target user;
for each target user, determine the facial image whose expression meets a preset condition as the target facial image of that target user;
for each non-target user, obtain the eye value of that non-target user in each pending image, the eye value being a numerical measure of eye size, and determine the image corresponding to the maximum eye value as the target facial image of that non-target user; and
choose the pending image containing the largest number of target facial images as the base image.
Also referring to Fig. 7, Fig. 7 is another structural schematic diagram of the image selection apparatus provided by the embodiments of the present application. In one embodiment, the image selection apparatus 300 may further include a collection module 304 and a processing module 305.
The collection module 304 is configured to determine a target frame count according to at least two captured frames when capturing images containing faces.
The acquisition module 301 may then be configured to obtain, from the captured frames, pending images whose quantity equals the target frame count.
The processing module 305 is configured, after the step of choosing a base image from the pending images, to:
determine facial images to be replaced from the base image, a facial image to be replaced being a facial image that is not a user's target facial image;
obtain, from the pending images, the target facial image used to replace each facial image to be replaced, each facial image to be replaced and its corresponding target facial image being facial images of the same user; and
perform image replacement processing on each facial image to be replaced using the corresponding target facial image, obtaining a base image that has undergone image replacement processing.
In one embodiment, the processing module 305 may be further configured to perform image noise reduction on the image-replaced base image according to the pending images.
The embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to perform the steps of the image selection method provided in this embodiment.
The embodiments of the present application also provide an electronic device including a memory and a processor; by invoking the computer program stored in the memory, the processor performs the steps of the image selection method provided in this embodiment.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smartphone. Referring to Fig. 8, Fig. 8 is a structural schematic diagram of the mobile terminal provided by the embodiments of the present application.
The mobile terminal 400 may include components such as a camera module 401, a memory 402, and a processor 403. Those skilled in the art will appreciate that the mobile terminal structure shown in Fig. 8 does not limit the mobile terminal, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The camera module 401 may be a single-camera module or a dual-camera module.
The memory 402 may be used to store application programs and data. The application programs stored in the memory 402 contain executable code and may form various functional modules. The processor 403 runs the application programs stored in the memory 402, thereby carrying out various functional applications and data processing.
The processor 403 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the application programs stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the mobile terminal as a whole.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402 to implement the steps of:
obtaining multiple frames of pending images containing faces; performing expression recognition on the facial image of each user in the pending images to obtain the expression recognition result of each user; and choosing a base image from the pending images according to the expression recognition result of each user.
The embodiments of the present invention also provide an electronic device that includes an image processing circuit. The image processing circuit may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 9 is a structural schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 9, for ease of description, only the aspects of the image processing technique related to the embodiments of the present invention are shown.
As shown in Fig. 9, the image processing circuit includes an image signal processor 540 and a control logic unit 550. Image data captured by an imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 510. The imaging device 510 may include a camera having one or more lenses 511 and an image sensor 512. The image sensor 512 may include a color filter array (such as a Bayer filter); the image sensor 512 can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data to be processed by the image signal processor 540. A sensor 520 may provide the raw image data to the image signal processor 540 based on the interface type of the sensor 520. The sensor 520 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The image signal processor 540 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the image signal processor 540 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The image signal processor 540 may also receive pixel data from an image memory 530. For example, raw pixel data may be sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image memory 530 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. The output of the image signal processor 540 may also be sent to an encoder/decoder 560 in order to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 570. The encoder/decoder 560 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic unit 550. For example, the statistical data may include image sensor 512 statistics such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 511 shading correction. The control logic unit 550 may include a processor and/or microcontroller that executes one or more routines (such as firmware); the one or more routines may determine, according to the received statistical data, the control parameters of the imaging device 510 and the control parameters of the ISP. For example, the control parameters may include sensor 520 control parameters (such as gain and the integration time of exposure control), camera flash control parameters, lens 511 control parameters (such as focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 511 shading correction parameters.
The following are the steps of implementing the image processing method provided in this embodiment with the image processing technique in Fig. 9:
obtaining multiple frames of pending images containing faces; performing expression recognition on the facial image of each user in the pending images to obtain the expression recognition result of each user; and choosing a base image from the pending images according to the expression recognition result of each user.
In one embodiment, when performing the step of choosing a base image from the pending images according to the expression recognition result of each user, the electronic device may perform: if it is determined according to the expression recognition result of each user that no user's expression changes in the pending images, obtaining the eye value of each user in each pending image, the eye value being a numerical measure of eye size; and choosing a base image from the pending images according to the eye value of each user in each pending image.
In one embodiment, when performing the step of choosing a base image from the pending images according to the eye value of each user in each pending image, the electronic device may perform: determining the target facial image of each user according to the eye value of each user in each pending image, the target facial image being the image corresponding to the maximum of the user's eye values; and choosing the pending image containing the largest number of target facial images as the base image.
In one embodiment, when performing the step of choosing a base image from the pending images according to the expression recognition result of each user, the electronic device may perform: if it is determined according to the expression recognition result of each user that the pending images contain the facial image of a user whose expression changes, determining the user whose expression changes as a target user; for each target user, determining the facial image whose expression meets a preset condition as the target facial image of that target user; for each non-target user, obtaining the eye value of that non-target user in each pending image, the eye value being a numerical measure of eye size, and determining the image corresponding to the maximum eye value as the target facial image of that non-target user; and choosing the pending image containing the largest number of target facial images as the base image.
In one embodiment, before the step of obtaining multiple frames of pending images containing faces, the electronic device may further perform: when capturing images containing faces, determining a target frame count according to at least two captured frames.
When performing the step of obtaining multiple frames of pending images containing faces, the electronic device may then perform: obtaining, from the captured frames, pending images whose quantity equals the target frame count.
After the step of choosing a base image from the pending images, the electronic device may further perform: determining facial images to be replaced from the base image, a facial image to be replaced being a facial image that is not a user's target facial image; obtaining, from the pending images, the target facial image used to replace each facial image to be replaced, each facial image to be replaced and its corresponding target facial image being facial images of the same user; and performing image replacement processing on each facial image to be replaced using the corresponding target facial image, obtaining a base image that has undergone image replacement processing.
In one embodiment, after the step of obtaining the image-replaced base image, the electronic device may further perform: performing image noise reduction on the image-replaced base image according to the pending images.
In the above embodiments, the description of each embodiment has its own emphasis. For portions not described in detail in a given embodiment, reference may be made to the detailed description of the image selection method above, which is not repeated here.
The image selection apparatus provided by the embodiments of the present application and the image selection method in the foregoing embodiments belong to the same concept; any method provided in the image selection method embodiments can run on the image selection apparatus, and its specific implementation is detailed in the image selection method embodiments, which is not repeated here.
It should be noted that, for the image selection method of the embodiments of the present application, a person of ordinary skill in the art will understand that all or part of the flow of the image selection method of the embodiments of the present application can be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as in a memory, and executed by at least one processor; its execution may include the flow of the embodiments of the image selection method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), or the like.
For the image selection apparatus of the embodiments of the present application, its functional modules may be integrated into one processing chip, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image selection method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. An image selection method, characterized by comprising:
obtaining multiple frames of pending images containing faces;
performing expression recognition on the facial image of each user in the pending images to obtain an expression recognition result of each user; and
choosing a base image from the pending images according to the expression recognition result of each user.
2. The image selection method according to claim 1, wherein the step of choosing a base image from the pending images according to the expression recognition result of each user comprises:
if it is determined according to the expression recognition result of each user that no user's expression changes in the pending images, obtaining an eye value of each user in each pending image, the eye value being a numerical measure of eye size; and
choosing a base image from the pending images according to the eye value of each user in each pending image.
3. The image selection method according to claim 2, wherein the step of choosing a base image from the pending images according to the eye value of each user in each pending image comprises:
determining a target facial image of each user according to the eye value of each user in each pending image, the target facial image being the image corresponding to the maximum of the user's eye values; and
choosing the pending image containing the largest number of target facial images as the base image.
4. The image selection method according to claim 1, wherein the step of choosing a base image from the pending images according to the expression recognition result of each user comprises:
if it is determined according to the expression recognition result of each user that the pending images contain the facial image of a user whose expression changes, determining the user whose expression changes as a target user;
for each target user, determining the facial image whose expression meets a preset condition as the target facial image of the target user;
for each non-target user, obtaining the eye value of the non-target user in each pending image, the eye value being a numerical measure of eye size, and determining the image corresponding to the maximum eye value as the target facial image of the non-target user; and
choosing the pending image containing the largest number of target facial images as the base image.
5. The image selection method according to claim 1, wherein before the step of obtaining multiple frames of pending images containing faces, the method further comprises:
when capturing images containing faces, determining a target frame number according to at least two captured frames;
the step of obtaining multiple frames of pending images containing faces comprises: obtaining, from the captured frames, a number of pending images equal to the target frame number;
and after the step of selecting the base image from the pending images, the method further comprises:
determining facial images to be replaced from the base image, a facial image to be replaced being a facial image that is not a user's target facial image;
obtaining, from the pending images, the target facial image for replacing each facial image to be replaced, each facial image to be replaced and its corresponding target facial image belonging to the same user; and
performing image replacement on each facial image to be replaced with its corresponding target facial image, to obtain a base image that has undergone image replacement.
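The replacement steps at the end of claim 5 can be sketched as follows. This is a hedged illustration, not the patented implementation: frames are modeled as `{user: face}` dicts, whereas a real implementation would paste aligned pixel regions.

```python
# Sketch of claim 5's replacement stage (illustrative; names invented).
# Any face in the base frame that is not that user's target face is
# swapped for the same user's target face from another pending frame.

def replace_faces(frames, base_idx, target_idx):
    """frames: list of {user: face}; base_idx: chosen base frame index;
    target_idx: user -> frame index of that user's target face."""
    base = dict(frames[base_idx])  # copy; pending frames stay untouched
    for user, t_idx in target_idx.items():
        if t_idx != base_idx:  # this face is a facial image to be replaced
            base[user] = frames[t_idx][user]
    return base

frames = [{"A": "A-blink", "B": "B-smile"},
          {"A": "A-open", "B": "B-neutral"}]
# B's target face lives in frame 0, so it replaces B's face in base frame 1.
print(replace_faces(frames, base_idx=1, target_idx={"A": 1, "B": 0}))
# -> {'A': 'A-open', 'B': 'B-smile'}
```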
6. The image selection method according to claim 5, wherein after the step of obtaining the base image that has undergone image replacement, the method further comprises:
performing image noise reduction on the base image that has undergone image replacement, according to the pending images.
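Claim 6 states only that the pending images are used to denoise the composited base image. One common approach, assumed here and not fixed by the claims, is multi-frame temporal averaging of corresponding pixels:

```python
# Assumed denoising approach for claim 6 (not stated in the patent):
# averaging each pixel across the pending frames and the base frame
# suppresses zero-mean per-frame sensor noise.

def temporal_denoise(pending, base):
    """pending: list of pixel rows (lists of numbers) from the other
    frames; base: the composited row. Returns the averaged row."""
    n = len(pending) + 1
    return [(base[i] + sum(frame[i] for frame in pending)) / n
            for i in range(len(base))]

# Averaging three observations of each pixel pulls values toward the
# underlying signal.
print(temporal_denoise([[10, 20], [14, 24]], [12, 22]))  # -> [12.0, 22.0]
```

A production pipeline would align the frames before averaging, since the faces move between exposures; the sketch assumes pre-aligned rows.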
7. An image selection apparatus, comprising:
an acquisition module, configured to obtain multiple frames of pending images containing faces;
a recognition module, configured to perform expression recognition on the facial image of each user in the pending images, to obtain an expression recognition result for each user; and
a selection module, configured to select a base image from the pending images according to the expression recognition result of each user.
8. The image selection apparatus according to claim 7, wherein the selection module is configured to:
if it is determined, according to the expression recognition result of each user, that the pending images contain the facial image of a user whose expression has changed, determine that expression-changed user as a target user;
for each target user, determine the facial image whose expression meets a preset condition as the target facial image of that target user;
for each non-target user, obtain the eye value of the non-target user in each pending image, the eye value being a numerical value indicating eye size, and determine the image corresponding to the maximum of the eye values as the target facial image of that non-target user; and
select, as the base image, the pending image containing the largest number of target facial images.
9. A storage medium storing a computer program which, when executed on a computer, causes the computer to perform the method according to any one of claims 1 to 6.
10. An electronic device comprising a memory and a processor, wherein the processor performs the method according to any one of claims 1 to 6 by invoking a computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810277025.4A CN108574803B (en) | 2018-03-30 | 2018-03-30 | Image selection method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108574803A true CN108574803A (en) | 2018-09-25 |
CN108574803B CN108574803B (en) | 2020-01-14 |
Family
ID=63574060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810277025.4A Active CN108574803B (en) | 2018-03-30 | 2018-03-30 | Image selection method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108574803B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101419666A (en) * | 2007-09-28 | 2009-04-29 | 富士胶片株式会社 | Image processing apparatus, image capturing apparatus, image processing method and recording medium |
US20120169895A1 (en) * | 2010-03-24 | 2012-07-05 | Industrial Technology Research Institute | Method and apparatus for capturing facial expressions |
CN104243818A (en) * | 2014-08-29 | 2014-12-24 | 小米科技有限责任公司 | Image processing method and device and image processing equipment |
CN104899544A (en) * | 2014-03-04 | 2015-09-09 | 佳能株式会社 | Image processing device and image processing method |
CN105635567A (en) * | 2015-12-24 | 2016-06-01 | 小米科技有限责任公司 | Shooting method and device |
CN107566748A (en) * | 2017-09-22 | 2018-01-09 | 维沃移动通信有限公司 | A kind of image processing method, mobile terminal and computer-readable recording medium |
CN107734253A (en) * | 2017-10-13 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107817939A (en) * | 2017-10-27 | 2018-03-20 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259689A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for transmitting information |
CN111259689B (en) * | 2018-11-30 | 2023-04-25 | 百度在线网络技术(北京)有限公司 | Method and device for transmitting information |
CN111062279A (en) * | 2019-12-04 | 2020-04-24 | 深圳先进技术研究院 | Picture processing method and picture processing device |
CN111062279B (en) * | 2019-12-04 | 2023-06-06 | 深圳先进技术研究院 | Photo processing method and photo processing device |
CN111263073A (en) * | 2020-02-27 | 2020-06-09 | 维沃移动通信有限公司 | Image processing method and electronic device |
CN111263073B (en) * | 2020-02-27 | 2021-11-09 | 维沃移动通信有限公司 | Image processing method and electronic device |
CN112036311A (en) * | 2020-08-31 | 2020-12-04 | 北京字节跳动网络技术有限公司 | Image processing method and device based on eye state detection and storage medium |
US11842569B2 (en) | 2020-08-31 | 2023-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Eye state detection-based image processing method and apparatus, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108574803B (en) | 2020-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108520493A (en) | Processing method, device, storage medium and the electronic equipment that image is replaced | |
CN101325659B (en) | Imaging device, imaging method | |
CN108574803A (en) | Choosing method, device, storage medium and the electronic equipment of image | |
CN108401110A (en) | Acquisition methods, device, storage medium and the electronic equipment of image | |
EP3134850A2 (en) | System and method for controlling a camera based on processing an image captured by other camera | |
CN108419012A (en) | Photographic method, device, storage medium and electronic equipment | |
CN108024054A (en) | Image processing method, device and equipment | |
CN110198418A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110266954A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110198419A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110445986A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110717871A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
JP2009272740A (en) | Imaging device, image selection method, and image selection program | |
CN108052883B (en) | User photographing method, device and equipment | |
CN108093170B (en) | User photographing method, device and equipment | |
CN108513068A (en) | Choosing method, device, storage medium and the electronic equipment of image | |
CN108259769B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN108282616B (en) | Processing method, device, storage medium and the electronic equipment of image | |
CN110278375A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110809797B (en) | Micro video system, format and generation method | |
CN110012227A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108520036A (en) | Choosing method, device, storage medium and the electronic equipment of image | |
CN109523456A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108462831A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108370415B (en) | Image processing apparatus and image processing method |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | CB02 | Change of applicant information | Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong. Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong. Applicant before: Guangdong Opel Mobile Communications Co., Ltd.
 | GR01 | Patent grant | 