CN108574803B - Image selection method and device, storage medium and electronic equipment - Google Patents

Image selection method and device, storage medium and electronic equipment

Info

Publication number
CN108574803B
Authority
CN
China
Prior art keywords
image
user
processed
images
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810277025.4A
Other languages
Chinese (zh)
Other versions
CN108574803A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810277025.4A
Publication of CN108574803A
Application granted
Publication of CN108574803B
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Abstract

The application discloses an image selection method and device, a storage medium, and an electronic device. The method comprises the following steps: acquiring multiple frames of to-be-processed images containing human faces; performing expression recognition on the face image of each user in the to-be-processed images to obtain an expression recognition result for each user; and selecting a base image from the to-be-processed images according to the expression recognition result of each user. The embodiment can improve the flexibility with which the terminal selects an image for processing from multiple frames of images.

Description

Image selection method and device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to a method and an apparatus for selecting an image, a storage medium, and an electronic device.
Background
Photographing is a basic function of the terminal. With the continuous progress of hardware such as camera modules and of image processing algorithms, the shooting function of the terminal has become more and more powerful. Users also take pictures with the terminal more and more frequently; for example, users often use the terminal to photograph people. In the related art, a terminal may collect multiple frames of images and select an image for processing from them. However, the terminal in the related art has poor flexibility when selecting the image for processing from the multiple frames.
Disclosure of Invention
The embodiment of the application provides an image selection method and device, a storage medium and electronic equipment, which can improve the flexibility of a terminal in selecting an image for processing from a plurality of frames of images.
The embodiment of the application provides a method for selecting an image, which comprises the following steps:
acquiring a plurality of frames of images to be processed containing human faces;
performing expression recognition on the facial image of each user in the image to be processed to obtain an expression recognition result of each user;
and selecting a basic image from the image to be processed according to the expression recognition result of each user.
The embodiment of the application provides a device for selecting images, which comprises:
the acquisition module is used for acquiring a plurality of frames of images to be processed containing human faces;
the recognition module is used for carrying out expression recognition on the face image of each user in the image to be processed to obtain an expression recognition result of each user;
and the selection module is used for selecting a basic image from the image to be processed according to the expression recognition result of each user.
The embodiment of the application provides a storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is enabled to execute the steps in the image selecting method provided by the embodiment of the application.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the steps in the image selection method provided in the embodiment of the present application by calling the computer program stored in the memory.
In this embodiment, when a basic image needs to be selected from multiple frames of images to be processed, the terminal may perform expression recognition on the face image of each user in the images to be processed, and then determine the basic image from the images to be processed according to the expression recognition result of each user. That is, the present embodiment may select a base image from the images to be processed according to the expression of the user. Therefore, the embodiment can improve the flexibility of the terminal in selecting the image for processing from the plurality of frames of images. In addition, the embodiment can also improve the imaging effect of the photo taken by the terminal.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an image selection method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of a method for selecting an image according to an embodiment of the present disclosure.
Fig. 3 to fig. 5 are scene schematic diagrams of a method for selecting an image according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an image selecting apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an image selecting apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 9 is another schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
It can be understood that the execution subject of the embodiment of the present application may be a terminal device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for selecting an image according to an embodiment of the present application, where the flow chart may include:
in step S101, a plurality of frames of images to be processed including a human face are acquired.
Photographing is a basic function of the terminal. With the continuous progress of hardware such as camera modules and of image processing algorithms, the shooting function of the terminal has become more and more powerful. Users also take pictures with the terminal more and more frequently; for example, users often use the terminal to photograph people. However, in the related art, images captured by the terminal have a poor imaging effect.
For example, the terminal may first acquire multiple frames of to-be-processed images containing faces. For instance, the terminal acquires six such frames, denoted A, B, C, D, E, and F.
In step S102, facial images of each user in the image to be processed are subjected to expression recognition, so as to obtain an expression recognition result of each user.
For example, after the six to-be-processed frames A, B, C, D, E, and F are acquired, the terminal may perform expression recognition on the face image of each user in them, so as to obtain an expression recognition result for each user.
For example, if the six frames A through F are single-person images of the same user a, captured continuously and in rapid succession by the terminal, the terminal may perform expression recognition on the face image of user a in each frame, so as to obtain the expression recognition result of user a.
For another example, if the six frames A through F are group images captured continuously and in rapid succession by the terminal, say group images of four users a, b, c, and d, the terminal may perform expression recognition on the face image of user a in the six frames, and then on the face images of users b, c, and d in turn, so as to obtain an expression recognition result for each of the four users in the six frames.
In one embodiment, the terminal may recognize the expression of a face image in the to-be-processed image as follows: expression key points, which may be parts such as the eyes, eyebrows, mouth, and cheeks, are first determined from the face image. Then, the terminal extracts local images from the face image according to the expression key points. Finally, the terminal inputs the extracted local images into a trained expression recognition model for expression recognition.
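As a minimal, non-limiting sketch of this recognition flow in Python (dlib's 68-point landmark model is assumed as the key-point detector, and `expression_model` stands in for the trained recognition model, which this disclosure does not specify):

```python
import dlib
import numpy as np

# Assumption: dlib's 68-point landmark model serves as the detector for the
# expression key points named above (eyebrows, eyes, mouth region).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Landmark index ranges for each key-point region in the 68-point scheme.
REGIONS = {
    "eyebrows": range(17, 27),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def expression_patches(gray_image):
    """Crop a local image around each expression key-point region."""
    patches = []
    for face in detector(gray_image):
        landmarks = predictor(gray_image, face)
        for indices in REGIONS.values():
            pts = np.array([(landmarks.part(i).x, landmarks.part(i).y)
                            for i in indices])
            x0, y0 = pts.min(axis=0)
            x1, y1 = pts.max(axis=0)
            patches.append(gray_image[y0:y1 + 1, x0:x1 + 1])
    return patches

# The local images would then be fed to the trained model, e.g.:
# result = expression_model.predict(expression_patches(frame))
```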
In step S103, a base image is selected from the image to be processed according to the expression recognition result of each user.
For example, after obtaining the expression recognition result of each user in the image to be processed, the terminal may select one frame of image from the image to be processed as the base image according to the expression recognition result of each user.
It can be understood that, in this embodiment, when a basic image needs to be selected from multiple frames of images to be processed, the terminal may perform expression recognition on a face image of each user in the images to be processed, and then determine the basic image from the images to be processed according to an expression recognition result of each user. That is, the present embodiment may select a base image from the images to be processed according to the expression of the user. Therefore, the embodiment can improve the flexibility of the terminal in selecting the image for processing from the plurality of frames of images.
Referring to fig. 2, fig. 2 is another schematic flow chart of a method for selecting an image according to an embodiment of the present application, where the flow chart may include:
in step S201, the terminal acquires a plurality of frames of images to be processed including a human face.
In one embodiment, the images acquired by the terminal camera can be stored in a buffer queue with a certain length, and then the terminal can acquire the images from the buffer queue when the images need to be acquired.
For example, in this embodiment, a user uses the terminal camera to capture person images. All images recently captured by the terminal are stored in a buffer queue, and when the user presses the capture button, the terminal obtains multiple frames containing a human face from the buffer queue; these are the to-be-processed images.
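A fixed-length buffer queue of this kind can be sketched with a bounded deque; the 10-frame capacity follows the example given later in this description, and the function names are illustrative:

```python
from collections import deque

# Fixed-length cache: appending an 11th frame silently evicts the oldest.
buffer_queue = deque(maxlen=10)

def on_frame_captured(frame):
    buffer_queue.append(frame)

def frames_for_processing(n):
    """Return the n most recently captured frames (the to-be-processed images)."""
    return list(buffer_queue)[-n:]
```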
For example, the terminal acquires six to-be-processed frames H, I, J, K, L, and M. All six frames are group photos of four users a, b, c, and d.
In step S202, the terminal performs expression recognition on the face image of each user in the image to be processed to obtain an expression recognition result of each user.
For example, after the six group photos H, I, J, K, L, and M of the four users a, b, c, and d are acquired, the terminal may perform expression recognition on the face image of each user in the 6 to-be-processed frames, so as to obtain an expression recognition result for each user.
For example, the terminal may perform expression recognition on the face image of user a in the six frames, and then on the face images of users b, c, and d in sequence, so as to obtain the expression recognition results of the four users a, b, c, and d in the six frames H, I, J, K, L, and M, respectively.
After the expression recognition result of each user in the image to be processed is obtained, the terminal can judge whether the expression of the user in the image to be processed changes or not according to the expression recognition result of each user.
If it is determined that the expressions of all the users in the image to be processed have not changed according to the expression recognition result of each user, the process proceeds to step S203.
If it is determined according to the expression recognition results that the expression of at least one user in the to-be-processed images changes, the process proceeds to step S206.
In step S203, if it is determined that the expressions of all the users in the image to be processed do not change according to the expression recognition result of each user, the terminal obtains an eye value of each user in each image to be processed, where the eye value is a numerical value used for indicating the size of eyes.
For example, according to the expression recognition result of user a, the terminal determines that the expression of user a does not change across the 6 to-be-processed frames H, I, J, K, L, and M, that is, user a's expression is consistent, or changes only very slightly, across the 6 frames. Similarly, the terminal judges from their expression recognition results that the expressions of users b, c, and d do not change across the 6 frames. In this case, the terminal may acquire the eye value of each user in each to-be-processed image. The eye value may be a numerical value indicating the size of the eyes; for example, it may represent the area of the eyes, or the height of the eyes in the vertical direction.
In step S204, according to the eye value of each user in each image to be processed, the terminal determines a target face image of each user, where the target face image is an image corresponding to the maximum value of the eye values of the users.
For example, the eye values of user a in the to-be-processed images H, I, J, K, L, and M are 70, 72, 75, 80, 78, and 79, respectively. The eye values of user b are 80, 81, 82, 83, 85, and 82, respectively. The eye values of user c are 80, 82, 82, 50, 30, and 0, respectively. The eye values of user d are 82, 83, 84, 88, 85, and 81, respectively. As can be seen from the change in user c's eye values, user c's eyes become smaller after image J, which can be regarded as user c blinking; in image M, user c's eyes are closed (eye value 0).
After obtaining the eye value of each user in each frame of the image to be processed, the terminal may determine a target face image of each user from the image to be processed. The target face image of each user may be a face image corresponding to a maximum value among eye values of the user.
For example, the eye value 80 of the user a in the image to be processed K is the maximum value among the eye values of the users a in all the images to be processed, so the terminal may determine the face image of the user a in the image to be processed K as the target face image of the user a.
Similarly, the terminal can determine the face image of user b in to-be-processed image L as the target face image of user b, determine the face image of user c in to-be-processed image I or J as the target face image of user c, and determine the face image of user d in to-be-processed image K as the target face image of user d.
In step S205, the terminal selects the image to be processed containing the largest number of target face images as a base image.
For example, after the target face image of each user is determined, the terminal may select the image to be processed containing the largest number of target face images as the base image.
For example, the images to be processed I and J include one target face image (the target face image of the user c), the image to be processed K includes two target face images (the target face images of the users a and d), the image to be processed L includes one target face image (the target face image of the user b), and the other images to be processed do not include the target face image. Therefore, the terminal can select the image to be processed K as the base image.
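A minimal sketch of steps S203 to S205, with the eye values from the example above (the data layout and function name are assumptions of this sketch, not part of the claims):

```python
def select_base_image(eye_values, num_frames):
    """eye_values maps each user to a list of per-frame eye values.
    Returns the index of the frame containing the most target face images."""
    votes = [0] * num_frames
    for values in eye_values.values():
        best = max(values)
        # A user's target face image is any frame where their eyes are widest;
        # ties (like user c in images I and J) vote for more than one frame.
        for frame, value in enumerate(values):
            if value == best:
                votes[frame] += 1
    return max(range(num_frames), key=lambda f: votes[f])

# Eye values from the example, for frames H, I, J, K, L, M:
eye_values = {
    "a": [70, 72, 75, 80, 78, 79],
    "b": [80, 81, 82, 83, 85, 82],
    "c": [80, 82, 82, 50, 30, 0],
    "d": [82, 83, 84, 88, 85, 81],
}
assert select_base_image(eye_values, 6) == 3  # image K is the base image
```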
In step S206, if it is determined that there is a facial image of the user whose expression changes in the image to be processed according to the expression recognition result of each user, the terminal determines the user whose expression changes as the target user.
In step S207, for each target user, the terminal determines a face image whose expression meets a preset condition as a target face image of the target user.
In step S208, for each non-target user, the terminal obtains an eye value of each non-target user in each to-be-processed image, where the eye value is a numerical value used for indicating the size of an eye, and determines an image corresponding to a maximum value of the eye values as a target face image of the non-target user.
In step S209, the terminal selects the image to be processed containing the largest number of target face images as a base image.
For example, steps S206, S207, S208, and S209 may include:
the terminal determines the facial image of the user with the changed expression in the image to be processed according to the expression recognition result of each user, namely the terminal judges that the expression of some users in the image to be processed is changed. For example, the image to be processed is O, P, Q, R, S, T. According to the expression recognition result, the terminal determines that the expression of the user C in the six frames of images is from no smile to smile. The eye values of the user on the to-be-processed image O, P, Q, R, S, T are 82, 80, 60, 50, 30, 0, respectively. That is, from image O to image T, the eyes of user c become gradually smaller, but the expression of user c changes from no smile to smile. And, user C is girl, and when smiling, her eyes also smile along with, makes the eyes present the effect of bending up. It is because the eyes of the user C get bent when the user C smiles, so that the eyes of the user C get smaller gradually. In this case, the terminal may determine the user whose expression changes as the target user.
For a target user, the terminal may determine the face image whose expression meets a preset condition as the target face image of that target user. For example, the preset condition may be that the expression is the user's broadest smile.
For example, since the terminal detects that face images containing a smiling expression exist among the face images of user c, the terminal may determine the one with the broadest smile as the target face image of user c. For example, the terminal detects that user c's smile is broadest in to-be-processed image T, so the terminal may determine the face image of user c in image T as the target face image of user c.
For each non-target user (i.e. the user whose expression has not changed), the terminal may obtain the eye value of each non-target user in each image to be processed. The eye value may be a numerical value indicating the size of the eye. Then, the terminal may determine the face image corresponding to the maximum value of the eye values of each user as the target face image of the non-target user.
For example, the expressions of users a, b, and d in the to-be-processed images do not change, or change only very slightly, so the terminal determines users a, b, and d as non-target users. The eye values of user a in the to-be-processed images O, P, Q, R, S, and T are 70, 71, 71, 72, 72, and 73, the eye values of user b are 80, 81, 82, 83, 82, and 82, and the eye values of user d are 82, 83, 84, 84, 85, and 84, respectively.
Then, the terminal may determine the face image of user a in to-be-processed image T as the target face image of user a, the face image of user b in to-be-processed image R as the target face image of user b, and the face image of user d in to-be-processed image S as the target face image of user d.
Then, the terminal can select the image to be processed containing the largest number of target face images as a basic image. For example, the image to be processed T includes two target face images (target face images of users a and c), the image to be processed R includes one target face image (target face image of user b), and the image to be processed S includes one target face image (target face image of user d). Therefore, the terminal can determine the image to be processed T as the base image.
In step S210, the terminal determines a face image to be replaced from the base image, where the face image to be replaced is a non-target face image of the user.
In step S211, the terminal acquires a target face image for replacing each face image to be replaced from the image to be processed, where each face image to be replaced and the corresponding target face image are face images of the same user.
In step S212, the terminal performs image replacement processing on each to-be-replaced face image by using the corresponding target face image, so as to obtain a basic image subjected to the image replacement processing.
For example, steps S210, S211, and S212 may include:
after the basic image is determined, the terminal can determine the face image to be replaced from the basic image. The terminal can determine the non-target face image in the basic image as the face image to be replaced.
For example, in the basic image T, the face images of the user b and the user d are not respective target face images, so the terminal may determine the face images of the user b and the user d in the basic image T as face images to be replaced.
Then, the terminal can acquire a target face image for replacing each face image to be replaced from other images to be processed except the base image. It can be understood that each face image to be replaced and the corresponding target face image are face images of the same user.
For example, the face image of the user b in the image to be processed R is the target face image of the user b, and the face image of the user d in the image to be processed S is the target face image of the user d, so that the terminal can obtain the face image of the user b in the image to be processed R and the face image of the user d in the image to be processed S.
After the target face image of each face image to be replaced is obtained, the terminal can use the corresponding target face image to perform image replacement processing on each face image to be replaced, so that a basic image subjected to image replacement processing is obtained.
For example, in the basic image T, the terminal may replace the face image to be replaced of the user b in the basic image T with the target face image of the user b in the image to be processed R, and replace the face image to be replaced of the user d in the basic image T with the target face image of the user d in the image to be processed S, so as to obtain the basic image T subjected to image replacement processing.
It can be understood that the face image of each user in the base image T after the image replacement processing is that user's target face image. For example, after the image replacement processing, in the base image T, the face images of users a, b, and d are the face images corresponding to the maximum of their respective eye values, and the face image of user c is the face image in which user c smiles most broadly.
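Steps S210 to S212 amount to compositing each user's target face region into the base frame. A simplified sketch, assuming the frames are already aligned and the face regions are rectangles (a production implementation would warp and blend along the face contour):

```python
import numpy as np

def replace_faces(frames, base_index, target_frame_of_user, face_box_of_user):
    """frames: list of HxWx3 arrays, roughly aligned with one another.
    target_frame_of_user: user -> index of the frame holding their target face.
    face_box_of_user: user -> (x0, y0, x1, y1) face rectangle in the base frame.
    """
    base = frames[base_index].copy()
    for user, src_index in target_frame_of_user.items():
        if src_index == base_index:
            continue  # this face is already the user's target face image
        x0, y0, x1, y1 = face_box_of_user[user]
        # Copy the same region from the frame containing the target face.
        base[y0:y1, x0:x1] = frames[src_index][y0:y1, x0:x1]
    return base
```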
In one embodiment, the terminal may also represent each user's expression numerically. For example, the terminal may assign positive values to positive expressions and negative values to negative expressions. For positive expressions, the terminal can assign different positive values according to the degree of the expression: for a slight smile it may assign a small positive value, and for a broad smile a large positive value. In this way, the user's expression is represented by an expression value.
Then, the terminal can comprehensively determine the target face image of the user from the image to be processed according to the expression value and the eye value. For example, for a facial image of a user with a changing expression, the terminal may give a larger weight to the expression value and a smaller weight to the eye value. For example, the expression value is weighted 85% and the eye value is weighted 15%.
Taking the face image of user c as an example, the eye values of user c in the to-be-processed images O, P, Q, R, S, and T are 82, 80, 60, 50, 30, and 0, respectively, and the expression values of user c in those images are 10, 20, 30, 40, 50, and 60, respectively. Then, the composite value of user c in to-be-processed image O is 20.8 (82 × 15% + 10 × 85%). Similarly, the composite values in the to-be-processed images P, Q, R, S, and T are 29, 34.5, 41.5, 47, and 51 in sequence. Since the composite value in to-be-processed image T is the largest, the terminal may determine the face image of user c in image T as the target face image of user c. The terminal then determines the to-be-processed image containing the largest number of target face images as the base image.
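The composite value described here is a plain weighted sum. A sketch using the 85%/15% split from the example (the weights are a design choice, not fixed by this disclosure):

```python
def composite_value(eye_value, expression_value,
                    eye_weight=0.15, expression_weight=0.85):
    # For a user whose expression is changing, the expression value
    # is weighted more heavily than the eye value.
    return eye_value * eye_weight + expression_value * expression_weight

# User c's values across images O..T, reproducing the numbers above:
eyes = [82, 80, 60, 50, 30, 0]
expr = [10, 20, 30, 40, 50, 60]
scores = [composite_value(e, x) for e, x in zip(eyes, expr)]
# scores: 20.8, 29.0, 34.5, 41.5, 47.0, 51.0 -> image T (index 5) wins
```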
In an embodiment, before the step of acquiring a plurality of frames of images to be processed including a human face, the method may further include the following steps:
when the image containing the human face is collected, the terminal determines the number of target frames according to at least two collected images.
Then, the step of acquiring, by the terminal, a plurality of frames of images to be processed including faces may include: and acquiring the images to be processed with the number of the target frames from the acquired multi-frame images by the terminal.
For example, after entering the camera preview interface, if it is detected that the terminal is collecting images including faces, the terminal may determine a target frame number according to at least two collected images including faces. In one embodiment, the target frame number may be greater than or equal to 2.
For example, when the terminal acquires four frames of images containing human faces, the terminal can detect whether the positions of the human faces in the four frames of images are displaced. If the displacement does not occur or is very small, the face image in the image can be considered to be relatively stable, that is, the user does not shake or rotate the head in a large range. If the displacement occurs, the face image is considered to be unstable, that is, the user shakes or rotates the head, and the amplitude is large.
In one embodiment, whether the human face in the image is displaced or not can be detected by the following method: after the four acquired frames of images are acquired, the terminal can generate a coordinate system, and then the terminal can put each frame of image into the coordinate system in the same way. And then, the terminal can acquire the coordinates of the facial image feature points in each frame of image in the coordinate system. After the coordinates of the feature points of the face image in each frame of image in the coordinate system are obtained, the terminal can compare whether the coordinates of the feature points of the same face image in different images are the same or not. If the face images are the same, the face images in the images can be considered to be not displaced. If the difference is not the same, the face image in the image can be considered to be displaced. If the face image is detected to be displaced, the terminal can acquire a specific displacement value. If the specific displacement value is within the preset value range, the face image in the image can be considered to have smaller displacement. If the specific displacement value is outside the preset value range, the face image in the image can be considered to have larger displacement.
In one embodiment, for example, if the face image is displaced, the target frame number may be determined to be 4 frames. If the human face image is not displaced, the target frame number can be determined as 6 frames or 8 frames.
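A sketch of the displacement check and the resulting target-frame-number decision, assuming face feature points have already been extracted per frame and placed in a shared coordinate system; the displacement threshold is an illustrative assumption, and the 8/6/4 split follows this example together with the bright/dim-light case described in the scene below:

```python
import numpy as np

SMALL_SHIFT = 5.0  # assumed threshold in pixels; the patent fixes no value

def max_displacement(feature_points_per_frame):
    """feature_points_per_frame: list of (N, 2) arrays of face feature-point
    coordinates, one array per frame, in the same coordinate system."""
    reference = feature_points_per_frame[0]
    return max(float(np.abs(points - reference).max())
               for points in feature_points_per_frame[1:])

def target_frame_count(feature_points_per_frame, bright_environment):
    if max_displacement(feature_points_per_frame) > SMALL_SHIFT:
        return 4                            # face moved: use fewer frames
    return 8 if bright_environment else 6   # face stable: use more frames
```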
After the user presses the photographing button, the terminal can acquire the images to be processed with the target frame number from the recently acquired images.
In one embodiment, after the step of obtaining the base image subjected to the image replacement processing, the method may further include the steps of:
and according to the image to be processed, the terminal performs image noise reduction processing on the basic image subjected to the image replacement processing.
For example, after obtaining the base image subjected to the image replacement processing, the terminal may perform image denoising processing on the base image subjected to the image replacement processing according to the image to be processed. For example, the terminal may acquire a group of continuously acquired images including the base image, and perform multi-frame noise reduction processing on the base image subjected to the image replacement processing according to the group of images.
For example, since the base image is the image T, the terminal may acquire the image to be processed Q, R, S and perform the multi-frame noise reduction processing on the base image T subjected to the image replacement processing according to the image Q, R, S.
In one embodiment, when performing multi-frame noise reduction, the terminal may align the images Q, R, S, and T and obtain the pixel values of each group of aligned pixels. If the pixel values within a group of aligned pixels do not differ much, the terminal can calculate the mean of the group's pixel values and replace the corresponding pixel value of image T with that mean. If the pixel values within a group differ significantly, the pixel value in image T may be left unadjusted.
For example, pixel P1 in image Q, pixel P2 in image R, pixel P3 in image S, and pixel P4 in image T form a group of mutually aligned pixels. If the pixel value of P1 is 101, P2 is 102, P3 is 103, and P4 is 104, the mean of the group is 102.5, so the terminal may adjust the value of pixel P4 in image T from 104 to 102.5, thereby denoising pixel P4. If instead the pixel value of P1 is 80, P2 is 83, P3 is 90, and P4 is 103, the pixel values differ significantly, so the value of P4 may be left unadjusted, i.e., P4 remains 103.
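The denoising rule in this passage, averaging aligned pixels that agree and leaving the base pixel untouched when they disagree, can be sketched as follows; the agreement threshold is an assumption, since the text does not fix one:

```python
import numpy as np

def multi_frame_denoise(base, others, max_spread=10):
    """base: HxW array of the base image; others: aligned HxW arrays.
    Pixel groups whose values stay within max_spread are replaced by the
    group mean; widely scattered groups keep the base image's value."""
    stack = np.stack([base] + list(others)).astype(np.float64)
    spread = stack.max(axis=0) - stack.min(axis=0)
    mean = stack.mean(axis=0)
    return np.where(spread <= max_spread, mean, base)

# From the example: 101, 102, 103, 104 agree -> base pixel becomes 102.5;
# 80, 83, 90, 103 disagree -> base pixel is kept unchanged.
```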
Referring to fig. 3 to 5, fig. 3 to 5 are scene diagrams illustrating a method for selecting an image according to an embodiment of the present application.
In this embodiment, after entering a preview interface of a camera, if it is detected that the terminal is acquiring a face image, the terminal may acquire a current environmental parameter, and determine a target frame number according to at least two acquired face images. The environmental parameter may be ambient light level.
If the terminal determines that the face in the image is not displaced (or has small displacement) according to the collected face images in at least two frames and is currently in a bright environment, the terminal can determine the target frame number as 8 frames. If the terminal determines that the face in the image is not displaced (or has small displacement) according to the collected face images in at least two frames and is currently in a dark light environment, the terminal can determine the target frame number as 6 frames. If the terminal determines that the human face in the images is displaced according to the collected at least two frames of human face images, the terminal can determine the number of the target frames as 4 frames.
The terminal can save the collected images to a buffer queue. The buffer queue may be a fixed-length queue, for example, the buffer queue may store 10 frames of images newly acquired by the terminal.
For example, five users a, b, c, d, and e are out on a trip and prepare to take pictures beside a scenic spot. First, user a uses the terminal to take a picture of user b, as shown in fig. 3. For example, after entering the preview interface of the camera, the terminal acquires one frame of image every 50 milliseconds according to the currently acquired environmental parameters. Before user a presses the photographing button, the terminal can acquire 4 captured images from the buffer queue; it can be understood that these 4 images all include the face image of user b. Then, the terminal can detect whether the position of user b's face image in the picture has shifted across the 4 frames. If no displacement occurs, or the displacement is very small, user b's face image can be considered relatively stable, that is, user b is not shaking or rotating the head over a large range. If displacement occurs, user b's face image is considered unstable, that is, user b is shaking or rotating the head with a large amplitude. For example, in this embodiment, the terminal detects that the position of user b's face image in the 4 frames has not shifted.
Then, the terminal may obtain the current ambient light brightness, and determine whether the terminal is currently in a dark light environment according to the ambient light brightness. For example, the terminal determines that it is currently in a dim light environment.
Then, based on the obtained information, namely that the position of user b's face image has not shifted and that the terminal is currently in a dim light environment, the terminal determines the target frame number. For example, the target frame number is determined to be 6 frames.
After that, when user a presses the photographing button, the terminal can acquire 6 captured images of user b. For example, the terminal may obtain, from the buffer queue, the 6 most recently acquired images of user b; in chronological order, these are A, B, C, D, E, and F. It is understood that the six frames A through F are the to-be-processed images acquired by the terminal.
After acquiring the 6 frames, the terminal may perform facial expression recognition on them and detect the eye value of the face image in each, where the eye value is a numerical value used for indicating the size of the eyes. For example, the eye values of user b in images A, B, C, D, E, and F are 80, 82, 83, 84, 85, and 84, respectively. For example, the terminal detects that the expression of user b in the 6 frames does not change.
Since it is determined that the expression of user b does not change in the to-be-processed images, and the 6 frames are single-person images of user b, the terminal may determine the frame with the largest eye value among the 6 frames as the base image; that is, image E is determined as the base image.
After determining image E as the base image, the terminal may perform multi-frame noise reduction processing on image E based on images C, D, and F. After the noise reduction, the terminal may store the processed image E into the album as the photograph. It will be appreciated that image E is a photograph of user b with eyes wide open.
Next, user e takes a group picture of the four users a, b, c, and d. For example, after entering the preview interface of the camera, the terminal detects that the positions of the face images of the four users a, b, c, and d in the acquired 4 frames have not shifted in the picture, and that the terminal is currently in a dim light environment. Based on this, the terminal determines that the target frame number is 6 frames.
Thereafter, when the photographing button is pressed, the terminal may acquire 6 captured images of the four users, as shown in fig. 4. For example, the terminal may obtain, from the buffer queue, the 6 most recently acquired images of users a, b, c, and d. In chronological order, the 6 images are O, P, Q, R, S, and T. It is understood that these 6 frames are the to-be-processed images.
Then, the terminal can perform expression recognition on the facial image of each user in the 6 frames of images to be processed, so as to obtain an expression recognition result of each user. After the expression recognition result of each user in the image to be processed is obtained, the terminal can judge whether the expression of the user in the image to be processed changes or not according to the expression recognition result of each user.
For example, according to the expression recognition results, the terminal determines that the expression of user c in the six frames changes from not smiling to smiling. The eye values of user c in the to-be-processed images O, P, Q, R, S, and T are 82, 80, 60, 50, 30, and 0, respectively. That is, from image O to image T, user c's eyes become gradually smaller, but user c's expression changes from not smiling to smiling. In this case, the terminal may determine the user whose expression changes as the target user. For example, the face images of user c from image O to image T are shown in fig. 5.
Since the terminal detects that face images containing a smiling expression exist among user c's face images, the terminal may determine the one with the broadest smile as the target face image of user c. For example, the terminal detects that user c's smile is broadest in to-be-processed image T, so the terminal may determine the face image of user c in image T as the target face image of user c.
In addition, the expressions of users a, b, and d in the to-be-processed images do not change, or change only very slightly, so the terminal determines users a, b, and d as non-target users. The terminal acquires the eye values of user a in the to-be-processed images O, P, Q, R, S, and T as 70, 71, 71, 72, 72, and 73, the eye values of user b as 80, 81, 82, 83, 82, and 82, and the eye values of user d as 82, 83, 84, 84, 85, and 84, respectively.
Then, the terminal may determine the face image of user a in to-be-processed image T as the target face image of user a, the face image of user b in to-be-processed image R as the target face image of user b, and the face image of user d in to-be-processed image S as the target face image of user d.
Then, the terminal can select the image to be processed containing the largest number of target face images as a basic image. For example, the image to be processed T includes two target face images (target face images of users a and c), the image to be processed R includes one target face image (target face image of user b), and the image to be processed S includes one target face image (target face image of user d). Therefore, the terminal can determine the image to be processed T as the base image.
In the basic image T, because the face images of the user b and the user d are not respective target face images, the terminal may determine the face images of the user b and the user d in the basic image T as face images to be replaced.
For example, the face image of the user b in the image to be processed R is the target face image of the user b, and the face image of the user d in the image to be processed S is the target face image of the user d, so that the terminal can obtain the face image of the user b in the image to be processed R and the face image of the user d in the image to be processed S.
Then, the terminal can replace the face image to be replaced of the user B in the basic image T with the target face image of the user B in the image R to be processed, and replace the face image to be replaced of the user D in the basic image T with the target face image of the user D in the image S to be processed, so that the basic image T subjected to image replacement processing is obtained.
It can be understood that the face image of each user in the base image T after the image replacement processing is that user's target face image. For example, after the image replacement processing, in the base image T, the face images of users a, b, and d are the face images corresponding to the maximum of their respective eye values, and the face image of user c is the face image in which user c smiles most broadly.
Thereafter, the terminal may acquire the image to be processed Q, R, S, and perform multi-frame noise reduction processing on the base image T subjected to the image replacement processing according to the image Q, R, S. Then, the terminal may store the noise-reduced image T into an album as a photograph.
It can be understood that the four users a, b, c, and d thereby obtain a group photo in which each of them presents their best face image.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image selecting device according to an embodiment of the present disclosure. The image selecting apparatus 300 may include: an obtaining module 301, an identifying module 302, and a selecting module 303.
An obtaining module 301, configured to obtain multiple frames of images to be processed including faces.
The recognition module 302 is configured to perform expression recognition on the face image of each user in the image to be processed to obtain an expression recognition result of each user.
And the selecting module 303 is configured to select a basic image from the to-be-processed image according to the expression recognition result of each user.
In one embodiment, the selecting module 303 may be configured to:
if it is determined that the expressions of all users in the images to be processed are not changed according to the expression recognition result of each user, acquiring an eye value of each user in each image to be processed, wherein the eye value is a numerical value used for representing the size of eyes;
and selecting a basic image from the images to be processed according to the eye value of each user in each image to be processed.
In one embodiment, the selecting module 303 may be configured to:
determining a target face image of each user according to the eye value of each user in each image to be processed, wherein the target face image is an image corresponding to the maximum value of the eye values of the users;
and selecting the image to be processed containing the maximum number of the target face images as a basic image.
In one embodiment, the selecting module 303 may be configured to:
if the facial image of the user with the changed expression in the image to be processed is determined according to the expression recognition result of each user, determining the user with the changed expression as a target user;
for each target user, determining a face image of which the expression meets a preset condition as a target face image of the target user;
for each non-target user, acquiring an eye value of each non-target user in each image to be processed, wherein the eye value is a numerical value used for representing the size of eyes, and determining an image corresponding to the maximum value in the eye values as a target face image of the non-target user;
and selecting the image to be processed containing the maximum number of the target face images as a basic image.
Referring to fig. 7, fig. 7 is another schematic structural diagram of an image selecting device according to an embodiment of the present disclosure. In an embodiment, the image selecting apparatus 300 may further include: an acquisition module 304 and a processing module 305.
The acquisition module 304 is configured to determine a target frame number according to at least two acquired images when an image including a human face is acquired.
Then, the obtaining module 301 may be configured to: and acquiring the images to be processed with the number of the target frames from the acquired multi-frame images.
The processing module 305 is configured to, after said step of selecting a base image from said images to be processed:
determining a face image to be replaced from a basic image, wherein the face image to be replaced is a non-target face image of a user;
acquiring a target face image for replacing each face image to be replaced from the images to be processed, wherein each face image to be replaced and the corresponding target face image are face images of the same user;
and carrying out image replacement processing on each face image to be replaced by using the corresponding target face image to obtain a basic image subjected to the image replacement processing.
In one embodiment, the processing module 305 may be further configured to: and according to the image to be processed, carrying out image noise reduction processing on the basic image subjected to the image replacement processing.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is caused to execute the steps in the image selecting method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the steps in the image selection method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 8, fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The mobile terminal 400 may include a camera module 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 8 is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The camera module 401 may include a single camera module and a dual camera module.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the mobile terminal.
In this embodiment, the processor 403 in the mobile terminal loads the executable code corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, thereby implementing the steps:
acquiring a plurality of frames of images to be processed containing human faces; performing expression recognition on the facial image of each user in the image to be processed to obtain an expression recognition result of each user; and selecting a basic image from the image to be processed according to the expression recognition result of each user.
The embodiment of the invention also provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a diagram illustrating an exemplary image processing circuit. As shown in fig. 9, for convenience of explanation, only the aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 9, the image processing circuit includes an image signal processor 540 and control logic 550. Image data captured by the imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 510. The imaging device 510 may include a camera with one or more lenses 511 and an image sensor 512. The image sensor 512 may include an array of color filters (e.g., Bayer filters), may acquire the light intensity and wavelength information captured by each imaging pixel, and may provide a set of raw image data that can be processed by the image signal processor 540. The sensor 520 may provide the raw image data to the image signal processor 540 based on the sensor 520 interface type. The sensor 520 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The image signal processor 540 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and image signal processor 540 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image signal processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image Memory 530 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. Further, the output of the image signal processor 540 may be transmitted to an encoder/decoder 560 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 570. The encoder/decoder 560 may be implemented by a CPU, GPU, or coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic 550. For example, the statistical data may include image sensor 512 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, and lens 511 shading correction. The control logic 550 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the imaging device 510 and of the ISP based on the received statistical data. For example, the control parameters may include sensor 520 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 511 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 511 shading correction parameters.
The following steps implement the image selection method provided by this embodiment using the image processing technology of fig. 9:
acquiring a plurality of frames of images to be processed containing human faces; performing expression recognition on the facial image of each user in the image to be processed to obtain an expression recognition result of each user; and selecting a basic image from the image to be processed according to the expression recognition result of each user.
In one embodiment, when the electronic device performs the step of selecting the basic image from the images to be processed according to the expression recognition result of each user, the electronic device may perform: if it is determined, according to the expression recognition result of each user, that no user's expression changes across the images to be processed, acquiring an eye value of each user in each image to be processed, wherein the eye value is a numerical value used for representing the size of the eyes; and selecting the basic image from the images to be processed according to the eye value of each user in each image to be processed.
In one embodiment, when the electronic device performs the step of selecting the basic image from the images to be processed according to the eye value of each user in each image to be processed, the electronic device may perform: determining a target face image of each user according to the eye value of each user in each image to be processed, wherein the target face image is the face image corresponding to the maximum of that user's eye values; and selecting the image to be processed containing the largest number of target face images as the basic image.
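A sketch of this "no expression change" branch follows, under stated assumptions: eye_values[i][uid] holds user uid's eye value in frame i, and the eye-value metric itself (how eye size is scored) is left unspecified, as in the text above. Per user, the frame where that user's eyes are widest is that user's target face image; the frame collecting the most target face images wins:

    from collections import Counter
    from typing import Dict, List

    def select_by_eye_value(eye_values: List[Dict[str, float]]) -> int:
        """Return the index of the frame to use as the basic image."""
        votes = Counter()
        for uid in eye_values[0].keys():
            # Frame index where this user's eye value is maximal,
            # i.e. this user's "target face image".
            best = max(range(len(eye_values)), key=lambda i: eye_values[i][uid])
            votes[best] += 1
        # The frame containing the largest number of target face images wins.
        return votes.most_common(1)[0][0]

For example, select_by_eye_value([{"a": 0.3, "b": 0.5}, {"a": 0.6, "b": 0.4}]) tallies one vote per user and returns the winning frame index.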
In one embodiment, when the electronic device performs the step of selecting the basic image from the images to be processed according to the expression recognition result of each user, the electronic device may perform: if it is determined, according to the expression recognition result of each user, that the images to be processed contain a facial image of a user whose expression changes, determining each user whose expression changes as a target user; for each target user, determining a face image whose expression meets a preset condition as the target face image of that target user; for each non-target user, acquiring an eye value of the non-target user in each image to be processed, wherein the eye value is a numerical value used for representing the size of the eyes, and determining the face image corresponding to the maximum of those eye values as the target face image of the non-target user; and selecting the image to be processed containing the largest number of target face images as the basic image.
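Extending the previous sketch to this "expression changed" branch: expressions[i][uid] is the recognised expression of user uid in frame i, and the preset condition is assumed, purely for illustration, to be "the expression is a smile" (the patent does not fix the condition):

    from collections import Counter
    from typing import Dict, List

    def select_with_expression_change(expressions: List[Dict[str, str]],
                                      eye_values: List[Dict[str, float]],
                                      preferred: str = "smile") -> int:
        users = expressions[0].keys()
        # Target users: anyone whose expression label varies across frames.
        changed = {uid for uid in users
                   if len({frame[uid] for frame in expressions}) > 1}
        votes = Counter()
        for uid in users:
            if uid in changed:
                # Target user: first frame meeting the preset condition;
                # fall back to the eye value if no frame qualifies.
                hits = [i for i, f in enumerate(expressions) if f[uid] == preferred]
                best = hits[0] if hits else max(range(len(eye_values)),
                                                key=lambda i: eye_values[i][uid])
            else:
                # Non-target user: widest-eyes frame, as in the previous sketch.
                best = max(range(len(eye_values)), key=lambda i: eye_values[i][uid])
            votes[best] += 1
        return votes.most_common(1)[0][0]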
In one embodiment, before the step of acquiring multiple frames of images to be processed including faces, the electronic device may further perform: when an image containing a human face is collected, determining the number of target frames according to at least two collected images;
then, when the electronic device performs the step of acquiring multiple frames of images to be processed containing faces, it may perform: acquiring, from the collected multi-frame images, a number of to-be-processed images equal to the target frame number;
after the step of selecting the basic image from the images to be processed, the electronic device may further perform: determining a face image to be replaced from the basic image, wherein a face image to be replaced is a face image of a user that is not that user's target face image; acquiring, from the images to be processed, a target face image to replace each face image to be replaced, wherein each face image to be replaced and its corresponding target face image are face images of the same user; and performing image replacement processing on each face image to be replaced with its corresponding target face image, to obtain a basic image subjected to the image replacement processing.
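A compact sketch of the two ideas above, under stated assumptions: the frame-count heuristic uses invented thresholds for face displacement and ambient light (claim 1 names the cues but not the values), and the replacement step pastes rectangular crops without the seam blending a production implementation would add:

    import numpy as np

    def target_frame_count(face_displacement_px: float, ambient_lux: float) -> int:
        # More motion -> fewer frames, since late frames would be misaligned;
        # darker scenes -> more frames, since they help later denoising.
        if face_displacement_px > 20:
            return 2
        if ambient_lux < 50:
            return 8
        return 4

    def replace_faces(base: np.ndarray, replacements) -> np.ndarray:
        """replacements: iterable of (x, y, w, h, crop) tuples, where crop is
        an (h, w, 3) array cut from the frame holding the target face image."""
        out = base.copy()
        for x, y, w, h, crop in replacements:
            out[y:y + h, x:x + w] = crop
        return out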
In one embodiment, after the step of obtaining the basic image subjected to the image replacement processing, the electronic device may further perform: performing, according to the images to be processed, image noise reduction processing on the basic image subjected to the image replacement processing.
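One plausible reading of this step, as a sketch only: average the burst frames (assumed 8-bit and already aligned to the basic image) so that zero-mean sensor noise falls off roughly as the square root of the frame count:

    import numpy as np

    def temporal_denoise(base: np.ndarray, frames) -> np.ndarray:
        # Average in float to avoid uint8 overflow, then convert back.
        stack = np.stack([base.astype(np.float32)] +
                         [f.astype(np.float32) for f in frames])
        return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)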
The above embodiments each emphasize different aspects; for parts not described in detail in a given embodiment, refer to the detailed description of the image selection method above, which is not repeated here.
The image selection device provided in the embodiments of the present application and the image selection method in the above embodiments belong to the same concept; any method provided in the image selection method embodiments may run on the image selection device, and its specific implementation process is described in detail in those embodiments and is not repeated here.
It should be noted that, as those skilled in the art will understand, all or part of the process implementing the image selection method described in the embodiments of the present application may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; its execution may include the processes of the image selection method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In the image selection apparatus of the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented either in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image selection method, apparatus, storage medium, and electronic device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. An image selection method is characterized by comprising the following steps:
when images containing human faces are collected, determining a target frame number according to whether the human faces in at least two collected face images are displaced and according to the ambient light brightness;
acquiring, from the plurality of collected images containing human faces, a number of to-be-processed images equal to the target frame number;
performing expression recognition on the facial image of each user in the image to be processed to obtain an expression recognition result of each user;
if it is determined that the expressions of all users in the images to be processed are not changed according to the expression recognition result of each user, acquiring an eye value of each user in each image to be processed, wherein the eye value is a numerical value used for representing the size of eyes; determining a target face image of each user according to the eye value of each user in each image to be processed, wherein the target face image is an image corresponding to the maximum value of the eye values of the users; selecting the image to be processed containing the largest number of target face images as a basic image;
or, if it is determined according to the expression recognition result of each user that the images to be processed contain a facial image of a user whose expression changes, determining the user whose expression changes as a target user; for each target user, determining a face image whose expression meets a preset condition as the target face image of the target user; for each non-target user, acquiring an eye value of the non-target user in each image to be processed, wherein the eye value is a numerical value used for representing the size of the eyes, and determining the image corresponding to the maximum of the eye values as the target face image of the non-target user; and selecting the image to be processed containing the largest number of target face images as the basic image.
2. The method for selecting an image according to claim 1, wherein before the step of obtaining a plurality of frames of images to be processed including faces, the method further comprises:
when an image containing a human face is collected, determining the number of target frames according to at least two collected images;
the step of obtaining a plurality of frames of images to be processed containing human faces comprises the following steps: acquiring to-be-processed images with the number of the target frames from the acquired multi-frame images;
after the step of selecting the base image from the image to be processed, the method further comprises the following steps:
determining a face image to be replaced from a basic image, wherein the face image to be replaced is a non-target face image of a user;
acquiring a target face image for replacing each face image to be replaced from the images to be processed, wherein each face image to be replaced and the corresponding target face image are face images of the same user;
and carrying out image replacement processing on each face image to be replaced by using the corresponding target face image to obtain a basic image subjected to the image replacement processing.
3. The method for selecting an image according to claim 2, wherein after the step of obtaining the base image subjected to the image replacement processing, the method further comprises:
and according to the image to be processed, carrying out image noise reduction processing on the basic image subjected to the image replacement processing.
4. An image selecting apparatus, comprising:
the acquisition module is used for: when images containing human faces are collected, determining a target frame number according to whether the human faces in at least two collected face images are displaced and according to the ambient light brightness; and acquiring, from the plurality of collected images containing human faces, a number of to-be-processed images equal to the target frame number;
the recognition module is used for carrying out expression recognition on the face image of each user in the image to be processed to obtain an expression recognition result of each user;
the selecting module is used for: if it is determined, according to the expression recognition result of each user, that no user's expression changes across the images to be processed, acquiring an eye value of each user in each image to be processed, wherein the eye value is a numerical value used for representing the size of the eyes; determining a target face image of each user according to the eye value of each user in each image to be processed, wherein the target face image is the image corresponding to the maximum of that user's eye values; and selecting the image to be processed containing the largest number of target face images as the basic image; or, if it is determined according to the expression recognition result of each user that the images to be processed contain a facial image of a user whose expression changes, determining the user whose expression changes as a target user; for each target user, determining a face image whose expression meets a preset condition as the target face image of the target user; for each non-target user, acquiring an eye value of the non-target user in each image to be processed, wherein the eye value is a numerical value used for representing the size of the eyes, and determining the image corresponding to the maximum of the eye values as the target face image of the non-target user; and selecting the image to be processed containing the largest number of target face images as the basic image.
5. A storage medium having stored thereon a computer program, characterized in that the computer program, when executed on a computer, causes the computer to execute the method according to any of claims 1 to 3.
6. An electronic device comprising a memory, a processor, wherein the processor is configured to perform the method of any of claims 1 to 3 by invoking a computer program stored in the memory.
CN201810277025.4A 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment Active CN108574803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810277025.4A CN108574803B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810277025.4A CN108574803B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108574803A CN108574803A (en) 2018-09-25
CN108574803B true CN108574803B (en) 2020-01-14

Family

ID=63574060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810277025.4A Active CN108574803B (en) 2018-03-30 2018-03-30 Image selection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108574803B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259689B (en) * 2018-11-30 2023-04-25 百度在线网络技术(北京)有限公司 Method and device for transmitting information
CN111062279B (en) * 2019-12-04 2023-06-06 深圳先进技术研究院 Photo processing method and photo processing device
CN111263073B (en) * 2020-02-27 2021-11-09 维沃移动通信有限公司 Image processing method and electronic device
CN112036311A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Image processing method and device based on eye state detection and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4720810B2 (en) * 2007-09-28 2011-07-13 富士フイルム株式会社 Image processing apparatus, imaging apparatus, image processing method, and image processing program
TWI447658B (en) * 2010-03-24 2014-08-01 Ind Tech Res Inst Facial expression capturing method and apparatus therewith
CN104899544B (en) * 2014-03-04 2019-04-12 佳能株式会社 Image processing apparatus and image processing method
CN104243818B (en) * 2014-08-29 2018-02-23 小米科技有限责任公司 Image processing method, device and equipment
CN105635567A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Shooting method and device
CN107566748A (en) * 2017-09-22 2018-01-09 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN107734253B (en) * 2017-10-13 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer-readable storage medium
CN107817939B (en) * 2017-10-27 2023-02-07 维沃移动通信有限公司 Image processing method and mobile terminal

Also Published As

Publication number Publication date
CN108574803A (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN109068067B (en) Exposure control method and device and electronic equipment
CN110062160B (en) Image processing method and device
CN109040609B (en) Exposure control method, exposure control device, electronic equipment and computer-readable storage medium
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
CN110191291B (en) Image processing method and device based on multi-frame images
CN110166708B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
CN108401110B (en) Image acquisition method and device, storage medium and electronic equipment
CN108574803B (en) Image selection method and device, storage medium and electronic equipment
WO2020207261A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN108093158B (en) Image blurring processing method and device, mobile device and computer readable medium
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN111327824B (en) Shooting parameter selection method and device, storage medium and electronic equipment
CN110766621A (en) Image processing method, image processing device, storage medium and electronic equipment
CN109151333B (en) Exposure control method, exposure control device and electronic equipment
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110728705B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110430370B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110717871A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113012081A (en) Image processing method, device and electronic system
CN108052883B (en) User photographing method, device and equipment
US11503223B2 (en) Method for image-processing and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: Guangdong Opel Mobile Communications Co., Ltd.

GR01 Patent grant