US11295416B2 - Method for picture processing, computer-readable storage medium, and electronic device - Google Patents
- Publication number: US11295416B2 (application No. US16/693,961)
- Authority: US (United States)
- Prior art keywords: pictures, preset, picture, target, light sensitivity
- Prior art date: 2017-05-31
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
  - G06T5/00—Image enhancement or restoration
    - G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    - G06T5/70—Denoising; Smoothing
    - G06T5/002
  - G06T2207/00—Indexing scheme for image analysis or image enhancement
    - G06T2207/10—Image acquisition modality: G06T2207/10004—Still image; Photographic image
    - G06T2207/20—Special algorithmic details: G06T2207/20172—Image enhancement details; G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    - G06T2207/30—Subject of image; Context of image processing: G06T2207/30196—Human being; Person; G06T2207/30201—Face
Definitions
- This disclosure relates to the technical field of picture processing, and particularly to a method for picture processing, a computer-readable storage medium, and an electronic device.
- terminals can take pictures of high sharpness with the aid of various image processing algorithms.
- the terminal can take a number of pictures at a high speed, and a picture having low noise can be obtained via multi-frame denoising processing.
- Implementations of the present disclosure provide a method for picture processing, a computer-readable storage medium, and an electronic device.
- the implementations of the present disclosure provide a method for picture processing.
- the method includes the following. Multiple pictures are obtained through photographing, where each of the multiple pictures contains facial information. At least two pictures are selected from the multiple pictures, and according to the facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained. At least one target picture is determined in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, where the one of the at least two pictures is determined as one of the at least one target picture. Based on that the at least one target picture is embodied as multiple target pictures, a multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- the implementations of the present disclosure provide a computer-readable storage medium.
- the storage medium stores computer programs.
- the computer programs are loaded and executed by a processor to implement the operations of the method for picture processing provided in the implementations of the present disclosure.
- the implementations of the present disclosure provide an electronic device.
- the electronic device includes a memory, a processor, and computer programs stored in the memory and capable of being run in the processor.
- the processor is configured to execute the computer programs to implement the following. Multiple pictures are obtained through photographing, where each of the multiple pictures contains facial information. At least two pictures are selected from the multiple pictures, and according to the facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained. The at least two pictures are determined as target pictures in response to the feature information of the preset facial part in each of the at least two pictures indicating that the preset facial part is in a preset feature state. A multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- FIG. 1 is a schematic flow chart illustrating a method for picture processing according to an implementation of the present disclosure.
- FIG. 2 is a schematic flow chart illustrating a method for picture processing according to another implementation of the present disclosure.
- FIGS. 3-5 are schematic diagrams illustrating scenarios to which a method for picture processing is applied according to an implementation of the present disclosure.
- FIG. 6 is a schematic structural diagram illustrating a device for picture processing according to an implementation of the present disclosure.
- FIG. 7 is a schematic structural diagram illustrating a device for picture processing according to another implementation of the present disclosure.
- FIG. 8 is a schematic structural diagram illustrating a mobile terminal according to an implementation of the present disclosure.
- FIG. 9 is a schematic structural diagram illustrating a mobile terminal according to another implementation of the present disclosure.
- Implementations of the present disclosure provide a method for picture processing.
- the method includes the following. Multiple pictures are obtained through photographing, where each of the multiple pictures contains facial information. At least two pictures are selected from the multiple pictures, and according to the facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained. At least one target picture is determined in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, the one of the at least two pictures is determined as one of the at least one target picture. Based on that the at least one target picture is embodied as multiple target pictures, a multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- the preset facial part in each of the at least two pictures is an eye part in each of the at least two pictures
- the preset feature state is an eyes-open state
- the at least one target picture is determined as follows. In chronological order of taking the at least two pictures, whether the feature information of the preset facial part in each of the at least two pictures indicates that the preset facial part in each of the at least two pictures is in the preset feature state is detected. One of the at least two pictures is determined as one of the at least one target picture based on that the preset facial part in the one of the at least two pictures is in the preset feature state.
- the method further includes the following. Based on that the at least one target picture is embodied as one target picture, the one target picture is determined as the output picture.
- the method further includes the following. Searching for another target picture is stopped based on that the number of the target pictures determined reaches a preset number.
- the method may further include the following.
- a light sensitivity value used for photographing is obtained.
- a target number corresponding to the light sensitivity value is determined, where the target number is larger than or equal to two.
- the target number is determined as the preset number.
- the method further includes the following.
- a correspondence table is set, where the correspondence table contains multiple light sensitivity value ranges and numbers each corresponding to one of the multiple light sensitivity value ranges, and where the larger the light sensitivity values in one of the light sensitivity value ranges, the larger the number corresponding to that light sensitivity value range.
- the target number corresponding to the light sensitivity value is determined as follows.
- the correspondence table is queried and a light sensitivity value range containing the light sensitivity value is determined according to the correspondence table.
- a number corresponding to the light sensitivity value range containing the light sensitivity value is determined as the target number.
- FIG. 1 is a schematic flow chart illustrating a method for picture processing according to an implementation of the present disclosure. The method includes the following.
- an execution subject in the implementation of the present disclosure may be a terminal device such as a smart phone, a tablet computer, or the like.
- the terminal when a multi-frame denoising function is activated, the terminal can input four pictures, and perform multi-frame synthesis on the four pictures to obtain a picture having low noise.
- a facial part in a picture obtained via the multi-frame synthesis is blurred, that is, noise of the facial part in the picture is relatively high.
- the terminal can first obtain the multiple pictures through photographing, where each of the multiple pictures contains the facial information.
- the terminal activates the multi-frame denoising function.
- the terminal may take multiple pictures continuously. For example, when taking a selfie or a picture of a user's friend, the terminal may quickly and continuously take eight or ten pictures. It can be understood that the eight or ten pictures taken by the terminal each contain facial information and have the same scene information.
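- As a concrete illustration of such burst shooting, the sketch below grabs a fixed number of consecutive frames with OpenCV; the capture API, frame count, and function name are assumptions for illustration only and are not prescribed by the disclosure.

```python
# Minimal sketch of burst capture with OpenCV (an assumption for illustration;
# the disclosure does not prescribe a particular capture API).
import cv2

def capture_burst(num_frames=8, camera_index=0):
    """Quickly capture `num_frames` consecutive frames from one camera."""
    cap = cv2.VideoCapture(camera_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break  # camera not ready or stream ended
            frames.append(frame)
    finally:
        cap.release()
    return frames
```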
- At block 102 at least two pictures are selected from the multiple pictures, and according to facial information contained in each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained.
- At block 103 at least one target picture is determined in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, where the one of the at least two pictures is determined as one of the at least one target picture.
- the operations at blocks 102 and 103 can include the following.
- the terminal can select at least two pictures from the eight pictures.
- the terminal can select six pictures from the eight pictures.
- the six pictures are respectively pictures A, B, C, D, E, and F.
- the terminal can then obtain, according to facial information of each of the pictures A, B, C, D, E, and F, feature information of a preset facial part in each of the pictures A, B, C, D, E, and F.
- the terminal can obtain the feature information of the preset facial part in the picture A.
- the preset facial part may be easily changed between different states, for example, may be an eye part or a mouth part.
- a person to be photographed may blink, such that eye parts in some pictures are in an eyes-closed state, while eye parts in other pictures are in an eyes-open state.
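- The disclosure does not specify how the feature information of an eye part is computed. One common way to obtain such a feature (an assumption here, not necessarily the technique used in this disclosure) is the eye aspect ratio computed from six landmark points around the eye, which drops sharply when the eye closes:

```python
# One possible eye-state feature (an assumption for illustration): the eye
# aspect ratio (EAR) computed from six landmark points around one eye.
import numpy as np

def eye_aspect_ratio(eye_landmarks):
    """eye_landmarks: array-like of shape (6, 2), ordered p1..p6 around the eye."""
    p = np.asarray(eye_landmarks, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def is_eyes_open(eye_landmarks, threshold=0.2):
    """Treat the eye as open when the EAR exceeds a (tunable) threshold."""
    return eye_aspect_ratio(eye_landmarks) > threshold
```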
- the terminal can detect, according to the obtained feature information of the preset facial part in each of the pictures, whether the feature information of the preset facial part in each of the pictures indicates that the preset facial part in each of the pictures is in the preset feature state.
- when detecting that the feature information of the preset facial part in one of the pictures indicates that the preset facial part in the one picture is not in the preset feature state, the terminal can determine that the one picture is not a target picture. For example, when the terminal detects that the feature information of the preset facial part in the picture C indicates that the preset facial part in the picture C is not in the preset feature state, the terminal can determine that the picture C is not a target picture. That is, the terminal does not use the picture C as an input picture for multi-frame denoising processing.
- when detecting that the feature information of the preset facial part in one of the pictures indicates that the preset facial part in the one picture is in the preset feature state, the terminal can determine the one picture as a target picture. For example, when the terminal detects that the feature information of the preset facial part in the picture A indicates that the preset facial part in the picture A is in the preset feature state, the terminal can determine the picture A as a target picture. That is, the terminal uses the picture A as an input picture for multi-frame denoising processing.
- the multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- for example, suppose that among the pictures A, B, C, D, E, and F, the pictures A, B, D, and E are determined as target pictures.
- the terminal can perform the multi-frame denoising processing on the pictures A, B, D, and E, and obtain a picture via multi-frame synthesis, that is, an output picture.
- in the target pictures, the preset facial parts are all in the preset feature state, that is, the features of the preset facial parts are consistent across the target pictures. Compared with the case in which the features of the preset facial parts are not consistent, in the implementation of the present disclosure, noise of the preset facial part in the output picture obtained via performing the multi-frame denoising processing on the target pictures is relatively low.
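- The disclosure likewise does not fix a particular multi-frame denoising algorithm. A minimal sketch, assuming the target pictures are already aligned, is simple temporal averaging of the frames, which suppresses random noise that differs from frame to frame:

```python
# A minimal sketch of multi-frame denoising by temporal averaging (one common
# approach; the disclosure does not fix a particular denoising algorithm, and a
# production pipeline would typically register/align the frames first).
import numpy as np

def multi_frame_denoise(target_pictures):
    """Average a list of same-sized uint8 frames to suppress random noise."""
    if len(target_pictures) < 2:
        raise ValueError("multi-frame denoising needs at least two pictures")
    stack = np.stack([p.astype(np.float32) for p in target_pictures], axis=0)
    averaged = stack.mean(axis=0)
    return np.clip(averaged, 0, 255).astype(np.uint8)
```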
- FIG. 2 is a schematic flow chart illustrating a method for picture processing according to an implementation of the present disclosure. The method includes the following.
- a terminal obtains multiple pictures through photographing, where each of the multiple pictures contains facial information.
- the terminal may quickly and continuously take multiple pictures, for example, eight pictures.
- the terminal can obtain the eight pictures taken, where the eight pictures each contain facial information of the user's friend.
- the terminal selects at least two pictures from the multiple pictures, and obtains, according to facial information of each of the at least two pictures, feature information of an eye part in each of the at least two pictures.
- the terminal can select at least two pictures from the eight pictures. In the implementation of the present disclosure, the terminal can select all the eight pictures.
- the terminal can obtain, according to the facial information of each of the eight pictures, the feature information of the eye part in each of the eight pictures.
- the eight pictures are respectively the pictures A, B, C, D, E, F, G, and H.
- the terminal can obtain, according to the facial information of the picture A, the feature information of the eye part in the picture A.
- the terminal can obtain, according to the facial information of the picture B, the feature information of the eye part in the picture B, and so on.
- the terminal detects, in chronological order of taking the at least two pictures, whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in an eyes-open state.
- the terminal determines at least one target picture in response to the feature information of the eye part in one of the at least two pictures indicating that the eye part is in the eyes-open state, where the terminal determines the one of the at least two pictures as one of the at least one target picture.
- the operations at blocks 203 and 204 may include the following. After obtaining the feature information of the eye part in each of the at least two pictures, in chronological order of taking the at least two pictures, the terminal can detect whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state, that is, the eyes are not closed.
- the terminal can first detect whether the feature information of the eye part in the picture A indicates that the eye part in the picture A is in the eyes-open state.
- if the feature information of the eye part in the picture A indicates that the eye part in the picture A is in the eyes-open state, the terminal can determine the picture A as a target picture.
- if the feature information of the eye part in the picture A indicates that the eye part in the picture A is not in the eyes-open state, the terminal can determine that the picture A is not a target picture. That is, the terminal does not use the picture A as an input picture for a multi-frame denoising processing.
- the terminal detects whether the feature information of the eye part in the picture B indicates that the eye part in the picture B is in the eyes-open state. If the feature information of the eye part in the picture B indicates that the eye part in the picture B is in the eyes-open state, the terminal can determine the picture B as a target picture. If the feature information of the eye part in the picture B indicates that the eye part in the picture B is not in the eyes-open state, the terminal can determine that the picture B is not a target picture.
- the terminal can detect the pictures C, D, E, F, G, and H in sequence.
- the terminal stops searching for a target picture when the number of the target pictures determined reaches a preset number.
- each time a target picture is determined, the terminal can further detect whether the number of the target pictures determined reaches the preset number.
- if the number of the target pictures determined reaches the preset number, the terminal can stop searching for a target picture.
- if the number of the target pictures determined does not reach the preset number, the terminal can further search for a target picture.
- the preset number is four. That is, during the process of sequentially detecting whether the eye parts in the pictures A, B, C, D, E, F, G, and H are in the eyes-open state, if the terminal detects that the number of the target pictures determined by the terminal reaches four, the terminal can stop searching for a target picture.
- the terminal determines the pictures A and B as target pictures, determines that the pictures C and D are not target pictures, and determines the pictures E and F as target pictures.
- the terminal can detect that the number of the target pictures determined by the terminal reaches four, and then the terminal can stop searching for a target picture. That is, the terminal can stop detecting whether the eye parts in the pictures G and H are in the eyes-open state.
- if the number of the target pictures determined does not reach four, the terminal can further search for a target picture until the picture H is detected.
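- The chronological scan with early stopping described above can be summarized by the following sketch; `eyes_open` stands for any eye-state predicate (for example the EAR-based check sketched earlier) and is an assumed helper, not part of the disclosure:

```python
# Sketch of the chronological scan: pictures are checked in the order they were
# taken, and the search stops once the preset number of target pictures is found.
def select_target_pictures(pictures_in_order, eyes_open, preset_number):
    targets = []
    for picture in pictures_in_order:          # chronological order of capture
        if eyes_open(picture):                 # preset facial part in the preset state
            targets.append(picture)
            if len(targets) >= preset_number:  # enough targets: stop searching
                break
    return targets
```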
- the terminal performs the multi-frame denoising processing on the target pictures to obtain an output picture.
- the terminal can perform the multi-frame denoising processing on the target pictures to obtain the output picture.
- the terminal can perform the multi-frame denoising processing on the target pictures A, B, E, and F to obtain an output picture I.
- since the terminal determines that the eye parts in the target pictures A, B, E, and F are all in the eyes-open state, the features of the eye parts in the target pictures A, B, E, and F are consistent.
- compared with the case in which the facial features are not consistent (for example, when the multi-frame denoising processing is performed on the pictures A, B, C, and D, where the eye parts in the pictures A and B are in the eyes-open state and the eye parts in the pictures C and D are not in the eyes-open state), noise of the facial part in the output picture obtained by performing the multi-frame denoising processing on the target pictures A, B, E, and F is relatively low, and thus the output picture is of high sharpness.
- the preset number at block 205 can be determined as follows.
- the terminal obtains a light sensitivity value used for photographing.
- the terminal determines a target number corresponding to the light sensitivity value, where the target number is larger than or equal to two.
- the terminal determines the target number as the preset number.
- the terminal can use different light sensitivity values to take photos.
- the larger the light sensitivity value, the higher the image noise of a picture taken by the terminal, and the lower the quality of the picture.
- the terminal can use different numbers of pictures as input pictures when performing the multi-frame denoising processing.
- the terminal can set a correspondence table in advance.
- the correspondence table contains multiple light sensitivity value ranges and numbers each corresponding to one of the light sensitivity value ranges.
- the larger the light sensitivity values in one light sensitivity value range, the larger the number corresponding to that light sensitivity value range.
- a number corresponding to a light sensitivity value range [200, 800) is four
- a number corresponding to a light sensitivity value range [800, 1000] is five
- a number corresponding to a light sensitivity value range whose lower limit is larger than 1000 is six, and so on.
- the terminal can first obtain the light sensitivity value used for photographing.
- the terminal queries the correspondence table to determine the target number corresponding to the light sensitivity value, and determines the target number as the preset number.
- the preset number determined by the terminal represents the number of target pictures needed to be obtained when performing the multi-frame denoising processing.
- the terminal can further perform the following.
- the terminal sets the correspondence table, where the correspondence table records multiple light sensitivity value ranges and numbers each corresponding to one of the light sensitivity value ranges, and where the larger the light sensitivity values in one of the light sensitivity value ranges, the larger the number corresponding to that light sensitivity value range.
- the terminal can determine the target number corresponding to the light sensitivity value as follows.
- the terminal queries the correspondence table and determines, according to the correspondence table, a light sensitivity value range containing the light sensitivity value.
- the terminal determines a number corresponding to the light sensitivity value range containing the light sensitivity value as the target number.
- the light sensitivity value used for photographing is 700
- the light sensitivity value falls within a range of [200, 800)
- a number corresponding to the range is four.
- the terminal can then determine that the target number is four. That is, the terminal needs to obtain four pictures for the multi-frame denoising processing.
- the light sensitivity value used for photographing is 800
- the light sensitivity value falls within a range of [800, 1000]
- a number corresponding to the range is five.
- the terminal can then determine that the target number is five. That is, the terminal needs to obtain five pictures for the multi-frame denoising processing.
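- A minimal sketch of this correspondence lookup is given below, hard-coding the example ranges given above ([200, 800) maps to four pictures, [800, 1000] to five, above 1000 to six); the behaviour below 200 is not specified in the text and is an assumption:

```python
# Sketch of the light-sensitivity correspondence table using the example
# ranges from this disclosure.
def target_number_for_iso(light_sensitivity_value):
    if 200 <= light_sensitivity_value < 800:
        return 4
    if 800 <= light_sensitivity_value <= 1000:
        return 5
    if light_sensitivity_value > 1000:
        return 6
    return 4  # below 200: assume the smallest table entry (not specified in the text)

assert target_number_for_iso(700) == 4
assert target_number_for_iso(800) == 5
```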
- the following condition may occur, i.e., the number of the target pictures determined by the terminal is smaller than the preset number. For example, when the light sensitivity value used for photographing is 800, the terminal needs to obtain five pictures for performing the multi-frame denoising processing. However, the terminal only determines four target pictures. Under this condition, the terminal can only use the four pictures to perform the multi-frame denoising processing. That is, under the condition that the number of the target pictures is smaller than the preset number, if the number of the target pictures is not smaller than two, the terminal can still perform the multi-frame denoising processing.
- the method of the implementation of the present disclosure may further include the following. Based on that the at least one target picture is embodied as one target picture, the terminal can determine the one target picture as the output picture.
- the terminal needs to obtain five target pictures for performing the multi-frame denoising processing.
- the terminal only determines one target picture. For example, among the eight pictures taken by the terminal, the eye part in only one picture is in the eyes-open state. Under this condition, since there is only one target picture, the terminal can directly determine the target picture as the output picture.
- when the terminal determines only one target picture in which the eye part is in the eyes-open state, or no target picture is determined by the terminal, it can be determined that the picture that needs to be taken is one in which the eye part is in the eyes-closed state. At this point, the terminal can select multiple pictures each containing an eye part in the eyes-closed state for performing multi-frame synthesis, and output a picture containing an eye part in the eyes-closed state.
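- The fallback behaviour described in the last few paragraphs can be summarized as follows; `multi_frame_denoise` stands for the averaging sketch given earlier, and the function name is illustrative:

```python
# Sketch of the fallback logic: fewer targets than the preset number still
# allows denoising as long as at least two remain; a single eyes-open target is
# output directly; with none, eyes-closed pictures are synthesized instead.
def produce_output(targets, closed_eye_pictures, multi_frame_denoise):
    if len(targets) >= 2:
        return multi_frame_denoise(targets)        # normal multi-frame path
    if len(targets) == 1:
        return targets[0]                          # single target becomes the output
    # no eyes-open picture: the intended shot is an eyes-closed one
    return multi_frame_denoise(closed_eye_pictures)
```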
- FIGS. 3-5 are schematic views illustrating scenarios to which a method for picture processing is applied according to an implementation of the present disclosure.
- the multi-frame denoising function of the terminal is activated.
- the terminal can continuously take eight pictures of the user's friend. In chronological order of photographing, the pictures A, B, C, D, E, F, G, and H are taken sequentially. In addition, the terminal determines that the light sensitivity value used for photographing is 600, and according to the light sensitivity value, the terminal determines that four target pictures need to be obtained for the multi-frame denoising processing.
- the terminal can obtain the eight pictures A, B, C, D, E, F, G, and H. Then, the terminal selects all the eight pictures. The terminal can then determine, with face recognition technology, a facial region in each of the eight pictures, then determines an eye part in each of the eight pictures, and then extracts feature information of the eye part in each of the eight pictures. As illustrated in FIG. 3 , one of the eight pictures is taken as an example.
- the terminal can detect, in chronological order of photographing, whether the feature information of the eye part in each of the eight pictures indicates that the eye part in each of the eight pictures is in the eyes-open state.
- the eye part being in the eyes-closed state means that eyes are closed
- the eye part being in the eyes-open state means that the eyes are opened.
- if the feature information of the eye part in one picture indicates that the eye part in the one picture is in the eyes-open state, the terminal can determine the one picture as a target picture. If the feature information of the eye part in one picture indicates that the eye part in the one picture is in the eyes-closed state, the terminal can determine that the one picture is not a target picture.
- for the picture A, according to the feature information of the eye part in the picture A, the terminal detects that the feature information of the eye part in the picture A indicates that the eye part in the picture A is in the eyes-open state, and then the terminal determines the picture A as a target picture, as illustrated in FIG. 5 .
- for the picture C, according to the feature information of the eye part in the picture C, the terminal detects that the feature information of the eye part in the picture C indicates that the eye part in the picture C is in the eyes-closed state, and then the terminal discards the picture C, that is, determines that the picture C is not a target picture, as illustrated in FIG. 4 .
- the terminal can calculate the number of the target pictures determined by the terminal. If the number of the target pictures determined by the terminal reaches four, the terminal can stop searching for a target picture. If the number of the target pictures determined by the terminal does not reach four, the terminal can further search for a target picture.
- the terminal determines the pictures A and B as target pictures, determines that the pictures C and D are not target pictures, and determines the pictures E and F as target pictures. After determining the picture F as the target picture, the terminal detects that the number of the target pictures reaches four, and then the terminal can stop searching for a target picture, that is, the terminal does not detect whether the pictures G and H are target pictures.
- the terminal After determining the pictures A, B, E, and F as target pictures, the terminal can perform the multi-frame denoising processing on the four pictures to obtain an output picture.
- FIG. 6 is a schematic structural diagram illustrating a device for picture processing according to an implementation of the present disclosure.
- the device 300 for picture processing includes a first obtaining module 301 , a second obtaining module 302 , a detecting module 303 , and an outputting module 304 .
- the first obtaining module 301 is configured to obtain multiple pictures through photographing, where each of the multiple pictures contains facial information.
- the terminal may take multiple pictures continuously, for example, eight pictures. Thereafter, the first obtaining module 301 of the terminal can obtain the eight pictures, where each of the eight pictures contains facial information of the user's friend.
- the second obtaining module 302 is configured to select at least two pictures from the multiple pictures, and obtain, according to facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures.
- the detecting module 303 is configured to determine at least one target picture in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, where the one of the at least two pictures is determined as one of the at least one target picture.
- the second obtaining module 302 can select at least two pictures from the eight pictures. For example, the second obtaining module 302 can first select six pictures from the eight pictures. For example, the six pictures are respectively the pictures A, B, C, D, E, and F.
- the second obtaining module 302 can then obtain, according to facial information of each of the pictures A, B, C, D, E, and F, feature information of a preset facial part in each of the pictures A, B, C, D, E, and F. For example, according to the facial information of the picture A, the second obtaining module 302 can obtain the feature information of the preset facial part in the picture A.
- the preset facial part may be easily changed between different states, for example, may be an eye part or a mouth part.
- a person to be photographed may blink, such that eye parts in some pictures are in an eyes-closed state, while eye parts in other pictures are in an eyes-open state.
- the detecting module 303 can detect, according to the obtained feature information of the preset facial part in each of the pictures, whether the feature information of the preset facial part in each of the pictures indicates that the preset facial part in each of the pictures is in the preset feature state.
- when the detecting module 303 detects that the feature information of the preset facial part in one of the pictures indicates that the preset facial part in the one picture is not in the preset feature state, the terminal can determine that the one picture is not a target picture. For example, when the detecting module 303 detects that the feature information of the preset facial part in the picture C indicates that the preset facial part in the picture C is not in the preset feature state, the terminal can determine that the picture C is not a target picture. That is, the terminal does not use the picture C as an input picture for multi-frame denoising processing.
- when the detecting module 303 detects that the feature information of the preset facial part in one of the pictures indicates that the preset facial part in the one picture is in the preset feature state, the detecting module 303 can determine the one picture as a target picture. For example, when the detecting module 303 detects that the feature information of the preset facial part in the picture A indicates that the preset facial part in the picture A is in the preset feature state, the detecting module 303 can determine the picture A as a target picture. That is, the detecting module 303 uses the picture A as an input picture for multi-frame denoising processing.
- the outputting module 304 is configured to perform, based on that the at least one target picture is embodied as a plurality of target pictures, the multi-frame denoising processing on the target pictures to obtain an output picture.
- for example, suppose that among the pictures A, B, C, D, E, and F, the pictures A, B, D, and E are determined as target pictures by the detecting module 303 .
- the outputting module 304 can perform the multi-frame denoising processing on the pictures A, B, D, and E, and obtain a picture via multi-frame synthesis, that is, the output picture.
- the second obtaining module 302 can be configured to obtain, according to facial information of each of the at least two pictures, feature information of an eye part in each of the at least two pictures.
- the detecting module 303 can be configured to determine one of the at least two pictures as a target picture when the feature information of the eye part in the one of the at least two pictures indicates that the eye part is in the eyes-open state.
- the second obtaining module 302 can obtain the facial information of each of the at least two pictures, and obtain the feature information of the eye part in each of the at least two pictures. Thereafter, the detecting module 303 can detect whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state.
- if the feature information of the eye part in one of the at least two pictures indicates that the eye part in the one of the at least two pictures is in the eyes-open state, the detecting module 303 can determine the one of the at least two pictures as a target picture.
- otherwise, the detecting module 303 can determine that the one of the at least two pictures is not a target picture. That is, the terminal does not use the one of the at least two pictures as an input picture for multi-frame denoising processing.
- the detecting module 303 is configured to detect, in chronological order of taking the at least two pictures, whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in an eyes-open state, and determine the at least two pictures as target pictures when the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state.
- the detecting module 303 can detect, in chronological order of taking the at least two pictures, whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state.
- the detecting module 303 can first detect whether the feature information of the eye part in the picture A indicates that the eye part in the picture A is in the eyes-open state.
- if the feature information of the eye part in the picture A indicates that the eye part in the picture A is in the eyes-open state, the detecting module 303 can determine the picture A as a target picture.
- if not, the detecting module 303 can determine that the picture A is not a target picture. That is, the terminal does not use the picture A as an input picture for the multi-frame denoising processing.
- the terminal can detect the pictures B, C, D, E, F, G, and H in sequence.
- FIG. 7 is a schematic structural diagram illustrating a device for picture processing according to an implementation of the present disclosure.
- the device 300 for picture processing may further include a stop searching module 305 , a first determining module 306 , and a second determining module 307 .
- the stop searching module 305 is configured to stop searching for a target picture when the number of the target pictures determined reaches a preset number.
- each time a target picture is determined, the stop searching module 305 can further detect whether the number of the target pictures determined reaches the preset number.
- if the number of the target pictures determined reaches the preset number, the stop searching module 305 can stop searching for a target picture.
- if the number of the target pictures determined does not reach the preset number, the terminal can further search for a target picture.
- the preset number is four. That is, during the process of sequentially detecting whether the eye parts in the pictures A, B, C, D, E, F, G, and H are in the eyes-open state, if the stop searching module 305 detects that the number of the target pictures determined by the terminal reaches four, the stop searching module 305 can stop searching for a target picture.
- the first determining module 306 is configured to obtain a light sensitivity value used for photographing, determine a target number corresponding to the light sensitivity value, and determine the target number as the preset number, where the target number is larger than or equal to two.
- the terminal can use different light sensitivity values to take photos.
- the larger the light sensitivity value, the higher the image noise of a picture taken by the terminal, and the lower the quality of the picture.
- the terminal can use different numbers of pictures as input pictures when performing the multi-frame denoising processing.
- the terminal can set a correspondence table in advance.
- the correspondence table contains multiple light sensitivity value ranges and numbers each corresponding to one of the light sensitivity value ranges.
- the larger the light sensitivity values in one light sensitivity value range, the larger the number corresponding to that light sensitivity value range.
- a number corresponding to a light sensitivity value range [200, 800) is four
- a number corresponding to a light sensitivity value range [800, 1000] is five
- a number corresponding to a light sensitivity value range whose lower limit is larger than 1000 is six, and so on.
- the first determining module 306 can first obtain the light sensitivity value used for photographing. The first determining module 306 then queries the correspondence table to determine the target number corresponding to the light sensitivity value, and determines the target number as the preset number.
- the preset number determined by the terminal represents the number of target pictures needed to be obtained when performing the multi-frame denoising processing.
- for example, when the light sensitivity value used for photographing falls within the range [200, 800), the terminal needs to obtain four pictures for performing the multi-frame denoising processing.
- when the light sensitivity value used for photographing falls within the range [800, 1000], the terminal needs to obtain five pictures for performing the multi-frame denoising processing.
- the first determining module 306 is further configured to set the correspondence table, where the correspondence table records multiple light sensitivity value ranges and numbers each corresponding to one of the light sensitivity value ranges, and where the larger the light sensitivity values in one light sensitivity value range, the larger the number corresponding to that light sensitivity value range.
- the first determining module 306 configured to determine the target number corresponding to the light sensitivity value is configured to query the correspondence table, determine, according to the correspondence table, a light sensitivity value range containing the light sensitivity value, and determine a number corresponding to the light sensitivity value range containing the light sensitivity value as the target number.
- the second determining module 307 is configured to determine, based on that the at least one target picture is embodied as one target picture, the one target picture as the output picture.
- the terminal needs to obtain five target pictures for performing the multi-frame denoising processing.
- the detecting module 303 only determines one target picture. For example, among the eight pictures taken by the terminal, the eye part in only one picture is in the eyes-open state. Under this condition, since there is only one target picture, the second determining module 307 can directly determine the target picture as the output picture.
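- Purely as an illustration of how the modules of FIGS. 6 and 7 fit together, the sketch below composes helpers like the ones sketched earlier into one class; the class, method, and parameter names are assumptions for illustration, not part of the disclosure.

```python
# Rough sketch of the device of FIGS. 6-7 as a single class. The callables
# passed in stand for the eye-state predicate, the ISO-to-number lookup, and
# the multi-frame denoising routine sketched earlier in this description.
class PictureProcessingDevice:
    def __init__(self, eyes_open, iso_to_number, denoise):
        self.eyes_open = eyes_open          # used by the detecting module 303
        self.iso_to_number = iso_to_number  # first determining module 306
        self.denoise = denoise              # outputting module 304

    def process(self, pictures_in_order, light_sensitivity_value):
        preset_number = self.iso_to_number(light_sensitivity_value)  # module 306
        targets = []
        for picture in pictures_in_order:                            # modules 302/303
            if self.eyes_open(picture):
                targets.append(picture)
            if len(targets) >= preset_number:                        # module 305
                break
        if not targets:
            return None        # caller falls back to eyes-closed synthesis (see above)
        if len(targets) == 1:
            return targets[0]  # second determining module 307
        return self.denoise(targets)                                 # module 304
```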
- Implementations of the present disclosure further provide a computer-readable storage medium.
- the computer-readable storage medium stores computer programs.
- the computer programs are loaded and executed by a processor to implement the method for picture processing provided in the implementations of the present disclosure.
- Implementations of the present disclosure further provide an electronic device.
- the electronic device includes a memory, a processor, and computer programs stored in the memory and capable of being run in the processor.
- the processor is configured to execute the computer programs to implement the method for picture processing according to an implementation of the present disclosure.
- the electronic device may be a mobile terminal such as a tablet computer, a mobile phone and the like.
- FIG. 8 is a schematic structural diagram illustrating a mobile terminal according to an implementation of the present disclosure.
- a mobile terminal 400 may include a display unit 401 , a memory 402 , a processor 403 and the like. Those skilled in the art can understand that a structure of the mobile terminal illustrated in FIG. 8 does not constitute any limitation on the mobile terminal.
- the mobile terminal may include more or fewer components than illustrated, a combination of some components, or different component arrangements.
- the display unit 401 may be configured to display images and texts, for example, may be a display screen and the like.
- the memory 402 is configured to store application programs and data.
- the application programs stored in the memory 402 contain executable codes.
- the application programs can form various functional modules.
- the processor 403 runs the application programs stored in the memory 402 to perform various functional application and data processing.
- the processor 403 is a control center of the mobile terminal.
- the processor 403 is coupled to various parts of the whole mobile terminal through various interfaces and lines, runs or executes the application programs stored in the memory 402 and invokes data stored in the memory 402 to perform various functions of the mobile terminal and process data, thereby monitoring the mobile terminal as a whole.
- the processor 403 of the mobile terminal can load, according to the following instructions, executable codes corresponding to one or more application programs into the memory 402 .
- the processor 403 runs the application programs stored in the memory 402 to realize the following.
- Multiple pictures are obtained through photographing, where each of the multiple pictures contains facial information.
- At least two pictures are selected from the multiple pictures, and according to the facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained.
- At least one target picture is determined in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, where the one of the at least two pictures is determined as one of the at least one target picture.
- a multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- FIG. 9 is a schematic structural view illustrating a mobile terminal according to another implementation of the present disclosure.
- a terminal 500 may include a memory 501 including one or more computer-readable storage mediums, an input unit 502 , a display unit 503 , a camera unit 504 , a processor 505 including one or more processing cores, a power source 506 , and the like.
- a structure of the mobile terminal illustrated in FIG. 9 does not constitute any limitation on the mobile terminal.
- the mobile terminal may include more or fewer components than illustrated, a combination of some components, or different component arrangements.
- the memory 501 is configured to store application programs and data.
- the application programs stored in the memory 501 contain executable program codes.
- the application programs can form various functional modules.
- the processor 505 is configured to execute various function applications and data processing by running the application programs stored in the memory 501 .
- the input unit 502 may be configured to receive input digital or character information or the user's characteristic information (for example, a fingerprint), and generate keyboard, mouse, joystick, optical, or trackball signal input associated with user settings and functional control.
- the input unit 502 may include a touch sensitive surface and other input devices.
- the display unit 503 is configured to display information input by the user, information provided for the user, or various graphical user interfaces of the terminal.
- the graphical user interface may consist of graphics, texts, icons, videos, or any combination thereof.
- the display unit 503 may include a display panel.
- the camera unit 504 can be configured to take static images, videos, and the like.
- the camera unit 504 can use a camera to take images.
- a photosensitive component circuit and a control component of the camera can process an image and convert the image processed to digital signals that can be recognized by the electronic device.
- the processor 505 is a control center of the mobile terminal.
- the processor 505 is coupled to various parts of the whole mobile terminal through various interfaces and lines, runs or executes application programs stored in the memory 501 and invokes data stored in the memory 501 to perform various functions of the mobile terminal and process data, thereby monitoring the mobile terminal as a whole.
- the mobile terminal may further include a power source 506 (e.g., a battery) that supplies power to various components.
- the power source 506 may be logically connected to the processor 505 via a power management system to realize management of charging, discharging, and power consumption through the power management system.
- the mobile terminal may further include a wireless fidelity (Wi-Fi) module, a Bluetooth module, and so on, and details are not repeated herein.
- the processor 505 of the mobile terminal can load, according to the following instructions, executable program codes corresponding to one or more application programs into the memory 501 .
- the processor 505 runs the application programs stored in the memory 501 to realize the following.
- Multiple pictures are obtained through photographing, where each of the pictures contains facial information.
- At least two pictures are selected from the multiple pictures, and according to the facial information of each of the at least two pictures, feature information of a preset facial part in each of the at least two pictures is obtained.
- At least one target picture is determined in response to the feature information of the preset facial part in one of the at least two pictures indicating that the preset facial part is in a preset feature state, where the one of the at least two pictures is determined as one of the at least one target picture.
- a multi-frame denoising processing is performed on the target pictures to obtain an output picture.
- the processor 505 configured to obtain, according to the facial information of each of the at least two pictures, the feature information of the preset facial part in each of the at least two pictures is configured to obtain, according to the facial information of each of the at least two pictures, feature information of an eye part in each of the at least two pictures.
- the processor 505 configured to determine the at least two pictures as target pictures based on that the feature information of the preset facial part in each of the at least two pictures indicates that the preset facial part in each of the at least two pictures is in the preset feature state is configured to determine the at least two pictures as the target pictures based on that the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in an eyes-open state.
- the processor 505 configured to determine the at least two pictures as the target pictures based on that the feature information of the preset facial part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state is configured to detect, in chronological order of taking the at least two pictures, whether the feature information of the eye part in each of the at least two pictures indicates that the eye part in each of the at least two pictures is in the eyes-open state, and determine one of the at least two pictures as a target picture based on that the feature information of the eye part in the one of the at least two pictures indicates that the eye part in the one of the at least two pictures is in the eyes-open state.
- the processor 505 is further configured to stop searching for a target picture based on that the number of the target pictures determined reaches a preset number.
- the processor 505 is further configured to obtain a light sensitivity value used for photographing, determine a target number corresponding to the light sensitivity value, and determine the target number as the preset number, where the target number is larger than or equal to two.
- the processor 505 is further configured to determine, based on that the at least one target picture is embodied as one target picture, the one target picture as the output picture.
- the processor 505 is further configured to set a correspondence table, where the correspondence table contains multiple light sensitivity value ranges and numbers each corresponding to one of the multiple light sensitivity value ranges, and where the larger the light sensitivity values in one light sensitivity value range, the larger the number corresponding to that light sensitivity value range.
- the processor 505 configured to determine the target number corresponding to the light sensitivity value is configured to query the correspondence table and determine a light sensitivity value range containing the light sensitivity value according to the correspondence table, and determine a number corresponding to the light sensitivity value range containing the light sensitivity value as the target number.
- each of the above implementations has its own emphasis; for details not described in one implementation, reference can be made to related descriptions of other implementations.
- the device for picture processing provided in the implementation of the present disclosure and the methods for picture processing described above have the same conception.
- the device for picture processing can run any of the methods for picture processing provided in the implementations, and for details reference can be made to the methods for picture processing provided in the implementations, and details are not repeated herein.
- the computer program may be stored in a computer-readable storage medium such as a memory and executed by at least one processor.
- the storage medium may be a disc or a CD, a read only memory (ROM), a random access memory (RAM), and so on.
- various functional modules can be integrated in a processing chip, or may separately and physically exist, or two or more modules may be integrated in one module.
- the modules integrated can be implemented in hardware form or in software functional module form.
- the modules integrated may be stored in a computer-readable storage medium if implemented in the software functional module form and sold or used as an independent product.
- the storage medium for example may be a ROM, a disc or CD, or the like.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710400970.4 | 2017-05-31 | ||
CN201710400970.4A CN107180417B (zh) | 2017-05-31 | 2017-05-31 | Photo processing method and apparatus, computer-readable storage medium, and electronic device |
PCT/CN2018/089115 WO2018219304A1 (fr) | 2017-05-31 | 2018-05-31 | Method and apparatus for picture processing, computer-readable storage medium, and electronic device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/089115 Continuation WO2018219304A1 (fr) | 2017-05-31 | 2018-05-31 | Procédé et appareil de traitement d'image, support de stockage lisible par ordinateur, et dispositif électronique |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200090310A1 US20200090310A1 (en) | 2020-03-19 |
US11295416B2 true US11295416B2 (en) | 2022-04-05 |
Family
ID=59835038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/693,961 Active 2039-02-25 US11295416B2 (en) | 2017-05-31 | 2019-11-25 | Method for picture processing, computer-readable storage medium, and electronic device |
Country Status (4)
Country | Link |
---|---|
US (1) | US11295416B2 (fr) |
EP (1) | EP3617990B1 (fr) |
CN (1) | CN107180417B (fr) |
WO (1) | WO2018219304A1 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107180417B (zh) | 2017-05-31 | 2020-04-10 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Photo processing method and apparatus, computer-readable storage medium, and electronic device |
CN108335278B (zh) * | 2018-03-18 | 2020-07-07 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and apparatus, storage medium, and electronic device |
CN113808066A (zh) * | 2020-05-29 | 2021-12-17 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image selection method and apparatus, storage medium, and electronic device |
Priority and related applications
- 2017-05-31: CN application CN201710400970.4A filed (granted as patent CN107180417B, active)
- 2018-05-31: EP application EP18810707.2A filed (granted as patent EP3617990B1, active)
- 2018-05-31: PCT application PCT/CN2018/089115 filed (published as WO2018219304A1, status unknown)
- 2019-11-25: US application US16/693,961 filed (granted as patent US11295416B2, active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090060383A1 (en) | 2007-08-27 | 2009-03-05 | Arcsoft, Inc. | Method of restoring closed-eye portrait photo |
US20090185760A1 (en) * | 2008-01-18 | 2009-07-23 | Sanyo Electric Co., Ltd. | Image Processing Device and Method, and Image Sensing Apparatus |
US8315474B2 (en) | 2008-01-18 | 2012-11-20 | Sanyo Electric Co., Ltd. | Image processing device and method, and image sensing apparatus |
US20150317512A1 (en) * | 2014-04-30 | 2015-11-05 | Adobe Systems Incorporated | Method and apparatus for mitigating face aging errors when performing facial recognition |
CN106127698A (zh) | 2016-06-15 | 2016-11-16 | OnePlus Technology (Shenzhen) Co., Ltd. | Image noise reduction processing method and device |
CN106250426A (zh) | 2016-07-25 | 2016-12-21 | Shenzhen Tinno Wireless Technology Co., Ltd. | Photo processing method and terminal |
CN107180417A (zh) | 2017-05-31 | 2017-09-19 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Photo processing method and apparatus, computer-readable storage medium, and electronic device |
Non-Patent Citations (7)
Title |
---|
EPO, Office Action for EP Application No. 18810707.2, dated Nov. 25, 2020. |
EPO, Office Action for EP Application No. 18810707.2, Feb. 27, 2020. |
IPIN, First Office Action for IN Application No. 201917048852, dated Apr. 5, 2021. |
SIPO, First Office Action for CN Application No. 201710400970.4, dated Apr. 10, 2019. |
SIPO, Second Office Action for CN Application No. 201710400970, dated Aug. 6, 2019. |
Wheeler et al., "Multi-Frame Image Restoration for Face Recognition," IEEE Workshop on Signal Processing Applications for Public Security and Forensics, Apr. 2007, 6 pages. |
WIPO, ISR for PCT/CN2018/089115, Aug. 8, 2018. |
Also Published As
Publication number | Publication date |
---|---|
US20200090310A1 (en) | 2020-03-19 |
EP3617990A1 (fr) | 2020-03-04 |
CN107180417A (zh) | 2017-09-19 |
CN107180417B (zh) | 2020-04-10 |
EP3617990B1 (fr) | 2022-05-04 |
EP3617990A4 (fr) | 2020-04-01 |
WO2018219304A1 (fr) | 2018-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: TAN, GUOHUI; REEL/FRAME: 051104/0422. Effective date: 20191008 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |