CN107633499B - Image processing method and related product


Info

Publication number
CN107633499B
Authority
CN
China
Prior art keywords
image
face
color component
centroid
gray
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710889988.5A
Other languages
Chinese (zh)
Other versions
CN107633499A (en)
Inventor
Zhou Haitao
Wang Jian
Guo Ziqing
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710889988.5A priority Critical patent/CN107633499B/en
Publication of CN107633499A publication Critical patent/CN107633499A/en
Application granted granted Critical
Publication of CN107633499B publication Critical patent/CN107633499B/en


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose an image processing method and a related product. The method comprises the following steps: acquiring a face image in a dark-vision environment; acquiring a color component image of a preset face template; and fusing the face image with the color component image to obtain a target face image. According to the embodiments of the invention, after the face image is obtained, the color component image is taken from the preset face template, and the template's color compensates for the color information missing from a face image collected under scotopic vision; a face image with rich color information is thus obtained and user experience is improved.

Description

Image processing method and related product
Technical Field
The invention relates to the technical field of mobile terminals, in particular to an image processing method and a related product.
Background
With the widespread use of mobile terminals (mobile phones, tablet computers, and the like), the applications they support and the functions they offer keep growing; mobile terminals are developing toward diversification and personalization and have become indispensable electronic products in users' lives.
At present, face recognition is increasingly favored by mobile terminal manufacturers: when recognition passes, the captured face can be displayed on the terminal's display screen. In a dark-vision environment, however, the camera collects little color information, so the resulting face image appears gray and the display effect is poor. How to improve the display effect of a face image in a dark-vision environment is therefore a problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a related product, which can improve the display effect of a face image in a dark vision environment.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including an Application Processor (AP), and a face recognition device connected to the AP, where,
the face recognition device is used for acquiring a face image in a dark vision environment;
the AP is used for acquiring a color component image of a preset face template; and carrying out image fusion on the face image and the color component image to obtain a target face image.
In a second aspect, an embodiment of the present invention provides an image processing method, which is applied to a mobile terminal including an application processor AP and a face recognition device connected to the AP, and the method includes:
the face recognition device acquires a face image in a dark vision environment;
the AP acquires a color component image of a preset face template; and carrying out image fusion on the face image and the color component image to obtain a target face image.
In a third aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a face image in a dark vision environment;
acquiring a color component image of a preset face template;
and carrying out image fusion on the face image and the color component image to obtain a target face image.
In a fourth aspect, an embodiment of the present invention provides an image processing apparatus, including:
the first acquisition unit is used for acquiring a face image in a dark vision environment;
the second acquisition unit is used for acquiring a color component image of a preset face template;
and the image fusion unit is used for carrying out image fusion on the face image and the color component image to obtain a target face image.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for some or all of the steps as described in the third aspect.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, the computer program causing a computer to execute some or all of the steps described in the third aspect of the embodiments of the present invention.
In a seventh aspect, embodiments of the present invention provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the third aspect of embodiments of the present invention. The computer program product may be a software installation package.
The embodiment of the invention has the following beneficial effects:
It can be seen that, with the image processing method and related product described in the embodiments of the present invention, a face image is acquired in a dark-vision environment, a color component image of a preset face template is acquired, and the face image and the color component image are fused into a target face image. After the face image is obtained, the color component image taken from the preset face template compensates, with the template's color, for the color information the face image lacks when collected under scotopic vision; a face image with rich color information is thus obtained and user experience is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic diagram of an architecture of an exemplary mobile terminal according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 1C is a schematic flow chart of an image processing method according to an embodiment of the present invention;
FIG. 1D is a diagram illustrating the effect of human face images disclosed in the embodiments of the present invention;
FIG. 2 is a flow chart of another image processing method disclosed in the embodiment of the invention;
fig. 3 is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4A is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 4B is a schematic structural diagram of an image fusion unit of the image processing apparatus depicted in FIG. 4A according to an embodiment of the present invention;
FIG. 4C is a schematic structural diagram of an image fusion module of the image fusion unit depicted in FIG. 4B according to an embodiment of the present invention;
FIG. 4D is a schematic diagram of another structure of an image processing apparatus according to an embodiment of the present invention;
FIG. 4E is a schematic diagram of another structure of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
To make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiments of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like, which have wireless communication functions. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
Embodiments of the present invention are described in detail below. In the example mobile terminal 1000 shown in fig. 1A, the face recognition device of the mobile terminal 1000 may be a camera module 21. The camera module may comprise two cameras, for example a visible-light camera paired with an infrared camera, or two visible-light cameras; or it may be a single camera, for example one visible-light camera or one infrared camera. The camera module 21 may be a front camera or a rear camera.
Referring to fig. 1B, fig. 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor AP110 and a face recognition device 120, the AP110 being connected to the face recognition device 120 through a bus 150.
The mobile terminal described based on fig. 1A-1B can be used to implement the following functions:
the face recognition device 120 is configured to obtain a face image in a dark vision environment;
the AP110 is used for acquiring a color component image of a preset face template; and carrying out image fusion on the face image and the color component image to obtain a target face image.
In one possible example, in the aspect of image fusion between the face image and the color component image, the AP110 is specifically configured to:
converting the face image into a gray image;
and carrying out image fusion on the gray level image and the color component image.
In one possible example, in the aspect of image fusion of the grayscale image and the color component image, the AP110 is specifically configured to:
determining a first centroid of the grayscale image and a second centroid of the color component image;
overlapping the gray-scale image and the color component image according to the first centroid and the second centroid so that the first centroid and the second centroid completely coincide, and resizing the gray-scale image to obtain a first image whose first vertical distance is equal to the second vertical distance of the color component image, where the first vertical distance is the length of the vertical line segment that crosses the face area and passes through the first centroid in the first image, and the second vertical distance is the length of the vertical line segment that crosses the face area and passes through the second centroid in the color component image;
synthesizing the first image with the color component image.
In one possible example, the AP110 is further configured to:
determining a face angle corresponding to the face image;
and selecting the preset face template corresponding to the face angle from a preset face template library, and executing the step of acquiring the color component image of the preset face template.
In one possible example, the AP110 is further specifically configured to:
and matching the face image with the preset face template, and executing the acquisition of the color component image of the preset face template when the face image is successfully matched with the preset face template.
The mobile terminal described with reference to fig. 1A-1B may be configured to perform an image processing method described as follows:
the face recognition device 120 obtains a face image in a dark vision environment;
the AP110 acquires a color component image of a preset face template; and carrying out image fusion on the face image and the color component image to obtain a target face image.
It can be seen that, with the image processing method described in the embodiment of the present invention, a face image is acquired in a dark-vision environment, a color component image of a preset face template is acquired, and the face image and the color component image are fused into a target face image. After the face image is obtained, the color component image can be taken from the preset face template, and the template's color compensates for the color information the face image lacks when collected under scotopic vision; a face image with rich color information is thus obtained and user experience is improved.
Referring to fig. 1C, fig. 1C is a flowchart of an image processing method according to an embodiment of the present invention, applied to the mobile terminal described in fig. 1A and fig. 1B. The image processing method described in this embodiment may include the following steps:
101. and acquiring a face image under a dark vision environment.
The face image may be obtained by focusing on the face, and it may be an image containing the face or a cropped image of the face region alone. The dark-vision environment mentioned above can be detected by an ambient light sensor.
Before the step 101, the following steps may be included:
a1, obtaining target environment parameters;
a2, determining target shooting parameters corresponding to the target environment parameters;
then, in step 101, the face image is obtained, which can be implemented as follows:
and shooting a human face according to the target shooting parameters to obtain the human face image.
The target environment parameter may be detected by an environment sensor. The environment sensor may be at least one of the following: a breath detection sensor, an ambient light sensor, an electromagnetic detection sensor, an ambient color temperature detection sensor, a positioning sensor, a temperature sensor, a humidity sensor, and the like. The environment parameter may be at least one of the following: breathing parameters, ambient brightness, ambient color temperature, ambient magnetic field interference factor, weather conditions, number of ambient light sources, geographic location, and the like. The breathing parameters may be at least one of: number of breaths, breathing rate, breathing sounds, breathing curve, and the like.
Further, the mobile terminal may store in advance a correspondence between shooting parameters and environment parameters, and determine from this correspondence the target shooting parameters for the target environment parameters. The shooting parameters may include, but are not limited to: focal length, exposure time, aperture size, photographing mode, sensitivity (ISO), white balance parameters, and the like. In this way, an image optimal for the environment can be obtained.
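As a minimal sketch of this lookup (not part of the patent text), the correspondence can be held as a table keyed by an environment-parameter bucket; the bucketing rule, the parameter names, and the concrete values below are all illustrative assumptions:

```python
# Sketch of a stored correspondence between environment parameters and
# shooting parameters. Buckets, names, and values are assumptions made
# up for illustration; a real terminal would calibrate these per device.
SHOOTING_PARAMS_BY_BRIGHTNESS = {
    "dark":   {"iso": 1600, "exposure_ms": 100, "aperture": 1.8},
    "indoor": {"iso": 400,  "exposure_ms": 33,  "aperture": 2.2},
    "bright": {"iso": 100,  "exposure_ms": 8,   "aperture": 2.8},
}

def target_shooting_params(ambient_lux: float) -> dict:
    """Map a measured environment parameter to target shooting parameters."""
    if ambient_lux < 10:
        bucket = "dark"
    elif ambient_lux < 500:
        bucket = "indoor"
    else:
        bucket = "bright"
    return SHOOTING_PARAMS_BY_BRIGHTNESS[bucket]
```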
Optionally, before performing step 101, the following steps may be further included:
and acquiring the current environment brightness, and when the current environment brightness is lower than a preset brightness threshold value, determining that the current environment is a dark vision environment.
The preset brightness threshold value can be set by the user or defaulted by the system. When the current ambient brightness is lower than the preset brightness threshold, the current environment can be considered as the dark vision environment.
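A sketch of this check, assuming the ambient light sensor is exposed as a callable returning lux and taking 10 lux as the preset threshold (both are assumptions, not values from the patent):

```python
PRESET_BRIGHTNESS_THRESHOLD = 10.0  # lux; user-set or system default (assumed)

def is_dark_vision_environment(read_ambient_lux) -> bool:
    """True when the current ambient brightness is below the preset threshold."""
    return read_ambient_lux() < PRESET_BRIGHTNESS_THRESHOLD
```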
102. And acquiring a color component image of a preset face template.
The preset face template may be stored in advance in a memory of the mobile terminal. The preset face template may be converted to another color space, for example the YUV or HSI color space, and the color component image then extracted.
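For illustration, a sketch of the extraction under the YUV route, using OpenCV (the library choice is an assumption; the patent does not name one):

```python
import cv2
import numpy as np

def color_component_image(template_bgr: np.ndarray) -> np.ndarray:
    """Convert a preset face template to YUV and keep the chrominance
    planes, which together form the color component image."""
    yuv = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2YUV)
    return yuv[:, :, 1:]  # U and V planes
```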
Optionally, between the step 101 and the step 102, the following steps may be further included:
b1, determining a face angle corresponding to the face image;
b2, selecting the preset face template corresponding to the face angle from a preset face template library, and executing the step of obtaining the color component image of the preset face template.
In the face recognition process, a face may present different face angles; a frontal face and a profile, for example, have different angles, so each face image may correspond to one face angle. A preset face template library may also be stored in the mobile terminal; it may include a plurality of preset face templates, each corresponding to one face angle. The preset face template corresponding to the face angle of the face image can then be selected from the library and its color component image obtained. In this way the color component image fuses better into the face image and compensates for its missing color information.
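A sketch of this selection, assuming the library is keyed by yaw angle in degrees and that a nearest-angle rule is acceptable (both assumptions for illustration; the patent does not specify either):

```python
# Hypothetical template library keyed by face yaw angle (degrees).
TEMPLATE_LIBRARY = {
    0: "front.png", 30: "left_30.png", -30: "right_30.png",
    60: "left_60.png", -60: "right_60.png",
}

def select_preset_template(face_yaw_deg: float) -> str:
    """Pick the stored template whose angle is nearest the detected one."""
    nearest = min(TEMPLATE_LIBRARY, key=lambda a: abs(a - face_yaw_deg))
    return TEMPLATE_LIBRARY[nearest]
```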
Further optionally, between the above steps B2 and 102, the following steps may also be included:
and matching the face image with the preset face template, and executing the acquisition of the color component image of the preset face template when the face image is successfully matched with the preset face template.
The face image may be matched with the preset face template, if the matching fails, step 102 may not be executed, and if the matching succeeds, step 102 may be executed.
103. And carrying out image fusion on the face image and the color component image to obtain a target face image.
The face image lacks color information, while the color component image contains abundant color information; fusing the two therefore yields a target face image that shows richer color. The target face image can be displayed on the mobile terminal's display screen as a color face image, improving user experience.
Optionally, in the step 103, the image fusion of the face image and the color component image may include the following steps:
31. converting the face image into a gray image;
32. and carrying out image fusion on the gray level image and the color component image.
Although the face image lacks color information, it still contains some. If the color component image were fused directly onto the face image, the face's color would become uneven, that is, the skin color would be distorted. The face image is therefore first converted into a gray-scale image, so that the fusion lays the template's color over a uniform luminance base.
Further optionally, the step 32 of image fusing the grayscale image and the color component image may include the following steps:
321. determining a first centroid of the grayscale image and a second centroid of the color component image;
322. overlapping the gray-scale image and the color component image according to the first centroid and the second centroid so that the first centroid and the second centroid completely coincide, and resizing the gray-scale image to obtain a first image whose first vertical distance is equal to the second vertical distance of the color component image, where the first vertical distance is the length of the vertical line segment that crosses the face area and passes through the first centroid in the first image, and the second vertical distance is the length of the vertical line segment that crosses the face area and passes through the second centroid in the color component image;
323. synthesizing the first image with the color component image.
The center of mass, called the centroid for short, is the imaginary point of a mass system at which the mass may be regarded as concentrated. An image likewise has a centroid, and only one. In the embodiment of the present invention, the first centroid of the gray-scale image and the second centroid of the color component image can be obtained geometrically. The gray-scale image and the color component image are then overlapped according to the first and second centroids so that the two coincide completely, and the gray-scale image is resized (enlarged or reduced) into a first image whose first vertical distance equals the second vertical distance of the color component image; the first vertical distance is the length of the vertical line segment crossing the face region through the first centroid in the first image, and the second vertical distance is the length of the vertical line segment crossing the face region through the second centroid in the color component image. Fig. 1D illustrates the centroid and the first vertical distance. The two components of a color image, a luminance component and a color component, are thus obtained: the first image supplies the luminance component and the color component image supplies the color component. The two may then be synthesized, for example by overlaying their pixels for display, or by converting the synthesized image to the RGB color space, to obtain the target face image, which can be displayed on the display screen of the mobile terminal.
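A sketch of steps 321 to 323 under some stated assumptions: the centroid is computed as an intensity centroid via image moments, the face region used for the vertical distance is obtained by Otsu thresholding, and synthesis recombines the aligned gray image (as luminance) with the template's U/V planes before converting to RGB. None of these concrete choices is fixed by the patent:

```python
import cv2
import numpy as np

def intensity_centroid(gray: np.ndarray) -> tuple:
    """Centroid (x, y) of a single-channel image via its first moments."""
    m = cv2.moments(gray)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def vertical_extent(gray: np.ndarray, cx: float) -> int:
    """Length of the vertical segment through column cx that crosses the
    (Otsu-thresholded) face region."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    rows = np.flatnonzero(mask[:, int(round(cx))])
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def fuse(gray: np.ndarray, template_yuv: np.ndarray) -> np.ndarray:
    h, w = template_yuv.shape[:2]
    cx_g, cy_g = intensity_centroid(gray)
    cx_t, cy_t = intensity_centroid(template_yuv[:, :, 0])
    d1 = vertical_extent(gray, cx_g)
    d2 = vertical_extent(template_yuv[:, :, 0], cx_t)
    s = d2 / max(d1, 1)  # resize factor so the two vertical distances match
    # Affine map: scale by s, then translate so the scaled first centroid
    # lands exactly on the second centroid (the "complete overlap").
    M = np.float32([[s, 0, cx_t - s * cx_g],
                    [0, s, cy_t - s * cy_g]])
    first = cv2.warpAffine(gray, M, (w, h))
    out = template_yuv.copy()
    out[:, :, 0] = first  # the first image supplies the luminance component
    return cv2.cvtColor(out, cv2.COLOR_YUV2BGR)  # target face image
```

A single affine warp handles both the resizing and the centroid overlap here; that is a design choice of the sketch, since the patent only requires that the centroids coincide and the vertical distances match.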
Optionally, between the step 322 and the step 323, the following steps may be further included:
performing interpolation processing on the first image;
then, in the above step 323, the first image and the color component image are combined, which may be implemented as follows:
and synthesizing the first image subjected to interpolation processing and the color component image.
Since the first image has been adjusted to some degree, it can be interpolated so that transitions between its pixels look natural. The interpolation may be at least one of the following: linear interpolation, quadratic interpolation, bilinear interpolation, nonlinear interpolation, and the like.
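Illustratively, and only as one assumed realization of this step, the interpolation can be folded into an explicit re-sampling call:

```python
import cv2
import numpy as np

def interpolate_after_resize(first_image: np.ndarray,
                             size_wh: tuple) -> np.ndarray:
    """Re-sample the adjusted first image with bicubic interpolation so
    transitions between pixels look natural; bicubic is one choice among
    the interpolation families listed above."""
    return cv2.resize(first_image, size_wh, interpolation=cv2.INTER_CUBIC)
```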
Optionally, between the above steps 31 and 32, the following steps may be further included:
performing image enhancement processing on the gray level image;
then, in the above step 32, the grayscale image and the color component image are image-fused, which may be implemented as follows:
and carrying out image fusion on the gray level image subjected to image enhancement processing and the color component image.
The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, gray-scale stretching). After the enhancement processing, the quality of the gray-scale image is improved to some extent.
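A sketch combining two of the named dark-vision enhancement algorithms, histogram equalization and gray-scale stretching; the percentile bounds are illustrative assumptions:

```python
import cv2
import numpy as np

def enhance_gray(gray: np.ndarray) -> np.ndarray:
    """Histogram-equalize, then stretch the central gray range."""
    equalized = cv2.equalizeHist(gray)
    lo, hi = np.percentile(equalized, (2, 98))  # assumed stretch bounds
    stretched = (equalized.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```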
Optionally, before the step of performing image enhancement processing on the grayscale image, the method may further include the following step:
and performing image quality evaluation on the gray level image to obtain an image quality evaluation value, and performing image enhancement processing on the gray level image when the image quality evaluation value is lower than a preset quality threshold.
The preset quality threshold may be set by the user or defaulted by the system. The image quality of the gray-scale image is evaluated first to obtain an image quality evaluation value, and this value indicates whether the gray-scale image is of good quality: when it is greater than or equal to the preset quality threshold, the quality may be considered good; when it is below the threshold, the quality may be considered poor, and image enhancement processing is then applied to the gray-scale image.
The image quality evaluation of the gray-scale image mentioned above can be implemented as follows:
and performing image quality evaluation on the gray level image by using at least one image quality evaluation index, thereby obtaining an image quality evaluation value.
In a specific implementation, the gray-scale image may be evaluated with a plurality of image quality evaluation indexes, each index corresponding to a weight. Each index yields one evaluation result for the image, and a weighted sum of these results gives the final image quality evaluation value. The image quality evaluation index may include, but is not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and the like.
It should be noted that a single evaluation index has certain limitations, so image quality may be evaluated with several indexes together. More indexes are not always better, however: each additional index raises the computational complexity of the evaluation without necessarily improving its result. When higher evaluation accuracy is required, 2 to 10 image quality evaluation indexes may therefore be used. How many indexes, and which ones, are selected depends on the specific implementation; the indexes chosen for a dark environment may also differ from those chosen for a bright one, according to the specific scene.
Alternatively, when high accuracy of image quality evaluation is not required, a single index may be used, for example entropy: the larger the entropy, the better the image quality may be considered, and conversely, the smaller the entropy, the worse the image quality.
Alternatively, when high accuracy of image quality evaluation is required, the image may be evaluated with a plurality of indexes, each assigned a weight; each index then yields one evaluation value, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, with three image quality evaluation indexes A, B and C having weights a1, a2 and a3 respectively, if evaluating an image with A, B and C gives evaluation values b1, b2 and b3, the final image quality evaluation value is a1·b1 + a2·b2 + a3·b3. In general, a larger image quality evaluation value indicates better image quality.
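A sketch of such a weighted evaluation using three of the indexes named above (mean, standard deviation, entropy); the normalizations and the weights are illustrative assumptions only:

```python
import numpy as np

def quality_score(gray: np.ndarray,
                  weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted sum a1*b1 + a2*b2 + a3*b3 over three normalized indexes."""
    mean = gray.mean() / 255.0                        # b1: brightness
    std = min(gray.std() / 128.0, 1.0)                # b2: contrast
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    nz = hist[hist > 0]
    entropy = float(-(nz * np.log2(nz)).sum()) / 8.0  # b3: information
    return float(sum(a * b for a, b in zip(weights, (mean, std, entropy))))
```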
Optionally, after the step 103, the following steps may be further included:
and matching the target face image with the preset face template, and performing unlocking operation when the target face image is successfully matched with the preset face template.
When the target face image matches the preset face template successfully, the unlocking operation can be performed; when matching fails, the user may be prompted to repeat face recognition. The unlocking operation may be at least one of the following: when the mobile terminal is in a screen-off state, lighting the screen and entering the terminal's home page or a designated page; when the screen is already lit, entering the home page or a designated page; when the terminal is on a payment page, performing the payment. The designated page may be a page of an application or a page specified by the user.
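Purely as an illustrative dispatch (the state names and returned actions are assumptions mirroring the examples above, not a platform API):

```python
def unlock_actions(state: str) -> list:
    """Actions for the unlocking operation, chosen by terminal state."""
    if state == "screen_off":
        return ["light_screen", "open_home_or_designated_page"]
    if state == "payment_page":
        return ["perform_payment"]
    return ["open_home_or_designated_page"]  # screen already lit
```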
It can be seen that, with the image processing method described in the embodiment of the present invention, a face image is acquired in a dark-vision environment, a color component image of a preset face template is acquired, and the face image and the color component image are fused into a target face image. After the face image is obtained, the color component image can be taken from the preset face template, and the template's color compensates for the color information the face image lacks when collected under scotopic vision; a face image with rich color information is thus obtained and user experience is improved.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of an image processing method according to an embodiment of the present invention. The image processing method described in the present embodiment may include the steps of:
201. and acquiring a face image under a dark vision environment.
202. And matching the face image with a preset face template.
When the face image matches the preset face template successfully, the face image can be considered to come from the owner, the subsequent steps can be executed, and the target face image is then displayed on the display screen of the mobile terminal.
Optionally, in the step 202, matching the face image with a preset face template may include the following steps:
21. selecting a target area of the face image whose sharpness meets a preset requirement, and extracting feature points of the target area to obtain a first feature point set;
22. extracting the peripheral outline of the face image to obtain a first outline;
23. matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
24. confirming that the matching succeeds when the first contour successfully matches the second contour of the preset face template and the first feature point set successfully matches the preset face template; and confirming that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
In the embodiment of the present invention, a target area can be selected from the face image; if the target area is complete, the collected features help improve face recognition efficiency. On the other hand, because the target area is only a partial region, an accidental match is possible and the recognized area is small, so contour extraction is also performed on the face image to obtain a first contour. In the matching process, the feature points of the target area are matched against the preset face template and, at the same time, the first contour is matched against it; the match succeeds only when both succeed, and fails if either of the two fails. This guarantees matching speed and security while preserving the success rate.
Optionally, sharpness may also be defined through the number of feature points (after all, the clearer an image, the more feature points it contains). The preset requirement then is: the number of feature points is greater than a preset number threshold, which may be set by the user or defaulted by the system. Step 21 can then be implemented as follows: determine as the target area the region of the face image in which the number of feature points is greater than the preset number threshold.
Optionally, sharpness may be calculated by a dedicated formula, which is introduced in the related art and not described here. The preset requirement then is: the sharpness value is greater than a preset sharpness threshold, which may be set by the user or defaulted by the system. Step 21 can then be implemented as follows: determine as the target area the region of the face image whose sharpness value is greater than the preset sharpness threshold.
In addition, the feature extraction above can be implemented by algorithms such as the Harris corner detection algorithm, scale-invariant feature transform (SIFT), or the SUSAN corner detection algorithm, which are not detailed here. The contour extraction in step 22 may use algorithms such as the Hough transform, Haar features, or the Canny edge detector.
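A sketch of the dual check in steps 21 to 24, with Harris corners standing in for the feature-point set and a Canny edge map for the peripheral contour; both images are assumed pre-aligned to the same size, and the thresholds are invented for illustration since the patent leaves the matching metric open:

```python
import cv2
import numpy as np

def match_face(gray: np.ndarray, template: np.ndarray,
               contour_overlap_thresh: float = 0.5) -> bool:
    """Succeed only if both the feature-point match and the contour
    match succeed; fail if either fails (step 24)."""
    def corner_mask(img):
        r = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)
        return r > 0.01 * r.max()
    def edge_mask(img):
        return cv2.Canny(img, 50, 150) > 0
    features_ok = (corner_mask(gray) & corner_mask(template)).any()
    e1, e2 = edge_mask(gray), edge_mask(template)
    contour_ok = (e1 & e2).sum() / max(e1.sum(), 1) > contour_overlap_thresh
    return bool(features_ok and contour_ok)
```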
203. And when the face image is successfully matched with the preset face template, acquiring a color component image of the preset face template.
204. And carrying out image fusion on the face image and the color component image to obtain a target face image.
The detailed description of the steps 201 to 204 may refer to the corresponding steps of the image processing method described in fig. 1C, and will not be repeated herein.
It can be seen that, with the image processing method described in the embodiment of the present invention, a face image is acquired in a dark-vision environment and matched with a preset face template; if the match succeeds, a color component image of the preset face template is acquired and fused with the face image into a target face image. After the face image is obtained, the color component image taken from the preset face template compensates, with the template's color, for the color information the face image lacks when collected under scotopic vision; a face image with rich color information is thus obtained and user experience is improved.
Referring to fig. 3, fig. 3 is a mobile terminal according to an embodiment of the present invention, including: an application processor AP and a memory; and one or more programs stored in the memory and configured for execution by the AP, the programs including instructions for performing the steps of:
acquiring a face image in a dark vision environment;
acquiring a color component image of a preset face template;
and carrying out image fusion on the face image and the color component image to obtain a target face image.
In one possible example, in terms of fusing the face image with the color component image, the program includes instructions for performing the following steps:
converting the face image into a gray image;
and carrying out image fusion on the gray level image and the color component image.
In one possible example, in terms of fusing the gray-scale image with the color component image, the program includes instructions for performing the following steps:
determining a first centroid of the grayscale image and a second centroid of the color component image;
overlapping the gray-scale image and the color component image according to the first centroid and the second centroid so that the first centroid and the second centroid completely coincide, and resizing the gray-scale image to obtain a first image whose first vertical distance is equal to the second vertical distance of the color component image, where the first vertical distance is the length of the vertical line segment that crosses the face area and passes through the first centroid in the first image, and the second vertical distance is the length of the vertical line segment that crosses the face area and passes through the second centroid in the color component image;
synthesizing the first image with the color component image.
In one possible example, the program further comprises instructions for performing the steps of:
determining a face angle corresponding to the face image;
and selecting the preset face template corresponding to the face angle from a preset face template library, and executing the step of acquiring the color component image of the preset face template.
In one possible example, the program further comprises instructions for performing the steps of:
and matching the face image with the preset face template, and executing the acquisition of the color component image of the preset face template when the face image is successfully matched with the preset face template.
The following describes an apparatus for implementing the above image processing method:
referring to fig. 4A, fig. 4A is a schematic structural diagram of an image processing apparatus according to the present embodiment. The image processing apparatus includes a first acquisition unit 401, a second acquisition unit 402, and an image fusion unit 403, wherein,
a first obtaining unit 401, configured to obtain a face image in a dark visual environment;
a second obtaining unit 402, configured to obtain a color component image of a preset face template;
an image fusion unit 403, configured to perform image fusion on the face image and the color component image to obtain a target face image.
Optionally, as shown in fig. 4B, which details the structure of the image fusion unit 403 of the image processing apparatus depicted in fig. 4A, the image fusion unit 403 may include a conversion module 4031 and an image fusion module 4032, as follows:
a conversion module 4031, configured to convert the face image into a grayscale image;
an image fusion module 4032, configured to perform image fusion on the grayscale image and the color component image.
Optionally, as shown in fig. 4C, which details the structure of the image fusion module 4032 of the image fusion unit 403 depicted in fig. 4B, the image fusion module 4032 may include a determining module 501, an adjusting module 502, and a synthesizing module 503, as follows:
a determining module 501, configured to determine a first centroid of the grayscale image and a second centroid of the color component image;
an adjusting module 502, configured to overlap the gray-scale image and the color component image according to the first centroid and the second centroid so that the two centroids completely coincide, and to resize the gray-scale image into a first image whose first vertical distance is equal to the second vertical distance of the color component image, where the first vertical distance is the length of the vertical line segment that crosses the face region and passes through the first centroid in the first image, and the second vertical distance is the length of the vertical line segment that crosses the face region and passes through the second centroid in the color component image;
a synthesizing module 503, configured to synthesize the first image and the color component image.
Optionally, as shown in fig. 4D, a modified structure of the image processing apparatus depicted in fig. 4A may further include a determining unit 404 and a selecting unit 405, as follows:
a determining unit 404, configured to determine a face angle corresponding to the face image;
a selecting unit 405, configured to select the preset face template corresponding to the face angle from a preset face template library, after which the second obtaining unit 402 executes the step of obtaining the color component image of the preset face template.
Optionally, as shown in fig. 4E, a modified structure of the image processing apparatus depicted in fig. 4A may further include a matching unit 406, as follows:
a matching unit 406, configured to match the face image with the preset face template, and execute the acquiring of the color component image of the preset face template when the face image is successfully matched with the preset face template.
It can be seen that the image processing apparatus described in the embodiment of the present invention acquires a face image in a dark-vision environment, acquires a color component image of a preset face template, and fuses the face image with the color component image into a target face image. The color component image taken from the preset face template compensates, with the template's color, for the color information the face image lacks under scotopic vision, so a face image with rich color information is obtained and user experience is improved.
It is to be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for undisclosed technical details, refer to the method part of the embodiment of the present invention. The mobile terminal may be any terminal device, including a mobile phone, tablet computer, PDA (Personal Digital Assistant), POS (Point of Sales) terminal, vehicle-mounted computer, and the like; the mobile phone is taken as the example:
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention. Referring to fig. 5, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, sensor 950, audio circuit 960, Wireless Fidelity (WiFi) module 970, application processor AP980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch display 933, a face recognition device 931, and other input devices 932. The face recognition device 931 may refer to the above structure, and the specific structure may refer to the above description, which is not described herein in detail. The input unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Wherein, the AP980 is configured to perform the following steps:
acquiring a face image in a dark vision environment;
acquiring a color component image of a preset face template;
and carrying out image fusion on the face image and the color component image to obtain a target face image.
The AP980 is the control center of the mobile phone: it connects the parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes its data by running or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP980 may include one or more processing units, which may be artificial intelligence chips or quantum chips. Preferably, the AP980 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the AP980.
Further, the memory 920 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The RF circuit 910 may be used for receiving and transmitting information. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an environment sensor and a proximity sensor, wherein the environment sensor may adjust brightness of the touch display screen according to brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 960, speaker 961, and microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playback; conversely, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. The audio data is processed by the AP980 and then either sent to another mobile phone via the RF circuit 910 or output to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 5 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the AP980 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the foregoing embodiment shown in fig. 1C or fig. 2, the method flow of each step may be implemented based on the structure of the mobile phone.
In the embodiments shown in fig. 3 and fig. 4A to fig. 4E, the functions of the units may be implemented based on the structure of the mobile phone.
Embodiments of the present invention also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute a part or all of the steps of any one of the image processing methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the image processing methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division of logical functions, and other divisions are possible in actual implementation. For example, several units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The embodiments of the present invention have been described in detail above, and specific examples have been used herein to explain the principle and implementation of the present invention; the above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A mobile terminal comprising an application processor (AP) and a face recognition device connected to the AP, wherein,
the face recognition device is used for acquiring target environment parameters, determining target shooting parameters corresponding to the target environment parameters, and shooting a face according to the target shooting parameters in a dark vision environment to acquire a face image;
the AP is used for acquiring a color component image of a preset face template, and carrying out image fusion on the face image and the color component image to obtain a target face image; the image fusion of the face image and the color component image comprises: converting the face image into a grayscale image, performing image quality evaluation on the grayscale image to obtain an image quality evaluation value, performing image enhancement processing on the grayscale image when the image quality evaluation value is lower than a preset quality threshold, and performing image fusion on the enhanced grayscale image and the color component image;
in performing the image fusion of the grayscale image and the color component image, the AP is specifically configured to: determine a first centroid of the grayscale image and a second centroid of the color component image; overlap the grayscale image and the color component image according to the first centroid and the second centroid such that the two centroids coincide exactly, and adjust the size of the grayscale image to obtain a first image whose first vertical distance is equal to a second vertical distance of the color component image, wherein the first vertical distance is the length of the vertical line segment in the first image that crosses the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component image that crosses the face region and passes through the second centroid; and synthesize the first image with the color component image.
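A minimal Python/OpenCV sketch of the centroid-and-rescale fusion recited above, under stated assumptions: binary face masks for both images come from a face detector, the preset template's color component image is supplied as Cr and Cb planes, and every name (fuse_gray_with_color, gray_mask, color_mask, and so on) is illustrative rather than the claimed implementation.

import cv2
import numpy as np

def centroid(img):
    # Intensity-weighted centroid (cx, cy) from image moments.
    m = cv2.moments(img)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def vertical_face_span(face_mask, cx):
    # Length of the vertical segment through x = cx that crosses the face region.
    ys = np.flatnonzero(face_mask[:, int(round(cx))])
    return int(ys[-1] - ys[0] + 1) if ys.size else 1  # guard against an empty column

def fuse_gray_with_color(gray, gray_mask, cr, cb, color_mask):
    # First centroid (grayscale image) and second centroid (color component
    # image, here approximated by its face mask).
    g_cx, g_cy = centroid(gray)
    c_cx, c_cy = centroid(color_mask.astype(np.uint8))

    # Resize the grayscale image so both vertical face spans are equal,
    # yielding the "first image" of the claim.
    scale = vertical_face_span(color_mask, c_cx) / vertical_face_span(gray_mask, g_cx)
    first = cv2.resize(gray, None, fx=scale, fy=scale)
    g_cx, g_cy = g_cx * scale, g_cy * scale

    # Overlap: shift the resized image so the two centroids coincide exactly.
    h, w = cr.shape
    shift = np.float32([[1, 0, c_cx - g_cx], [0, 1, c_cy - g_cy]])
    first = cv2.warpAffine(first, shift, (w, h))

    # Synthesize: aligned gray as luminance, template Cr/Cb as chrominance.
    return cv2.cvtColor(cv2.merge([first, cr, cb]), cv2.COLOR_YCrCb2BGR)

Synthesizing in YCrCb space is one natural reading of the final step: the scotopic capture keeps the luminance detail while the template supplies the chrominance that the dark-environment sensor could not record.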
2. The mobile terminal of claim 1, wherein the AP is further configured to:
determine a face angle corresponding to the face image;
and select the preset face template corresponding to the face angle from a preset face template library, and then perform the step of acquiring the color component image of the preset face template.
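A tiny sketch of the angle-based template selection in claim 2, assuming a library keyed by pose angle in degrees; the angles, file names, and nearest-angle rule are placeholders, not specified by the patent.

# Hypothetical preset face template library, keyed by pose angle in degrees.
TEMPLATE_LIBRARY = {
    -30: "template_left_30.png",
    0: "template_frontal.png",
    30: "template_right_30.png",
}

def select_template(face_angle: float) -> str:
    # Pick the preset template whose pose angle is nearest the detected angle.
    nearest = min(TEMPLATE_LIBRARY, key=lambda a: abs(a - face_angle))
    return TEMPLATE_LIBRARY[nearest]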
3. The mobile terminal according to any of claims 1 to 2, wherein the AP is further configured to:
match the face image with the preset face template, and acquire the color component image of the preset face template when the face image is successfully matched with the preset face template.
4. An image processing method applied to a mobile terminal comprising an Application Processor (AP) and a face recognition device connected with the AP, the method comprising:
the face recognition device acquires target environment parameters, determines target shooting parameters corresponding to the target environment parameters, and shoots a face according to the target shooting parameters in a dark vision environment to acquire a face image;
the AP acquires a color component image of a preset face template, and carries out image fusion on the face image and the color component image to obtain a target face image; the image fusion of the face image and the color component image comprises: converting the face image into a grayscale image, performing image quality evaluation on the grayscale image to obtain an image quality evaluation value, performing image enhancement processing on the grayscale image when the image quality evaluation value is lower than a preset quality threshold, and performing image fusion on the enhanced grayscale image and the color component image;
wherein the image fusion of the grayscale image and the color component image comprises: determining, by the AP, a first centroid of the grayscale image and a second centroid of the color component image; overlapping the grayscale image and the color component image according to the first centroid and the second centroid such that the two centroids coincide exactly, and adjusting the size of the grayscale image to obtain a first image whose first vertical distance is equal to a second vertical distance of the color component image, wherein the first vertical distance is the length of the vertical line segment in the first image that crosses the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component image that crosses the face region and passes through the second centroid; and synthesizing the first image with the color component image.
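Neither claim fixes how the target environment parameters map to the target shooting parameters; a plausible sketch keyed on ambient brightness, with entirely made-up thresholds and field names:

def target_shooting_params(ambient_lux: float) -> dict:
    # Made-up breakpoints: darker scenes get higher sensitivity and longer exposure.
    if ambient_lux < 10:  # scotopic environment
        return {"iso": 1600, "exposure_ms": 100, "fill_light": True}
    if ambient_lux < 100:  # dim indoor light
        return {"iso": 800, "exposure_ms": 50, "fill_light": False}
    return {"iso": 200, "exposure_ms": 10, "fill_light": False}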
5. An image processing method, comprising:
acquiring target environment parameters;
determining target shooting parameters corresponding to the target environment parameters;
shooting a human face according to the target shooting parameters in a dark vision environment to obtain a human face image;
acquiring a color component image of a preset face template;
carrying out image fusion on the face image and the color component image to obtain a target face image, wherein the image fusion comprises: converting the face image into a grayscale image, performing image quality evaluation on the grayscale image to obtain an image quality evaluation value, performing image enhancement processing on the grayscale image when the image quality evaluation value is lower than a preset quality threshold, and performing image fusion on the enhanced grayscale image and the color component image;
wherein fusing the grayscale image with the color component image comprises: determining a first centroid of the grayscale image and a second centroid of the color component image; overlapping the grayscale image and the color component image according to the first centroid and the second centroid such that the two centroids coincide exactly, and adjusting the size of the grayscale image to obtain a first image whose first vertical distance is equal to a second vertical distance of the color component image, wherein the first vertical distance is the length of the vertical line segment in the first image that crosses the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component image that crosses the face region and passes through the second centroid; and synthesizing the first image with the color component image.
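The claims fix neither the image quality metric nor the enhancement operator. One possible reading of the quality-evaluation-then-enhancement step in claim 5, sketched in Python/OpenCV with a placeholder threshold; the scoring formula below is an assumption, not the patented metric:

import cv2
import numpy as np

QUALITY_THRESHOLD = 0.5  # stand-in for the "preset quality threshold"

def quality_score(gray: np.ndarray) -> float:
    # Toy evaluation: Laplacian-variance sharpness, scaled by how close the
    # mean brightness sits to mid-gray; entropy-based scores would also fit.
    sharpness = min(cv2.Laplacian(gray, cv2.CV_64F).var() / 1000.0, 1.0)
    exposure = 1.0 - abs(float(gray.mean()) - 128.0) / 128.0
    return sharpness * exposure

def to_enhanced_gray(face_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    if quality_score(gray) < QUALITY_THRESHOLD:
        # Contrast-limited adaptive histogram equalization as the enhancement step.
        gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    return gray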
6. The method of claim 5, further comprising:
determining a face angle corresponding to the face image;
and selecting the preset face template corresponding to the face angle from a preset face template library, and executing the step of acquiring the color component image of the preset face template.
7. The method according to any one of claims 5 to 6, further comprising:
matching the face image with the preset face template, and performing the step of acquiring the color component image of the preset face template when the face image is successfully matched with the preset face template.
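The matching test of claim 7 is likewise left open; a crude stand-in based on histogram correlation, with a placeholder acceptance threshold (any face-matching score, such as a feature-embedding distance, could replace it):

import cv2
import numpy as np

MATCH_THRESHOLD = 0.8  # placeholder acceptance threshold

def matches_template(face_gray: np.ndarray, template_gray: np.ndarray) -> bool:
    # Compare 64-bin intensity histograms of the two images by correlation.
    h1 = cv2.calcHist([face_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([template_gray], [0], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) >= MATCH_THRESHOLD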
8. An image processing apparatus, comprising:
the first acquisition unit is used for acquiring target environment parameters, determining target shooting parameters corresponding to the target environment parameters, and shooting a human face according to the target shooting parameters in a dark vision environment to acquire a human face image;
the second acquisition unit is used for acquiring a color component image of a preset face template;
the image fusion unit is used for carrying out image fusion on the face image and the color component image to obtain a target face image; the method comprises the following steps: converting the face image into a gray image, performing image quality evaluation on the gray image to obtain an image quality evaluation value, performing image enhancement processing on the gray image when the image quality evaluation value is lower than a preset quality threshold, and performing image fusion on the gray image subjected to the image enhancement processing and the color component image;
wherein fusing the grayscale image with the color component image comprises: determining a first centroid of the grayscale image and a second centroid of the color component image; overlapping the grayscale image and the color component image according to the first centroid and the second centroid such that the two centroids coincide exactly, and adjusting the size of the grayscale image to obtain a first image whose first vertical distance is equal to a second vertical distance of the color component image, wherein the first vertical distance is the length of the vertical line segment in the first image that crosses the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component image that crosses the face region and passes through the second centroid; and synthesizing the first image with the color component image.
9. A mobile terminal, comprising: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the one or more programs comprising instructions for performing the method of any one of claims 5 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 5-7.
CN201710889988.5A 2017-09-27 2017-09-27 Image processing method and related product Expired - Fee Related CN107633499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710889988.5A CN107633499B (en) 2017-09-27 2017-09-27 Image processing method and related product

Publications (2)

Publication Number Publication Date
CN107633499A CN107633499A (en) 2018-01-26
CN107633499B true CN107633499B (en) 2020-09-01

Family

ID=61102727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710889988.5A Expired - Fee Related CN107633499B (en) 2017-09-27 2017-09-27 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN107633499B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110236509A (en) * 2018-03-07 2019-09-17 台北科技大学 Analyze the method for physiological characteristic in real time in video
CN109345470B (en) * 2018-09-07 2021-11-23 华南理工大学 Face image fusion method and system
CN110969046B (en) * 2018-09-28 2023-04-07 深圳云天励飞技术有限公司 Face recognition method, face recognition device and computer-readable storage medium
CN109816628B (en) * 2018-12-20 2021-09-14 深圳云天励飞技术有限公司 Face evaluation method and related product
CN110162953A (en) * 2019-05-31 2019-08-23 Oppo(重庆)智能科技有限公司 Biometric discrimination method and Related product
CN112102623A (en) * 2020-08-24 2020-12-18 深圳云天励飞技术股份有限公司 Traffic violation identification method and device and intelligent wearable device
CN112178427A (en) * 2020-09-25 2021-01-05 广西中科云创智能科技有限公司 Anti-damage structure of face recognition snapshot camera
CN113556465A (en) * 2021-06-10 2021-10-26 深圳胜力新科技有限公司 AI-based video linkage perception monitoring system


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065127A (en) * 2012-12-30 2013-04-24 信帧电子技术(北京)有限公司 Method and device for recognizing human face in fog day image
CN103914820A (en) * 2014-03-31 2014-07-09 华中科技大学 Image haze removal method and system based on image layer enhancement
CN106156730A (en) * 2016-06-30 2016-11-23 腾讯科技(深圳)有限公司 The synthetic method of a kind of facial image and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Fusion Technology and Algorithms Based on Multi-Scale Transform; Lian Weilong; China Master's Theses Full-Text Database, Information Science and Technology; 2013-01-15; p. 39 para. 1 to p. 47 para. 1, Figs. 5.1-5.22 *


Similar Documents

Publication Publication Date Title
CN107679482B (en) Unlocking control method and related product
CN107633499B (en) Image processing method and related product
CN107609514B (en) Face recognition method and related product
CN107862265B (en) Image processing method and related product
CN107590461B (en) Face recognition method and related product
CN107480496B (en) Unlocking control method and related product
CN107292285B (en) Iris living body detection method and related product
CN107679481B (en) Unlocking control method and related product
CN107506687B (en) Living body detection method and related product
CN107197146B (en) Image processing method and device, mobile terminal and computer readable storage medium
CN107657218B (en) Face recognition method and related product
CN107403147B (en) Iris living body detection method and related product
CN108093134B (en) Anti-interference method of electronic equipment and related product
CN107463818B (en) Unlocking control method and related product
CN107451446B (en) Unlocking control method and related product
CN107423699B (en) Biopsy method and Related product
CN106558025B (en) Picture processing method and device
US20200167581A1 (en) Anti-counterfeiting processing method and related products
CN107451454B (en) Unlocking control method and related product
CN107644219B (en) Face registration method and related product
CN107633235B (en) Unlocking control method and related product
CN107613550B (en) Unlocking control method and related product
CN107480488B (en) Unlocking control method and related product
CN107205116B (en) Image selection method, mobile terminal, image selection device, and computer-readable storage medium
CN108345848A (en) The recognition methods of user's direction of gaze and Related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200901