WO2020103732A1 - Wrinkle detection method and terminal device - Google Patents

Wrinkle detection method and terminal device (一种皱纹检测方法和终端设备)

Info

Publication number
WO2020103732A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal device
image
white
roi
area
Prior art date
Application number
PCT/CN2019/117904
Other languages
English (en)
French (fr)
Inventor
胡宏伟
董辰
丁欣
郜文美
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP19888055.1A priority Critical patent/EP3872753B1/en
Priority to US17/295,230 priority patent/US11978231B2/en
Publication of WO2020103732A1 publication Critical patent/WO2020103732A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present application relates to the technical field of terminals, in particular to a wrinkle detection method and terminal equipment.
  • As people's quality of life improves, more and more people, especially women, pay attention to their skin condition. In particular, women pay attention to the skin condition of the face, such as whether there are crow's feet at the corners of the eyes or nasolabial folds on the face, and choose different skincare products according to these skin conditions.
  • Embodiments of the present application provide a wrinkle detection method and a terminal device, which are used to automatically detect wrinkles on a user's skin, facilitate user operations, and improve user experience.
  • an embodiment of the present application provides a wrinkle detection method, which can be executed by a terminal device.
  • the method includes: the terminal device obtains an original image including a human face; the terminal device adjusts the size of an ROI area on the original image to obtain at least two ROI images of different sizes, wherein the ROI area is the area where the wrinkles on the human face are located; the terminal device processes each of the at least two ROI images of different sizes to obtain at least two black-and-white images, wherein the white area in each black-and-white image is an area where wrinkles may appear; the terminal device fuses the at least two black-and-white images to obtain a final image; and the white area on the final image is recognized as a wrinkle.
  • terminal devices such as mobile phones and iPads
  • the terminal device only needs to collect an image including a human face and can then use the above wrinkle detection method to detect wrinkles, which simplifies operation and improves the user experience; performing at least two different processings on the ROI area of one image (including the human face) improves the accuracy of wrinkle detection.
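As a rough illustration of the claimed flow (resize the ROI to several sizes, binarize each copy, then fuse the black-and-white maps), the sketch below uses plain Python lists as grayscale images; the nearest-neighbour resize, the fixed threshold, and the OR-style fusion are illustrative assumptions rather than the patent's exact operations.

```python
def resize_nearest(img, h, w):
    """Nearest-neighbour resize of a 2D grayscale image (list of lists)."""
    H, W = len(img), len(img[0])
    return [[img[r * H // h][c * W // w] for c in range(w)] for r in range(h)]

def binarize(img, thresh=128):
    """White (255) marks suspected wrinkle pixels (darker than thresh)."""
    return [[255 if p <= thresh else 0 for p in row] for row in img]

def fuse(maps):
    """Fuse black-and-white maps of equal size: white if any map is white there."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[255 if any(m[r][c] == 255 for m in maps) else 0 for c in range(w)]
            for r in range(h)]

# Toy 4x4 ROI: low values are dark (candidate wrinkle) pixels.
roi = [[200, 200, 200, 200],
       [200,  40,  40, 200],
       [200, 200, 200, 200],
       [200, 200, 200, 200]]
sizes = [(4, 4), (8, 8)]
bw_maps = [binarize(resize_nearest(roi, h, w)) for h, w in sizes]
final = fuse([resize_nearest(m, 4, 4) for m in bw_maps])  # back to one common size
```

The dark pixels survive both scales and so appear white in the fused map.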
  • the terminal device processes each of the at least two ROI images of different sizes to obtain at least two black-and-white images as follows: for each ROI image, the following steps are repeated at least once using a preset matrix: the terminal device covers the ROI image with the preset matrix and determines the pixel values of the pixels on the ROI image corresponding to each matrix element in the preset matrix; the terminal device determines the product of each matrix element and the pixel value of the pixel corresponding to that matrix element; the terminal device sums the products corresponding to all matrix elements, the sum being the pixel value of the center position of the image block covered by the matrix on the ROI image; if the pixel value of the center position of the image block is greater than a preset pixel value, the terminal device sets the center position to black, and if it is less than or equal to the preset pixel value, the terminal device sets the center position to white.
  • the terminal device uses a preset matrix to process each ROI image, determines the pixel values of the center positions of different image blocks on each ROI image, sets center positions with higher pixel values to black, and sets center positions with lower pixel values to white.
  • the white area in the black-and-white image obtained by the terminal device is an area where wrinkles may appear. In this way, the accuracy of wrinkle detection is improved.
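The matrix step above can be sketched as a plain convolution followed by the claimed thresholding (sum greater than the preset value → black, otherwise white); the 3×3 averaging kernel and the preset value below are made-up examples, since the values of the patent's 15×15 matrix are not given in this excerpt.

```python
def convolve_center(img, kernel, r, c):
    """Sum of elementwise products of `kernel` and the image block centred at (r, c)."""
    k = len(kernel) // 2
    return sum(kernel[i][j] * img[r + i - k][c + j - k]
               for i in range(len(kernel)) for j in range(len(kernel[0])))

def detect_map(img, kernel, preset):
    """Per the claim: centre sum > preset -> black (0); otherwise white (255),
    i.e. a suspected wrinkle pixel. Borders are left black for simplicity."""
    k = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(k, h - k):
        for c in range(k, w - k):
            out[r][c] = 0 if convolve_center(img, kernel, r, c) > preset else 255
    return out

# Toy 7x7 grayscale ROI: bright skin (200) with one dark wrinkle pixel (10).
roi = [[200] * 7 for _ in range(7)]
roi[3][3] = 10
mean3 = [[1 / 9] * 3 for _ in range(3)]    # hypothetical averaging matrix
bw = detect_map(roi, mean3, preset=190)    # blocks touching the dark pixel -> white
```

Dark wrinkle pixels pull the local average below the preset value, so their neighbourhood is marked white.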
  • before the terminal device fuses the at least two black-and-white images to obtain a final image, the terminal device also determines that M of the at least two black-and-white images have a white area at the same position and, if M is less than or equal to a preset value, deletes the white area at that position in those M images.
  • the terminal device may also delete white areas that meet a condition (for example, if only one black-and-white image has a white area at a certain position and the other black-and-white images have no white area at that position, that white area is deleted). In this way, the accuracy of wrinkle detection is improved.
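A minimal sketch of that deletion rule, under the assumption that "same position" means the same pixel coordinate: a white pixel that appears in no more than M of the maps is treated as noise and cleared.

```python
def prune_isolated_white(maps, preset_m):
    """Clear white pixels (255) that are white in at most `preset_m` of the maps."""
    h, w = len(maps[0]), len(maps[0][0])
    for r in range(h):
        for c in range(w):
            votes = sum(1 for m in maps if m[r][c] == 255)
            if 0 < votes <= preset_m:
                for m in maps:
                    m[r][c] = 0
    return maps

maps = [[[255, 255]], [[0, 255]], [[0, 255]]]   # three 1x2 black-and-white maps
prune_isolated_white(maps, preset_m=1)          # column 0 is white in only one map
```

Only the pixel confirmed by several maps survives the pruning.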
  • the terminal device determines the area where the beard is located on the final image; the terminal device determines the N white areas intersecting the area where the beard is located; for a first white area among the N white areas, the terminal device determines the ratio of the number of pixels of the first white area located in the beard area to the total number of pixels of the first white area; if the ratio is greater than or equal to a preset ratio, the terminal device deletes the first white area from the final image, and the remaining white areas on the final image are recognized as nasolabial folds.
  • after the terminal device recognizes wrinkles, it may further recognize nasolabial folds. To do so, the terminal device screens the nasolabial folds from the recognized wrinkles (for example, white areas that may be nasolabial folds are screened according to the area where the beard is located). In this way, the accuracy of recognizing nasolabial folds is improved.
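The beard-based screening can be sketched as follows; representing each white area as a set of pixel coordinates and the beard area as a pixel mask is an implementation choice assumed here, not dictated by the patent.

```python
def screen_by_beard(white_areas, beard_pixels, preset_ratio):
    """Delete white areas whose fraction of pixels inside the beard area is at
    least `preset_ratio`; the survivors are candidate nasolabial folds."""
    kept = []
    for area in white_areas:                  # area: set of (row, col) pixels
        in_beard = sum(1 for p in area if p in beard_pixels)
        if in_beard / len(area) < preset_ratio:
            kept.append(area)
    return kept

beard = {(10, c) for c in range(5)}
mostly_beard = {(10, 0), (10, 1), (10, 2)}    # 3/3 pixels inside the beard
fold_like = {(2, 0), (3, 1), (4, 2)}          # 0/3 pixels inside the beard
kept = screen_by_beard([mostly_beard, fold_like], beard, preset_ratio=0.5)
```

An area lying mostly inside the beard is discarded as beard texture rather than a fold.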
  • the terminal device determines the coordinate position of the nose wing in the final image; the terminal device deletes from the final image any white area that is within a preset distance range of that coordinate position and has a length greater than a preset length, and the remaining white areas on the final image are recognized as nasolabial folds.
  • the terminal device may also determine the white areas that may be nasolabial folds according to the position of the nose wings, so as to improve the accuracy of detecting nasolabial folds.
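That nose-wing rule can be sketched as: delete any white area that both lies within the preset distance of the nose-wing coordinate and exceeds the preset length. The length measure below (the larger side of the area's bounding box) is an assumption of this sketch.

```python
import math

def screen_by_nose_wing(white_areas, nose_wing, max_dist, max_len):
    """Delete white areas near the nose wing that are longer than `max_len`."""
    kept = []
    for area in white_areas:                  # area: list of (row, col) pixels
        rows = [p[0] for p in area]
        cols = [p[1] for p in area]
        length = max(max(rows) - min(rows), max(cols) - min(cols)) + 1
        near = any(math.hypot(r - nose_wing[0], c - nose_wing[1]) <= max_dist
                   for r, c in area)
        if not (near and length > max_len):
            kept.append(area)
    return kept

long_near = [(0, c) for c in range(6)]        # length 6, touches (0, 0)
short_far = [(20, 20), (20, 21)]              # length 2, far from the nose wing
kept = screen_by_nose_wing([long_near, short_far], (0, 0), max_dist=5, max_len=3)
```

Only the long area next to the nose wing is removed; the distant area remains a fold candidate.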
  • the preset ratio is 1 − n/m, where m is a preset fixed value and n is the number of white areas intersecting the area where the beard is located.
  • the terminal device converts the ROI image into a grayscale image; the terminal device horizontally aligns the grayscale image; the terminal device performs denoising processing on the horizontally aligned image.
  • the terminal device may also preprocess the ROI image.
  • the preprocessing process includes grayscale conversion, horizontal alignment, and denoising. Preprocessing improves the accuracy of wrinkle detection.
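Two of the three preprocessing steps can be sketched directly (horizontal alignment is omitted because it needs facial-landmark angles not described in this excerpt); the BT.601 luma weights and the 3×3 median filter are common choices assumed here, not taken from the patent.

```python
def to_gray(rgb):
    """Grayscale conversion using the common ITU-R BT.601 luma weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def median_denoise(gray):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            block = sorted(gray[r + dr][c + dc]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = block[4]          # median of the 9 neighbourhood values
    return out

noisy = [[100, 100, 100], [100, 255, 100], [100, 100, 100]]
clean = median_denoise(noisy)             # the lone bright pixel is smoothed away
```

The median filter removes isolated speckle noise without blurring wrinkle edges as much as an averaging filter would.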
  • the terminal device determines the evaluation result y of the white areas based on the following formula: y = ω1·x1 + ω2·x2 + ω3·x3 + ω4·x4 + ω5·x5 + ω6·x6 + b, where ω1 to ω6 represent the weights of the respective features, and
  • x1 represents the average width of the white area
  • x2 represents the average length of the white area
  • x3 represents the average internal and external color contrast of the white area
  • x4 represents the ratio of the number of pixels in the white areas to the total number of pixels in the ROI image
  • x5 and x6 respectively represent the length and width of the longest white area among the white areas
  • b represents the offset.
  • after detecting wrinkles, the terminal device can evaluate them. Through the above formula, the evaluation result of the wrinkles can be determined more accurately.
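Under the natural reading that y is a weighted combination of the six features plus the offset b, the evaluation can be sketched as below; the feature values and weights are placeholders, since the excerpt does not specify them.

```python
def wrinkle_score(features, weights, b):
    """y = w1*x1 + w2*x2 + ... + w6*x6 + b over the six white-area features."""
    assert len(features) == len(weights) == 6
    return sum(w * x for w, x in zip(weights, features)) + b

# Hypothetical feature values x1..x6 (width, length, contrast, pixel ratio,
# longest-area length/width) and placeholder unit weights.
y = wrinkle_score([2.0, 30.0, 0.4, 0.05, 35.0, 3.0],
                  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], b=0.5)
```

In practice the weights and offset would be fitted so that y tracks expert wrinkle ratings.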
  • before the terminal device acquires the original image, the terminal device detects a first operation, runs a first application, turns on the camera, and displays a viewfinder interface; after the terminal device recognizes the wrinkles in the final image, prompt information is displayed in the viewfinder interface, and the prompt information is used to prompt the position of the wrinkles in the human face.
  • the terminal device may integrate the wrinkle detection function in the first application. The first application may be an application that comes with the terminal device, such as a camera application or a dedicated skin-detection application, or an application downloaded from the network side during use of the terminal device. After the terminal device recognizes the wrinkles, it can prompt the user with the position of the wrinkles in the face. This helps users, especially women, manage their skin, simplifies operation, and improves the user experience.
  • before the terminal device acquires the original image, the terminal device is in a screen-locked state; after the terminal device recognizes wrinkles in the final image, the terminal device compares the wrinkles with the wrinkles in a pre-stored image; if they are consistent, the terminal device unlocks the screen.
  • the wrinkle detection function can be applied in the field of face unlocking. After the terminal device collects an image and recognizes the wrinkles in it, the wrinkles are compared with those in a pre-stored image, and if they match, the device is unlocked. In this way, device security is improved.
  • the terminal device displays a payment verification interface
  • after the terminal device recognizes wrinkles in the final image, the terminal device compares the wrinkles with the wrinkles in a pre-stored image; if they are consistent, the terminal device performs the payment process.
  • the wrinkle detection function can be applied in the field of face payment. After the terminal device collects an image and recognizes the wrinkles in it, the wrinkles are compared with those in a pre-stored image, and if they match, the payment process is executed. In this way, payment security is improved.
  • a prompt message is output to prompt the user that no wrinkles are detected.
  • when the terminal device does not recognize wrinkles, it may also prompt the user that no wrinkles were recognized. This helps users, especially women, manage their skin, simplifies operation, and improves the user experience.
  • an embodiment of the present application further provides a terminal device.
  • the terminal device includes a camera, a processor, and a memory; the camera is used to acquire an original image, the original image including a human face; the memory is used to store one or more computer programs; and when the one or more computer programs stored in the memory are executed by the processor, the terminal device can implement the first aspect and any possible technical solution of the first aspect.
  • an embodiment of the present application further provides a terminal device, the terminal device including modules/units that execute the method of the first aspect or any one of the possible designs of the first aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
  • a chip according to an embodiment of the present application is coupled to a memory in an electronic device and executes the technical solution of the first aspect of the embodiments of the present application or any possible design of the first aspect; "coupled" means that two components are joined to each other directly or indirectly.
  • a computer-readable storage medium of an embodiment of the present application includes a computer program which, when run on a terminal device, causes the terminal device to execute the first aspect and any possible technical solution of the first aspect of the embodiments of the present application.
  • a computer program product in an embodiment of the present application includes instructions that, when the computer program product runs on a terminal device, cause the terminal device to perform the first aspect and any possible technical solution of the first aspect of the embodiments of the present application.
  • FIG. 1 is a schematic diagram of a mobile phone 100 provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of a mobile phone 100 provided by an embodiment of this application.
  • FIG. 3 is a schematic flowchart of a wrinkle detection method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a wrinkle detection method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a nasolabial fold detection area on an original image provided by an embodiment of this application.
  • FIG. 6 is a schematic diagram of an ROI image segmented from an original image provided by an embodiment of this application.
  • FIG. 7 is a schematic flowchart of preprocessing an ROI image provided by an embodiment of this application.
  • FIG. 8 is a schematic flowchart of resizing an ROI image provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an image block of an ROI image covered by a matrix provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a 15×15 matrix provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a final image provided by an embodiment of this application.
  • FIG. 12 is a schematic diagram of a process of filtering out nasolabial folds from a final image provided by an embodiment of the present application.
  • the applications (apps) involved in the embodiments of the present application are computer programs capable of realizing one or more specific functions.
  • multiple applications can be installed in the terminal device.
  • for example: camera applications, SMS applications, MMS applications, various email applications, chat applications, WhatsApp Messenger, Line, Instagram, KakaoTalk, DingTalk (钉钉), etc.
  • the application mentioned below may be an application that comes with the terminal when it leaves the factory, or it may be an application that the user downloads from the network side during the use of the terminal.
  • the wrinkle detection function provided by the embodiment of the present application may be integrated in one or more applications, such as a camera application or a WeChat application. Taking the camera application as an example, the terminal device starts the camera application and displays a viewfinder interface.
  • the viewfinder interface may include a control.
  • the terminal device may start the wrinkle detection function provided by the embodiment of the present application.
  • the terminal device displays the WeChat sticker creation interface
  • a control is displayed in the sticker creation interface.
  • the terminal device may also start the wrinkle detection function provided by the embodiment of the present application.
  • a pixel can correspond to a coordinate point on the image.
  • a pixel can include one parameter (such as grayscale) or a collection of multiple parameters (such as grayscale, brightness, color, etc.). If the pixel includes a parameter, the pixel value is the value of the parameter. If the pixel is a set of multiple parameters, the pixel value includes the value of each parameter in the set.
  • the original image involved in the embodiment of the present application is generated as follows: the lens group in the camera collects the optical signal reflected by the object to be photographed (such as a human face), and an image of the object to be photographed is then generated from that optical signal. That is, the original image is an image that includes a human face but has not yet been processed.
  • the region of interest (ROI) involved in the embodiments of the present application is a partial region determined by the terminal device from the original image, and this partial region is the region where the wrinkles are located, and is called the ROI region.
  • the terminal device determines the area in the original image where the nasolabial folds are located, and that area is the ROI area.
  • the above-mentioned ROI area is a partial area of the original image, whereas the ROI image is the image obtained when the terminal device segments the ROI area from the original image (an image composed of the ROI area).
  • the embodiment of the present application does not limit the name of the area where the wrinkle is located.
  • the area where the wrinkle is located may be called other names besides the ROI area.
  • the name of the ROI image is not limited.
  • the original image or the ROI image may be used as an input image of the wrinkle detection algorithm provided by the embodiment of the present application.
  • the wrinkle detection algorithm provided by the embodiment of the present application detects wrinkles in the input image.
  • "Multiple", as used in the embodiments of the present application, means greater than or equal to two.
  • the terminal device may be a portable device, such as a mobile phone, a tablet computer, a wearable device with a wireless communication function (such as a smart watch), and so on.
  • the portable terminal has an image acquisition function and sufficient computing capability to run the wrinkle detection algorithm provided by the embodiments of the present application.
  • Exemplary embodiments of portable devices include, but are not limited to, portable devices running various operating systems.
  • the above portable device may also be other portable devices, as long as they can realize the image acquisition function and algorithm calculation capability (be able to run the wrinkle detection algorithm provided by the embodiment of the present application).
  • the above terminal device may also not be a portable device, but a desktop computer capable of realizing the image acquisition function and the algorithm calculation capability (able to run the wrinkle detection algorithm provided by the embodiments of the present application).
  • the terminal device may also have an arithmetic operation capability (be able to run the wrinkle detection algorithm provided by the embodiments of the present application) and a communication function without having an image acquisition function.
  • the terminal device receives an image sent by another device, and then runs the wrinkle detection algorithm provided in the embodiment of the present application to detect wrinkles in the image.
  • the terminal device itself has an image acquisition function and an arithmetic operation function as an example.
  • FIG. 1 shows a schematic structural diagram of a mobile phone 100.
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, and a touch sensor 180K (of course, the mobile phone 100 may also include other sensors, such as a pressure sensor, an acceleration sensor, a gyro sensor, an ambient light sensor, a bone conduction sensor, etc., not shown).
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100. The controller generates operation control signals according to the instruction operation code and timing signals, and controls instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from this memory, which avoids repeated access and reduces the waiting time of the processor 110, thus improving the efficiency of the system.
  • the processor 110 may run the wrinkle detection algorithm provided by the embodiment of the present application to detect wrinkles on the image.
  • the mobile phone 100 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the camera 193 (front camera or rear camera) is used to capture still images or video.
  • the camera 193 may include a lens group and a photosensitive element such as an image sensor, where the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed (such as a human face) and passing the collected light signal to the image sensor.
  • the image sensor generates an image of the object to be photographed (such as a face image) according to the light signal.
  • the face image may be sent to the processor 110, and the processor 110 runs the wrinkle detection algorithm provided in the embodiment of the present application to detect the wrinkle on the face image.
  • the display screen 194 may display the prompt information of the wrinkles.
  • the prompt information is used to prompt the user that the wrinkles exist, or to prompt the user where the wrinkles are located, and so on.
  • the camera 193 shown in FIG. 1 may include 1 to N cameras. If one camera is included (or multiple cameras are included but only one is turned on at a time), the mobile phone 100 may perform wrinkle detection on the face image collected by that camera (or by the camera turned on at the current time). If multiple cameras are included and turned on at the same time, the mobile phone 100 can perform wrinkle detection on the face image collected by each turned-on camera.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the mobile phone 100.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area may store codes of the operating system and application programs (such as camera applications, WeChat applications, etc.).
  • the storage data area may store data created during the use of the mobile phone 100 (such as images and videos collected by the camera application).
  • the internal memory 121 may also store the code of the wrinkle detection algorithm provided by the embodiment of the present application. When the code of the wrinkle detection algorithm stored in the internal memory 121 is executed by the processor 110, the wrinkle detection function is realized.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the code of the wrinkle detection algorithm provided by the embodiment of the present application may also be stored in an external memory.
  • the processor 110 may run the code of the wrinkle detection algorithm stored in the external memory through the external memory interface 120 to implement the corresponding wrinkle detection function.
  • the functions of the sensor module 180 are described below.
  • the distance sensor 180F is used to measure the distance.
  • the mobile phone 100 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the mobile phone 100 may use the distance sensor 180F to measure distance to achieve fast focusing. In other embodiments, the mobile phone 100 can also use the distance sensor 180F to detect whether a person or an object is close.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light outward through a light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100. When insufficient reflected light is detected, the mobile phone 100 can determine that there is no object near the mobile phone 100.
  • the mobile phone 100 can use the proximity light sensor 180G to detect that the user is holding the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, take a photo with a fingerprint, answer a call with a fingerprint, and so on.
  • the temperature sensor 180J is used to detect the temperature.
  • the mobile phone 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • the touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the location where the display screen 194 is located.
  • the wireless communication function of the mobile phone 100 can be realized through the antenna 1, the antenna 2, the mobile communication module 151, the wireless communication module 152, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • each antenna in the terminal device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 151 may provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied to the terminal device 100.
  • the mobile communication module 151 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on.
  • the mobile communication module 151 can receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to a modem processor for demodulation.
  • the mobile communication module 151 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 151 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 151 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 151 or other functional modules.
  • the wireless communication module 152 can provide wireless communication solutions applied to the terminal device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 152 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 152 receives electromagnetic waves via the antenna 2, frequency-modulates and filters electromagnetic wave signals, and transmits the processed signals to the processor 110.
  • the wireless communication module 152 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves through the antenna 2 to radiate it out.
  • the mobile phone 100 may receive a face image sent by another device through the mobile communication module 151 or the wireless communication module 152, and then run the wrinkle detection algorithm provided in the embodiments of the present application to detect wrinkles in the face image.
  • the mobile phone 100 itself may not have an image acquisition function.
  • the mobile phone 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor.
  • the mobile phone 100 may receive key 190 input and generate key signal input related to user settings and function control of the mobile phone 100.
  • the mobile phone 100 can use the motor 191 to generate vibration prompts (such as incoming call vibration prompts).
  • the indicator 192 in the mobile phone 100 can be an indicator light, which can be used to indicate the charging state, the power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 in the mobile phone 100 is used to connect a SIM card. The SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than those shown in FIG. 1.
  • the detection of wrinkles by the mobile phone 100 shown in FIG. 2 may be the following process:
  • the display screen 194 of the mobile phone 100 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • the lens group 193-1 in the camera 193 collects the light signal reflected by the object to be photographed (such as a human face), and transmits the collected light signal to the image sensor 193-2.
  • the image sensor 193-2 generates an original image of the object to be photographed (including the human face in the original image) according to the light signal.
  • the image sensor 193-2 sends the original image to the application processor 110-1.
  • the application processor 110-1 runs the code of the wrinkle detection algorithm provided in the embodiment of the present application (for example, the application processor 110-1 runs the code of the wrinkle detection algorithm stored in the internal memory 121) to detect wrinkles in the original image.
  • the application processor 110-1 can output prompt information (such as displaying text information in the viewfinder interface), and the prompt information can be used to mark the location of the wrinkles on the original image.
  • the application processor 110-1 may also output a prompt message to prompt the user that no wrinkles are detected.
  • the wrinkle detection method provided in the embodiments of the present application may be applicable to the detection of wrinkles (such as decree lines, i.e., nasolabial folds, and crow's feet) on any body part (such as the face).
  • FIG. 3 shows the wrinkle detection flow performed when the application processor 110-1 runs the code of the wrinkle detection algorithm.
  • the application processor 110-1 divides the decree detection area from the original image (including a human face) to obtain an ROI image.
  • the application processor 110-1 adjusts the size of the ROI image to obtain at least two ROI images of different scales (taking 3 images in FIG. 3 as an example).
  • the application processor 110-1 processes each ROI image of different scales according to a linear operator detection algorithm to obtain three black and white images, and each black and white image includes stripes.
  • the application processor 110-1 fuses three black and white images to obtain a final image, and the final image includes at least one stripe.
  • the application processor 110-1 selects one or more stripes from the final image; the selected stripes are the decree lines.
  • the wrinkle detection method does not require a cumbersome operation process: wrinkles can be detected simply by taking an image that includes a human face, which is convenient to operate; moreover, the wrinkle detection function can be integrated into portable terminal devices such as mobile phones and iPads, giving it broad applicability.
  • FIG. 4 is a schematic flowchart of a wrinkle detection method provided by an embodiment of the present application. As shown in Figure 4, the process includes:
  • the application processor 110-1 determines the ROI area on the original image, and the ROI area is a decree detection area.
  • S301 can be implemented through the following steps:
  • the application processor 110-1 determines the key points on the original image according to a key point detection algorithm; the key points indicate the characteristic parts of the face in the original image, and the characteristic parts include the eyes, eyebrows, nose, mouth, face contour, and so on.
  • the key point detection algorithm may be a face key point detection algorithm based on deep learning or other algorithms, which is not limited in the embodiments of the present application.
  • FIG. 5 is a schematic diagram of key points on an original image provided by an embodiment of the present application. As shown in (a) of FIG. 5, multiple key points on the original image (the application processor 110-1 may label each key point) are distributed at the feature parts (eyes, eyebrows, nose, mouth, and face contour).
  • the application processor 110-1 determines the decree detection area according to the key points.
  • the application processor 110-1 determines the decree detection area, namely the ROI area, according to key points 44, 48, 54, 14, 13, 128, and so on; the ROI area is the area within the oval frame shown in (b) of FIG. 5.
  • the first and second steps above introduce one possible implementation by which the application processor 110-1 determines the decree detection area on the original image.
  • the application processor 110-1 may also have other ways to determine the decree detection area on the image, which is not limited in the embodiments of the present application.
  • the application processor 110-1 divides the ROI area from the original image to obtain an ROI image, and adjusts the size of the ROI image to obtain at least two ROI images of different sizes.
  • after the application processor 110-1 determines the ROI area (the area inside the ellipse frame), the ROI area can be segmented out to obtain an ROI image; see FIG. 6 (generally, the ROI image in FIG. 6 is in color).
  • the application processor 110-1 only needs to process the segmented area rather than the entire image, which reduces the amount of computation.
  • after the application processor 110-1 segments out the ROI area to obtain an ROI image, it can scale the ROI image to obtain at least two ROI images of different sizes.
  • the application processor 110-1 may also preprocess the ROI image.
  • the ROI image preprocessing can be performed before or after the application processor segments out the ROI image (if it is performed before segmentation, either only the ROI area or the entire image may be preprocessed).
  • alternatively, the application processor 110-1 may first scale the ROI image to obtain at least two ROI images of different sizes, and then preprocess each ROI image of a different size.
  • the application processor 110-1 may preprocess the ROI image as follows:
  • FIG. 7 is a schematic diagram of a ROI image preprocessing process provided by an embodiment of the present application.
  • the application processor 110-1 performs gray-scale processing on the ROI image, that is, converts the ROI image from a color image to a gray-scale image (as described above, the ROI image in FIG. 6 is in color, so the first step in FIG. 7 is to convert the color ROI image to grayscale).
  • the application processor 110-1 horizontally adjusts the ROI image.
  • the application processor 110-1 denoises the ROI image.
  • the application processor 110-1 may filter the ROI image using a prior art method such as Gaussian filtering.
  • FIG. 7 is only an example of preprocessing the ROI image by the application processor 110-1. In actual applications, in addition to the preprocessing steps shown in FIG. 7, other preprocessing steps may be included, such as filtering the ROI image.
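The preprocessing steps above (grayscale conversion followed by Gaussian filtering) can be sketched as follows. This is an illustrative pure-Python sketch on nested-list images; the function names and the BT.601 luma weights are assumptions, not details taken from the embodiment.

```python
def to_grayscale(rgb_image):
    """Convert an H x W image of (r, g, b) tuples to a grayscale image,
    using the common BT.601 luma weights (an assumption; the embodiment
    only says the color ROI image is converted to grayscale)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def gaussian_kernel_1d(radius=1, sigma=1.0):
    """1-D Gaussian weights, normalized to sum to 1."""
    import math
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def gaussian_blur(gray, radius=1, sigma=1.0):
    """Separable Gaussian filter with edge clamping, as one possible way
    to denoise the ROI image before linear-operator filtering."""
    k = gaussian_kernel_1d(radius, sigma)
    h, w = len(gray), len(gray[0])
    # horizontal pass
    tmp = [[sum(k[d + radius] * row[min(max(x + d, 0), w - 1)]
                for d in range(-radius, radius + 1)) for x in range(w)]
           for row in gray]
    # vertical pass
    return [[sum(k[d + radius] * tmp[min(max(y + d, 0), h - 1)][x]
                 for d in range(-radius, radius + 1)) for x in range(w)]
            for y in range(h)]
```

A constant image passes through the blur unchanged because the kernel is normalized.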
  • the application processor 110-1 performs a linear operator filtering process on each ROI image with a different size to obtain at least two black and white images.
  • the application processor 110-1 adjusts the size of the ROI image to obtain at least two ROI images of different sizes.
  • the following uses the application processor 110-1 to adjust the size of the ROI image to obtain three ROI images of different sizes as an example.
  • the application processor 110-1 performs linear operator filtering on each of the three ROI images of different sizes to obtain three black-and-white images.
  • the following uses one ROI image among three ROI images of different sizes as an example to introduce the process of applying a linear operator filter processing to this ROI image by the application processor 110-1 to obtain a black-and-white image.
  • the processing of the other ROI images is the same and will not be repeated here.
  • FIG. 9 is a schematic diagram of performing linear operator filtering on an ROI image provided by an embodiment of the present application. Specifically, the flow of linear operator filter processing on the ROI image is as follows:
  • the application processor 110-1 "sets" the linear operator (preset matrix) on the ROI image, that is, the preset matrix "overlays” an image block on the ROI image.
  • the linear operator is in the form of a matrix, and each matrix element in the matrix can correspond to a coordinate position.
  • the matrix is a 3 * 3 matrix.
  • the matrix element in row 1, column 1 (that is, the value 1) corresponds to coordinates (x1, y1); the matrix element in row 1, column 2 (also 1) corresponds to coordinates (x2, y1), and so on.
  • the application processor 110-1 determines the pixel value corresponding to the position coordinate of each matrix element.
  • each matrix element in the matrix corresponds to a pixel on the image block covered by the matrix.
  • the matrix element in row 1, column 1 corresponds to the pixel with coordinates (x1, y1) on the image block; the matrix element in row 1, column 2 corresponds to the pixel with coordinates (x2, y1) on the image block, and so on. Therefore, the application processor 110-1 determines the pixel value corresponding to the matrix element in row 1, column 1 (assume that this pixel value is p11), determines the pixel value corresponding to the matrix element in row 1, column 2 (assume that this pixel value is p12), and so on.
  • the application processor 110-1 multiplies each matrix element by its corresponding pixel value and sums the nine products to obtain the pixel value at the center of the image block covered by the matrix.
  • p11, p12, p13, p21, p22, p23, p31, p32, p33 are the pixel values of the pixels corresponding to each matrix element.
  • the application processor 110-1 can determine the pixel value of the center position of the image block covered by the matrix (linear operator).
  • the first to fourth steps above take a 3 * 3 matrix as an example.
  • the matrix can be an n * n matrix, where n is an odd number greater than or equal to 3 (for example, 3, 5, 7, 9, 11, or 15).
  • see FIG. 10 for a schematic diagram of a 15 * 15 matrix.
  • the matrix elements in each row have the same value; for example, the first row is all 1s. Moving from the center row of the matrix (such as row 8) toward the other rows, the matrix element values increase; that is, the values on the center row are the smallest.
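A matrix with the structure just described (every element in a row equal, the center row smallest, values increasing toward the outer rows, which are all 1s) could be generated as follows. The exact magnitudes here, including the linear ramp and the negative center value, are illustrative assumptions; the embodiment only specifies the qualitative pattern.

```python
def line_operator(n):
    """Build an n x n line-detection operator: each row is constant, the
    center row has the smallest value, and values increase row by row
    toward the top and bottom edges (the outermost rows being all 1s).
    A linear ramp from -1 (center) to 1 (edges) is one simple choice
    consistent with the description; the true values are not given."""
    assert n >= 3 and n % 2 == 1
    center = n // 2
    def row_value(r):
        d = abs(r - center)           # distance from the center row
        return 1 - 2 * (center - d) / center  # 1 at edges, -1 at center
    return [[row_value(r)] * n for r in range(n)]
```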
  • the above four steps introduce the process of determining the pixel value of the center position of the image block covered by the matrix when the application processor 110-1 sets the matrix at a certain position on the ROI image.
  • as the matrix slides to the next position, it covers another image block.
  • the application processor 110-1 may determine the pixel value of the center position of the next image block in a similar manner as described above. Therefore, the application processor 110-1 will obtain a plurality of pixel values at the center positions determined by the matrix covering different image blocks.
  • among the pixel values of the plurality of center positions, the application processor 110-1 sets the coordinate points whose values are greater than a preset pixel value (such as 300) to black, and sets the coordinate points whose values are less than or equal to the preset pixel value to white. In this way, the application processor 110-1 converts the ROI image into a black-and-white image.
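The steps above (cover an image block with the operator, multiply each matrix element by the pixel it covers, sum the products to get the center response, then threshold the responses) can be sketched as follows. The threshold value and the black/white mapping follow the text; the function name and the border handling are illustrative assumptions.

```python
def filter_and_binarize(gray, operator, threshold=300):
    """Slide the n x n operator over the grayscale ROI image; at each
    position, multiply each matrix element by its corresponding pixel
    value and sum the products to get the response at the center of the
    covered image block. Following the text, responses greater than the
    preset value (e.g. 300) are set to black (0) and the rest to white
    (255). Border pixels, which the operator cannot center on, are left
    white here (an assumption)."""
    n = len(operator)
    half = n // 2
    h, w = len(gray), len(gray[0])
    out = [[255] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            resp = sum(operator[i][j] * gray[y - half + i][x - half + j]
                       for i in range(n) for j in range(n))
            out[y][x] = 0 if resp > threshold else 255
    return out
```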
  • in the above description, the center-position pixel values of different image blocks on the ROI image are determined by scaling the ROI image while keeping the matrix unchanged. Alternatively, the mobile phone 100 can store multiple matrices, such as a 3 * 3 matrix, a 5 * 5 matrix, and a 15 * 15 matrix, and then perform the above process by applying each of the three matrices to the same ROI image.
  • the above describes the process by which the mobile phone 100 obtains a black-and-white image after processing one of the three ROI images of different sizes in FIG. 8.
  • for the other two ROI images, a similar method is used to obtain two more black-and-white images. Therefore, the application processor 110-1 obtains three black-and-white images in total (see FIG. 8).
  • the white area on each black-and-white image (hereinafter referred to as stripes) is the area where the decree may appear.
  • S404 The application processor 110-1 fuses at least two black and white images to obtain a final image.
  • if M (M is greater than or equal to 2) of the at least two black-and-white images have stripes at the same position, the application processor 110-1 retains those stripes. If only one of the at least two black-and-white images has stripes at a certain position, and the other black-and-white images have no stripes at that position, those stripes are deleted. Therefore, the final image obtained by the application processor 110-1 includes at least one stripe.
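The fusion rule above (keep a stripe position only if at least M of the black-and-white images, M >= 2, are white there) can be sketched as a per-pixel vote. This assumes the black-and-white images have been brought back to a common size before fusion, which the text implies but does not detail.

```python
def fuse(bw_images, m=2):
    """Fuse equally sized black-and-white images (0 = black, 255 = white):
    a position is kept white (a stripe) in the final image only if at
    least m of the input images are white at that position; otherwise it
    becomes black."""
    h, w = len(bw_images[0]), len(bw_images[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(1 for img in bw_images if img[y][x] == 255)
            out[y][x] = 255 if votes >= m else 0
    return out
```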
  • FIG. 11 is a schematic diagram of a final image provided by an embodiment of the present application. As shown in FIG. 11, the image includes white areas (stripes).
  • S405 The application processor 110-1 filters the stripes on the final image to determine the decree lines.
  • the process of S405 may be as follows:
  • in the first step, the application processor 110-1 filters out the stripes in the final image whose number of pixels is less than a preset number of pixels. As shown in (a) of FIG. 12, the application processor 110-1 filters out the stripes with a small number of pixels (that is, white areas of small area) in the final image to obtain the image shown in (b) of FIG. 12.
  • in the second step, hair (beard) is removed.
  • the application processor 110-1 determines the area where the beard may appear in the image shown in (b) of FIG. 12, such as the white frame area in (b) of FIG. 12. (It should be noted that the white frame area is different from the aforementioned white areas: a white area is a stripe, that is, pixels whose values are set to white, whereas the white frame is merely a box drawn for the reader's understanding, marking the area where the beard may appear in the image.)
  • the application processor 110-1 determines all white areas (stripes) that intersect the red frame area, and determines, for each of these stripes, the ratio of the number of its pixels within the red frame area to its total number of pixels.
  • that is, the ratio is K / J, where K is the number of the stripe's pixels within the frame area and J is the total number of pixels of the stripe.
  • Each stripe that intersects the red frame area has a corresponding ratio.
  • if the ratio corresponding to a stripe is greater than a preset ratio, the application processor 110-1 determines that the stripe is a beard and deletes the stripe.
  • the preset ratio may be (1 - n / m), where n is the number of all stripes that intersect the red frame and m is a fixed value, such as 10.
  • the mobile phone 100 can filter out the stripes corresponding to the beard or hair.
  • the larger n is, the more stripes intersect the red frame area and the heavier the beard is considered to be, so the preset ratio is smaller; as long as the ratio corresponding to a stripe is greater than the preset ratio, the stripe is deleted. This improves the accuracy of hair (beard) removal.
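The two filtering steps above (drop stripes with too few pixels, then drop stripes whose in-frame ratio K / J exceeds the preset ratio 1 - n / m) can be sketched as follows. Representing a stripe as a set of pixel coordinates and the frame as a rectangle are illustrative assumptions.

```python
def filter_stripes(stripes, frame, min_pixels=10, m=10):
    """stripes: list of sets of (x, y) pixel coordinates.
    frame: rectangle (x0, y0, x1, y1) marking where the beard may appear.
    First drop stripes with fewer than min_pixels pixels; then, with
    n = number of remaining stripes intersecting the frame, delete every
    stripe whose in-frame ratio K / J exceeds the preset ratio 1 - n / m."""
    x0, y0, x1, y1 = frame
    inside = lambda p: x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    stripes = [s for s in stripes if len(s) >= min_pixels]
    n = sum(1 for s in stripes if any(inside(p) for p in s))
    preset = 1 - n / m
    kept = []
    for s in stripes:
        k = sum(1 for p in s if inside(p))
        if k and k / len(s) > preset:
            continue  # treated as beard/hair and deleted
        kept.append(s)
    return kept
```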
  • next, the nose is located.
  • at a distance of t times (t < 1, generally 0.2) the image width from the right border of the image (such as the position of the white vertical solid line; it should be noted that the white vertical solid line is different from the aforementioned white areas, that is, the stripes, and is a straight line drawn for the reader's understanding), the application processor 110-1 searches from bottom to top for the first stripe whose area is greater than a preset area; that stripe is the nose.
  • because the nose occupies a relatively large area in the image, and because, after the application processor 110-1 horizontally rotates the ROI image in the aforementioned process, the nose lies on the right side of the image, the application processor 110-1 determines the position of the alar within the right-side area, which improves the accuracy of alar positioning.
  • the application processor 110-1 may also determine the position of the nose in other ways.
  • the third step is to screen out the decree lines.
  • the application processor 110-1 selects one or more stripes with a length greater than a preset length from the image shown in (c) of FIG. 12. Taking the horizontal level above the alar (as shown by the white horizontal dotted line in (d) of FIG. 12) as a boundary, the application processor 110-1 determines the stripes located within a certain threshold range above and below the white horizontal dotted line.
  • the white horizontal dashed line in (d) of FIG. 12 may be perpendicular to the white vertical solid line in (c) of FIG. 12, that is, a straight line perpendicular to the white vertical solid line above the nose.
  • if the application processor 110-1 determines that there are multiple stripes within a certain threshold range above and below the white horizontal dashed line, the stripes located on the left or upper left of the image can be deleted to finally determine the decree lines (because the probability of the decree lines appearing in the lower right of the image is greater than the probability of their appearing in the upper left).
  • after the mobile phone 100 detects the decree lines of the face in the original image, it can also evaluate the detected decree lines.
  • the processor 110 in the mobile phone 100 can give a score to the severity of the decree pattern.
  • the scoring process of the processor 110 is as follows:
  • the application processor 110-1 scores the decree pattern based on a preset formula, where the preset formula is:
  • Decree line score y = w1 * x1 + w2 * x2 + w3 * x3 + w4 * x4 + w5 * x5 + w6 * x6 + b
  • x1 represents the average width of all stripes
  • x2 represents the average length of all stripes
  • x3 represents the average internal and external color contrast of all stripes
  • x4 represents the ratio of the number of pixels of all stripes to the total number of pixels of the ROI image
  • x5 represents the length of the longest stripe
  • x6 represents the width of the longest stripe
  • b represents the offset.
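The scoring formula above can be sketched directly as a weighted sum over the six stripe features. Any concrete weight values would be placeholders: the values of w1-w6 and b are determined offline as described below, and are not given in the text.

```python
def decree_score(features, weights, b):
    """Compute y = w1*x1 + ... + w6*x6 + b for the six stripe features
    (average width, average length, average inside/outside contrast,
    pixel-count ratio of all stripes to the ROI image, longest-stripe
    length, longest-stripe width)."""
    assert len(features) == len(weights) == 6
    return sum(w * x for w, x in zip(weights, features)) + b
```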
  • the values of w1-w6 and b may be preset, for example, the values of w1-w6 and b may be determined according to the following manner.
  • the designer collects multiple images and tests each image using the wrinkle detection algorithm provided in the embodiments of the present application.
  • the designer manually scores the decree pattern in the image, that is, the decree pattern score y is known.
  • the values of x1-x6 can then be determined. In this way, each image corresponds to one value of y and one group of x1-x6, so the multiple images yield multiple such (y, x1-x6) groups.
  • the unknowns w1-w6 and b can be determined through the above process. After the values of w1-w6 and b are determined, they are stored in the internal memory 121 of the mobile phone 100 (as shown in FIG. 1). In the subsequent scoring process, the processor 110 can read the values of w1-w6 and b from the internal memory 121 as needed.
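Determining w1-w6 and b from manually scored images, as described above, amounts to a linear least-squares fit. Below is a pure-Python sketch via the normal equations; the solving method is an assumption, since the text does not specify how the unknowns are determined from the (y, x1-x6) groups.

```python
def fit_weights(X, y):
    """Given rows of six features X[i] and manual scores y[i], solve the
    least-squares problem for (w1..w6, b) via the normal equations
    A^T A t = A^T y, where A is X augmented with a column of 1s for b.
    Plain Gaussian elimination with partial pivoting; assumes A^T A is
    well conditioned."""
    a = [list(row) + [1.0] for row in X]   # design matrix with bias column
    ncol = len(a[0])
    ata = [[sum(r[i] * r[j] for r in a) for j in range(ncol)]
           for i in range(ncol)]
    aty = [sum(r[i] * yi for r, yi in zip(a, y)) for i in range(ncol)]
    for col in range(ncol):
        piv = max(range(col, ncol), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, ncol):
            f = ata[r][col] / ata[col][col]
            for c in range(col, ncol):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    t = [0.0] * ncol
    for r in range(ncol - 1, -1, -1):
        t[r] = (aty[r] - sum(ata[r][c] * t[c]
                             for c in range(r + 1, ncol))) / ata[r][r]
    return t[:-1], t[-1]                   # (w1..w6, b)
```

With enough independent scored images (at least seven), the fit recovers the weights exactly when the data are noise-free.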
  • the above description takes the mobile phone 100 detecting the decree lines on the human face in the original image as an example.
  • the method can also be applied to the detection of other wrinkles, such as the detection of crow's feet.
  • the wrinkle detection algorithm provided by the embodiment of the present application may be applicable to any scene capable of image acquisition.
  • the camera application of the mobile phone 100 is provided with a wrinkle detection control.
  • the mobile phone 100 uses the above-mentioned wrinkle detection algorithm to detect wrinkles on the collected face image.
  • the mobile phone 100 may also have installed an app specifically for detecting wrinkles (preinstalled when the mobile phone 100 is shipped from the factory, or downloaded from the network side during use); when running the app, the mobile phone 100 uses the above wrinkle detection algorithm to detect wrinkles on the face image.
  • the above wrinkle detection algorithm may also be integrated in other apps, such as a beauty camera, etc., which is not limited in the embodiments of the present application.
  • the above wrinkle detection method can also be applied in the field of face unlocking, for example, the face image stored in the mobile phone 100 has a decree line.
  • when the mobile phone 100 detects a face image, it can use the above wrinkle detection method to detect the decree lines in the collected face image. If the detected decree lines match the decree lines in the stored image (other parts of the collected image, such as the eyes, can also be compared synchronously with the corresponding parts of the stored image), the device is unlocked. This helps improve the accuracy of face unlocking and the security of the device.
  • the above-mentioned wrinkle detection algorithm can also be applied in fields such as face payment, face card punching and the like.
  • face payment as an example, the mobile phone 100 displays a payment verification interface; the payment verification interface displays a framing frame.
  • after the mobile phone 100 collects a face image (the face image is displayed in the framing frame) and detects wrinkles on the face image, it can compare the detected wrinkles with the wrinkles in the stored image (other parts of the collected image, such as the eyes, can also be compared synchronously with the corresponding parts of the stored image). If they match, the payment process is executed; if they do not match, a prompt message is output to remind the user that the payment has failed. In this way, payment security can be improved.
  • the wrinkle detection algorithm provided in the embodiments of the present application may also be applicable to any scenario in which, after an image is received from another device or from the network side, the above wrinkle detection algorithm is run to detect wrinkles on the image, and so on.
  • the method provided by the embodiments of the present application is introduced from the perspective of the terminal device (mobile phone 100) as an execution subject.
  • the terminal may include a hardware structure and / or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is executed in a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • a terminal device provided by an embodiment of the present application can perform the method in the embodiments shown in FIG. 3 to FIG. 4 described above.
  • the terminal device includes an image acquisition unit and a processing unit.
  • An image acquisition unit, configured to acquire an original image including a human face; a processing unit, configured to adjust the size of the ROI area on the original image to obtain at least two ROI images of different sizes, where the ROI area is the area where the wrinkles on the human face are located. The processing unit is further configured to process each of the at least two ROI images of different sizes to obtain at least two black-and-white images, where the white areas in each black-and-white image are areas where wrinkles may appear. The processing unit is further configured to fuse the at least two black-and-white images to obtain a final image; the white areas on the final image are recognized as wrinkles.
  • the modules / units can be implemented by hardware, or by hardware executing corresponding software.
  • the processing unit may be the processor 110 shown in FIG. 1, or the application processor 110-1 shown in FIG. 2, or other processors.
  • the image acquisition unit may be the camera 193 shown in FIG. 1 or FIG. 2, or may be another image acquisition unit connected to the terminal device.
  • An embodiment of the present application further provides a computer-readable storage medium. The storage medium may include a memory, and the memory may store a program. When the program is executed, an electronic device is caused to perform all the steps described in the method embodiments shown in FIG. 3 to FIG. 4 above.
  • Embodiments of the present application further provide a computer program product. When the computer program product runs on an electronic device, the electronic device is caused to perform all the steps of the methods described in FIG. 3 to FIG. 4.
  • each functional unit in the embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the first acquiring unit and the second acquiring unit may be the same unit or different units.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting".
  • the phrase "when it is determined" or "if (the stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (such as coaxial cable, optical fiber, or digital subscriber line) or wireless (such as infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a solid-state hard disk), or the like.
  • the method provided by the embodiments of the present application is introduced from the perspective of the terminal device as an execution subject.
  • the terminal device may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is executed as a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

一种皱纹检测方法和终端设备,该方法包括:终端设备获取原始图像,所述原始图像中包括人脸;终端设备调整所述原始图像上的ROI区域的尺寸,得到至少两张不同尺寸的ROI图像;其中,所述ROI区域为所述人脸上皱纹所在区域;终端设备对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理,得到至少两张黑白图像;其中,每张黑白图像中的白色区域为皱纹可疑出现区域;终端设备将所述至少两张黑白图像融合,得到最终图像;所述最终图像上的白色区域被识别为皱纹。通过这种方式,终端设备(比如手机、ipad)可以实现皮肤检测功能,只需采集包括人脸的图像,即可使用该皱纹检测方法检测皱纹,方便操作,提升用户体验。

Description

一种皱纹检测方法和终端设备
本申请要求于2018年11月19日提交中国国家知识产权局、申请号为201811375119.1、申请名称为“一种皱纹检测方法和终端设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端技术领域,尤其涉及一种皱纹检测方法和终端设备。
背景技术
随着人们生活品质的提升,越来越多的人尤其是女性开始关注自身的皮肤状况。其中,女性关注较多的是脸部的皮肤状况,比如眼角是否有鱼尾纹、面部是否有法令纹等,并且会根据这些皮肤状况选择使用不同的保养产品。
目前的市场上,虽然存在一些皮肤检测设备,例如皮肤检测仪等,但是这些皮肤检测设备的价格比较昂贵,并且操作非常复杂,需要在专业人员的指导下才能使用,普适性较差。
发明内容
本申请实施例提供一种皱纹检测方法和终端设备,用以自动检测用户皮肤上的皱纹,方便用户操作,提升用户体验。
第一方面,本申请实施例提供一种皱纹检测方法,该方法可由终端设备执行。该方法包括:终端设备获取原始图像,所述原始图像中包括人脸;所述终端设备调整所述原始图像上的ROI区域的尺寸,得到至少两张不同尺寸的ROI图像;其中,所述ROI区域为所述人脸上皱纹所在区域;所述终端设备对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理,得到至少两张黑白图像;其中,每张黑白图像中的白色区域为皱纹可疑出现区域;所述终端设备将所述至少两张黑白图像融合,得到最终图像;所述最终图像上的白色区域被识别为皱纹。
在本申请实施例中,终端设备(比如手机、ipad)可以实现皮肤检测功能。终端设备只需采集包括人脸的图像,即可使用上述皱纹检测方法检测皱纹,方便操作,提升用户体验,而且是对一张图像(包括人脸)上的ROI区域做至少两次不同处理,提高皱纹检测准确性。
在一种可能的设计中，所述终端设备对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理，得到至少两张黑白图像，包括：针对每张ROI图像，分别使用预设的至少一个矩阵重复执行如下步骤：所述终端设备使用预设矩阵覆盖在所述ROI图像上，确定所述ROI图像上与预设矩阵中的每个矩阵元素对应的像素点的像素值；所述终端设备确定所述每个矩阵元素和与所述每个矩阵元素对应的像素点的像素值的乘积；所述终端设备将每个矩阵元素对应的乘积求和；所述和为所述矩阵在所述ROI图像上覆盖的图像块的中心位置的像素值；若所述图像块的中心位置的像素值大于预设像素值，所述终端设备将所述中心位置设置为黑色，若所述图像块的中心位置的像素值小于等于所述预设像素值，所述终端设备将所述中心位置设置为白色。
在本申请实施例中，终端设备采用预设矩阵对每张ROI图像进行处理，确定每张ROI图像上的不同图像块的中心位置的像素值，将像素值较高的中心位置设置为黑色，将像素值较低的中心位置设置为白色。这样，终端设备得到的黑白图像中白色区域即为皱纹可疑出现区域，通过这种方式，提高皱纹检测准确性。
在一种可能的设计中,在所述终端设备将所述至少两张黑白图像融合,得到最终图像之前,所述终端设备还确定所述至少两张黑白图像中有M张图像在同一位置处存在白色区域,则删除所述M张图像中位于所述位置处的白色区域;其中,M小于等于预设值。
在本申请实施例中,终端设备融合至少两张黑白图像之前,还可以删除一些符合条件的白色区域(比如只有一张黑白图像上的某个位置上有白色区域,在其它黑白图像上的这个位置处没有白色区域,则删除该白色区域)。通过这种方式,提高皱纹检测准确性。
在一种可能的设计中,若所述皱纹为法令纹,所述最终图像上的白色区域被识别为皱纹之后,所述终端设备确定所述最终图像上胡须所在区域;所述终端设备确定与所述胡须所在区域相交的n个白色区域;所述终端设备确定所述n个白色区域中的第一白色区域位于所述胡须所在区域内的像素点个数和所述第一白色区域中所有像素点个数的比值;若所述比值大于等于预设比值,所述终端设备从所述最终图像中删除所述第一白色区域,所述最终图像上剩余的白色区域被识别为法令纹。
在本申请实施例中,终端设备识别出皱纹之后,还可以进一步识别法令纹。因此终端设备从识别出的皱纹中筛选法令纹(比如根据胡须所在区域筛选可能是法令纹的白色区域)。通过这种方式,提升识别法令纹的准确性。
在一种可能的设计中,若所述皱纹为法令纹,所述最终图像上的白色区域被识别为皱纹之后,所述终端设备确定所述最终图像中的鼻翼所在坐标位置;所述终端设备在所述最终图像中删除距离所述坐标位置预设距离范围内、且长度大于预设长度的白色区域,所述最终图像上剩余的白色区域被识别为法令纹。
在本申请实施例中,终端设备还可以根据鼻翼的位置确定可能是法令纹的白色区域,提升检测法令纹的准确性。
在一种可能的设计中,所述预设比值为1-n/m,其中,m为预设的固定值。
在一种可能的设计中，在所述终端设备调整所述ROI图像的尺寸，得到至少两张不同尺寸的ROI图像之前，所述终端设备将所述ROI图像转换成灰度图像；所述终端设备将所述灰度图像作水平调整；所述终端设备对水平调整后的图像作去噪处理。
在本申请实施例中，终端设备在调整ROI图像的尺寸之前，还可以对ROI图像进行预处理，预处理过程包括灰度处理、水平调整、去噪处理等，通过预处理，提升皱纹检测的准确性。
在一种可能的设计中,所述最终图像上的白色区域被识别为皱纹之后,所述终端设备基于如下公式确定所述白色区域的评价结果y;
y=w1*x1+w2*x2+w3*x3+w4*x4+w5*x5+w6*x6+b
其中,x1代表所述白色区域的平均宽度;x2代表所述白色区域的平均长度;x3代表所述白色区域的平均内外颜色对比度;x4代表所述白色区域的像素点个数占所述ROI图像像素点总数的比值;x5、x6分别代表所述白色区域中最长的白色区域的长度和宽度;b代表偏移量。
在本申请实施例中,终端设备检测处皱纹之后,可以对皱纹评价。通过上述公式,可以较为准确的确定皱纹的评价结果。
在一种可能的设计中,在终端设备获取原始图像之前,所述终端设备检测到第一操作,运行第一应用,打开摄像头,显示取景界面;所述终端设备在所述最终图像中识别出皱纹之后,在所述取景界面中显示提示信息,所述提示信息用于提示所述皱纹在人脸中的位置。
在本申请实施例中，终端设备可以将皱纹检测功能集成在第一应用中，第一应用可以是终端设备自带的应用比如相机应用或者单独用于检测皮肤的应用，也可以是终端设备在使用过程中从网络侧下载的应用。终端设备识别出皱纹之后，可以提示用户皱纹在人脸中的位置。通过这种方式，方便人尤其是女性管理皮肤，方便操作，提升用户体验。
在一种可能的设计中,在终端设备获取原始图像之前,所述终端设备处于锁屏状态;所述终端设备在所述最终图像中识别出皱纹之后,所述终端设备将所述皱纹与预存的图像中的皱纹比较;若一致,所述终端设备解锁屏幕。
在本申请实施例中，皱纹检测功能可以应用在人脸解锁领域。当终端设备采集到图像，然后识别出图像中的皱纹后，可以与预存的图像中的皱纹比较，若一致，则解锁设备。通过这种方式，提高设备安全性。
在一种可能的设计中,在终端设备获取原始图像之前,所述终端设备显示支付验证界面;
所述终端设备在所述最终图像中识别出皱纹之后,所述终端设备将所述皱纹与预存的图像中的皱纹比较;若一致,所述终端设备执行支付流程。
在本申请实施例中，皱纹检测功能可以应用在刷脸支付领域。当终端设备采集到图像，然后识别出图像中的皱纹后，可以与预存的图像中的皱纹比较，若一致，则执行支付流程。通过这种方式，提高支付安全性。
在一种可能的设计中,当所述终端设备未检测到皱纹时,输出提示信息,以提示用户未检测到皱纹。
在本申请实施例中，终端设备未识别出皱纹时，也可以提示用户未识别出皱纹。通过这种方式，方便人尤其是女性管理皮肤，方便操作，提升用户体验。
第二方面,本申请实施例还提供一种终端设备。该终端设备包括摄像头、处理器和存储器;所述摄像头:用于采集原始图像,所述原始图像中包括人脸;所述存储器用于存储一个或多个计算机程序;当所述存储器存储的一个或多个计算机程序被所述处理器执行时,使得所述终端设备能够实现第一方面及其第一方面任一可能设计的技术方案。
第三方面，本申请实施例还提供了一种终端设备，所述终端设备包括执行第一方面或者第一方面的任意一种可能的设计的方法的模块/单元；这些模块/单元可以通过硬件实现，也可以通过硬件执行相应的软件实现。
第四方面,本申请实施例的一种芯片,所述芯片与电子设备中的存储器耦合,执行本申请实施例第一方面及其第一方面任一可能设计的技术方案;本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
第五方面,本申请实施例的一种计算机可读存储介质,所述计算机可读存储介质包括计算机程序,当计算机程序在终端设备上运行时,使得所述终端设备执行本申请实施例第一方面及其第一方面任一可能设计的技术方案。
第六方面,本申请实施例的中一种计算机程序产品,包括指令,当所述计算机程序产品在终端设备上运行时,使得所述终端设备执行本申请实施例第一方面及其第一方面任一可能设计的技术方案。
附图说明
图1为本申请实施例提供的一种手机100的示意图;
图2为本申请实施例提供的手机100的示意图;
图3为本申请实施例提供的皱纹检测方法的流程示意图;
图4为本申请实施例提供的皱纹检测方法的流程示意图;
图5为本申请实施例提供的原始图像上法令纹检测区域的示意图;
图6为本申请实施例提供的从原始图像上分割出的ROI图像的示意图;
图7为本申请实施例提供的对ROI图像进行预处理的流程示意图；
图8为本申请实施例提供的对ROI图像的尺寸调整的流程示意图;
图9为本申请实施例提供的矩阵覆盖ROI图像的一个图像块的示意图;
图10为本申请实施例提供的一种15*15矩阵的示意图;
图11为本申请实施例提供的最终图像的示意图;
图12为本申请实施例提供的从最终图像中筛选出法令纹的流程示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
本申请实施例涉及的应用(application,app),为能够实现某项或多项特定功能的计算机程序。通常,终端设备中可以安装多个应用程序。比如,相机应用、短信应用、彩信应用、各种邮箱应用、聊天软件应用、WhatsApp Messenger、连我(Line)、照片分享(instagram)、Kakao Talk、钉钉应用等。下文中提到的应用,可以是终端出厂时自带的应用,也可以是用户在使用终端的过程中从网络侧下载的应用。本申请实施例提供的皱纹检测功能可以集成在一个或多个应用中,比如集成在相机应用或者微信应用中。以相机应用为例,终端设备启动相机应用,显示取景界面,取景界面中可以包括一控件,该控件被激活时,终端设备可以启动本申请实施例提供的皱纹检测功能。以微信应用为例,终端设备显示微信的表情包制作界面时,该表情包制作界面中显示一控件,当该控件被激活时,终端设备也可以启动本申请实施例提供的皱纹检测功能。
本申请实施例涉及的像素,为一张图像上的最小成像单元。一个像素可以对应图像上的一个坐标点。像素可以包括一个参数(比如灰度),也可以是多个参数的集合(比如灰度、亮度、颜色等)。如果像素包括一个参数,那么像素值就是该参数的取值,如果像素是多个参数的集合,那么像素值包括所述集合中每个参数的取值。
本申请实施例涉及的原始图像,是摄像头中的镜头组采集待拍摄物体(比如人脸)反射的光信号后,根据所述光信号生成待拍摄物体的图像。即原始图像是包括人脸的图像,但该图像未经过处理。
本申请实施例涉及的感兴趣(region of interest,ROI)区域,是终端设备从原始图像中确定的部分区域,该部分区域是皱纹所在区域,被称为ROI区域。以原始图像是人脸图像为例,终端设备确定原始图像中的法令纹所在区域,该法令纹所在区域即ROI区域。
本申请实施例涉及的ROI图像,上述的ROI区域是原始图像上的部分区域,但ROI图像是终端设备将ROI区域从原始图像上分割下来的图像(ROI区域构成的图像)。需要说明的是,本申请实施例不限定皱纹所在区域的名称,比如皱纹所在区域除了可被称为ROI区域还可以被称为其它名称,当然也不限定ROI图像的名称。
需要说明的是,原始图像或者上述ROI图像可以作为本申请实施例提供的皱纹检测算法的输入图像。通过本申请实施例提供的皱纹检测算法检测输入图像中的皱纹。
本申请实施例涉及的多个,是指大于或等于两个。
需要说明的是,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,如无特殊说明,一般表示前后关联对象是一种“或”的关系。且在本发明实施例的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
以下介绍终端设备、用于这样的终端设备的图形用户界面（graphical user interface，GUI）、和用于使用这样的终端设备的实施例。在本申请一些实施例中，终端设备可以是便携式设备，诸如手机、平板电脑、具备无线通讯功能的可穿戴设备（如智能手表）等。便携式终端具备图像采集功能和算法运算能力（能够运行本申请实施例提供的皱纹检测算法）。便携式设备的示例性实施例包括但不限于搭载（原文此处为操作系统商标图）或者其它操作系统的便携式设备。上述便携式设备也可以是其它便携式设备，只要能够实现图像采集功能和算法运算能力（能够运行本申请实施例提供的皱纹检测算法）即可。还应当理解的是，在本申请其他一些实施例中，上述终端设备也可以不是便携式设备，而是能够实现图像采集功能和算法运算能力（能够运行本申请实施例提供的皱纹检测算法）的台式计算机。
在本申请另一些实施例中,终端设备还可以具有算法运算能力(能够运行本申请实施例提供的皱纹检测算法)和通信功能,而无需具有图像采集功能。比如,终端设备接收其它设备发送的图像,然后运行本申请实施例提供的皱纹检测算法检测所述图像中的皱纹。在下文中,以终端设备自身具有图像采集功能和算法运算功能为例。
以终端设备是手机为例,图1示出了手机100的结构示意图。
手机100可以包括处理器110，外部存储器接口120，内部存储器121，通用串行总线(universal serial bus,USB)接口130，充电管理模块140，电源管理模块141，电池142，天线1，天线2，移动通信模块151，无线通信模块152，音频模块170，扬声器170A，受话器170B，麦克风170C，耳机接口170D，传感器模块180，按键190，马达191，指示器192，摄像头193，显示屏194，以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括距离传感器180F，接近光传感器180G，指纹传感器180H，温度传感器180J，触摸传感器180K(当然，手机100还可以包括其它传感器，比如压力传感器、加速度传感器、陀螺仪传感器、环境光传感器、骨传导传感器等，图中未示出)。
可以理解的是,本发明实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
其中,处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。处理器110可以运行本申请实施例提供的皱纹检测算法,以检测图像上的皱纹。
手机100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，手机100可以包括1个或N个显示屏194，N为大于1的正整数。
摄像头193(前置摄像头或者后置摄像头)用于捕获静态图像或视频。通常，摄像头193可以包括感光元件比如镜头组和图像传感器，其中，镜头组包括多个透镜(凸透镜或凹透镜)，用于采集待拍摄物体(比如人脸)反射的光信号，并将采集的光信号传递给图像传感器。图像传感器根据所述光信号生成待拍摄物体的图像(比如人脸图像)。以人脸图像为例，摄像头193采集到人脸图像后，可以将人脸图像发送给处理器110，处理器110运行本申请实施例提供的皱纹检测算法，检测人脸图像上的皱纹。处理器110确定出人脸图像上的皱纹之后，显示屏194可以显示该皱纹的提示信息，该提示信息用于提示用户皱纹存在，或者提示用户皱纹的所在位置等等。
另外,图1所示的摄像头193可以包括1-N个摄像头。如果包括一个摄像头(或者包括多个摄像头,但是同一时刻只有一个摄像头开启),手机100对该摄像头(或者当前时刻开启的摄像头)采集的人脸图像进行皱纹检测即可。如果包括多个摄像头,且多个摄像头同时开启,手机100可以对每个摄像头(开启的摄像头)采集的人脸图像均进行皱纹检测。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,应用程序(比如相机应用,微信应用等)的代码等。存储数据区可存储手机100使用过程中所创建的数据(比如相机应用采集的图像、视频等)等。内部存储器121还可以存储本申请实施例提供的皱纹检测算法的代码。当内部存储器121中存储的皱纹检测算法的代码被处理器110运行时,实现皱纹检测功能。
此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
当然,本申请实施例提供的皱纹检测算法的代码还可以存储在外部存储器中。这种情况下,处理器110可以通过外部存储器接口120运行存储在外部存储器中的皱纹检测算法的代码,以实现相应的皱纹检测功能。
下面介绍传感器模块180的功能。
距离传感器180F,用于测量距离。手机100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,手机100可以利用距离传感器180F测距以实现快速对焦。在另一些实施例中,手机100还可以利用距离传感器180F检测是否有人或物体靠近。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机100通过发光二极管向外发射红外光。手机100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机100附近有物体。当检测到不充分的反射光时,手机100可以确定手机100附近没有物体。手机100可以利用接近光传感器180G检测用户手持手机100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,手机100利用温度传感器180J检测的温度,执行温度处理策略。
触摸传感器180K，也称“触控面板”。触摸传感器180K可以设置于显示屏194，由触摸传感器180K与显示屏194组成触摸屏，也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器，以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中，触摸传感器180K也可以设置于手机100的表面，与显示屏194所处的位置不同。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块151,无线通信模块152,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。终端设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块151可以提供应用在终端设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块151可以包括至少一个滤波器，开关，功率放大器，低噪声放大器(low noise amplifier,LNA)等。移动通信模块151可以由天线1接收电磁波，并对接收的电磁波进行滤波，放大等处理，传送至调制解调处理器进行解调。移动通信模块151还可以对经调制解调处理器调制后的信号放大，经天线1转为电磁波辐射出去。在一些实施例中，移动通信模块151的至少部分功能模块可以被设置于处理器110中。在一些实施例中，移动通信模块151的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中，调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后，被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A，受话器170B等)输出声音信号，或通过显示屏194显示图像或视频。在一些实施例中，调制解调处理器可以是独立的器件。在另一些实施例中，调制解调处理器可以独立于处理器110，与移动通信模块151或其他功能模块设置在同一个器件中。
无线通信模块152可以提供应用在终端设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块152可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块152经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块152还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在本申请一些实施例中，手机100可以通过移动通信模块151或者无线通信模块152接收其它设备发送的人脸图像，然后运行本申请实施例提供的皱纹检测算法，检测人脸图像中的皱纹。这种情况下，手机100本身可以不具有图像采集功能。
另外，手机100可以通过音频模块170，扬声器170A，受话器170B，麦克风170C，耳机接口170D，以及应用处理器等实现音频功能。例如音乐播放，录音等。手机100可以接收按键190输入，产生与手机100的用户设置以及功能控制有关的键信号输入。手机100可以利用马达191产生振动提示(比如来电振动提示)。手机100中的指示器192可以是指示灯，可以用于指示充电状态，电量变化，也可以用于指示消息，未接来电，通知等。手机100中的SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195，或从SIM卡接口195拔出，实现和手机100的接触和分离。
应理解,在实际应用中,手机100可以包括比图1所示的更多或更少的部件。
为了方便描述本申请实施例提供的皱纹检测算法,下文将通过与本申请实施例提供的皱纹检测算法相关的部件介绍本申请实施例的皱纹检测算法,具体请参见图2,图2中的部件可参考关于图1的相关描述。需要说明的是,在图2中,以处理器110集成应用处理器110-1为例。
在本申请一些实施例中,通过图2所示的手机100检测皱纹可以是如下的过程:
以皱纹检测功能集成于相机应用中为例，手机100的显示屏194(请参见图1所示)显示主界面，主界面中包括多个应用(比如相机应用、微信应用等)的图标。用户通过触摸传感器180K点击主界面中相机应用的图标，触发应用处理器110-1启动相机应用，打开摄像头193，显示屏194显示相机应用的界面，例如取景界面。摄像头193中的镜头组193-1采集待拍摄物体(比如人脸)反射的光信号，并将采集的光信号传递给图像传感器193-2。图像传感器193-2根据所述光信号生成待拍摄物体的原始图像(原始图像中包括人脸)。
图像传感器193-2将原始图像发送给应用处理器110-1。应用处理器110-1运行本申请实施例提供的皱纹检测算法的代码(比如,应用处理器110-1运行存储在内部存储器121中的皱纹检测算法的代码),检测原始图像中的皱纹。应用处理器110-1检测到皱纹之后,可以输出提示信息(比如在取景界面中显示文字信息),该提示信息可以用于标注原始图像上的皱纹所在位置等。当然,应用处理器110-1未检测到皱纹后,也可以输出提示信息,用于提示用户未检测到皱纹。
应理解,本申请实施例提供的皱纹检测方法可以适用于任意身体部分(比如面部)上的皱纹(比如面部的法令纹、鱼尾纹等)检测。下文中将以检测面部的法令纹为例进行说明。
请参见图3所示,为应用处理器110-1运行皱纹检测算法的代码检测皱纹的流程。如图3所示,应用处理器110-1从原始图像(包括人脸)中分割法令纹检测区域,得到ROI图像。应用处理器110-1调整ROI图像的尺寸,得到至少两张不同尺度的ROI图像(图3中以3张为例)。应用处理器110-1根据直线算子检测算法对每张不同尺度的ROI图像进行处理,得到三张黑白图像,每张黑白图像中包括条纹。应用处理器110-1将三张黑白图像融合,得到最终图像,最终图像中包括至少一条条纹。应用处理器110-1从最终图像中筛选出一条或多条条纹,该一条或多条条纹即法令纹。
通过以上描述可知，本申请实施例提供的皱纹检测方法，无需繁琐的操作流程，只需拍一张包括人脸的图像即可检测皱纹，方便操作；且皱纹检测功能可以集成在手机、ipad等便携式终端设备上，普适性强。
下面继续以检测原始图像上的法令纹为例，介绍应用处理器110-1运行本申请实施例提供的皱纹检测算法的代码，检测原始图像上的皱纹的过程。请参见图4所示，为本申请实施例提供的皱纹检测方法的流程示意图。如图4所示，该流程包括：
S401:应用处理器110-1确定原始图像上的ROI区域,所述ROI区域为法令纹检测区域。
可选的，S401可以通过以下几步实现：
第一步，应用处理器110-1根据关键点检测算法确定原始图像上的关键点；所述关键点为用于指示原始图像中人脸的特征部位的点，特征部位包括人脸的眼睛、眉毛、鼻子、嘴巴、脸部轮廓等。
其中,关键点检测算法可以是基于深度学习的人脸关键点检测算法或者其他算法,本申请实施例不限定。
请参见图5所示,为本申请实施例提供的原始图像上关键点的示意图。如图5中(a)所示,原始图像上的多个关键点(应用处理器110-1可以为每个关键点标号)分布于特征部位(眼睛、眉毛、鼻子、嘴巴、脸部轮廓)处。
第二步,应用处理器110-1根据关键点确定法令纹检测区域。
请继续参见图5中的(a)所示,应用处理器110-1根据关键点44、48、54、14、13、128等,确定法令纹检测区域,即ROI区域,即图5中的(b)中椭圆框中的区域。
上述的第一步和第二步介绍应用处理器110-1确定原始图像上的法令纹检测区域的一种可能的实现方式。在实际应用中,应用处理器110-1还可以有其它的方式来确定图像上的法令纹检测区域,本申请实施例不限定。
S402:应用处理器110-1从原始图像上分割出ROI区域,得到ROI图像,并调整ROI图像的尺寸,得到至少两张不同尺寸的ROI图像。
请参见图5中的(b)所示,应用处理器110-1确定出ROI区域(椭圆框内区域)后,可以将ROI区域分割出来,得到一张ROI图像,请参见图6所示(通常,图6中的ROI图像是彩色的)。这样的话,在后续过程中,应用处理器110-1只需处理分割出来的区域即可,无需对整张图像进行处理,可以节省计算量。
应用处理器110-1分割出ROI区域得到一张ROI图像后，可以对该ROI图像进行缩放，得到至少两张不同尺寸的ROI图像。
可选的,应用处理器110-1还可以对ROI图像进行预处理。ROI图像的预处理的过程可以在应用处理器将ROI图像分割出来之前,或者之后执行(若在将ROI图像分割出来之前执行,可以只对ROI区域进行预处理,也可以对整张图像进行预处理)。当然,应用处理器110-1也可以在应用处理器110-1对ROI图像进行缩放,得到至少两张不同尺寸的ROI图像后,对每张不同尺寸的ROI图像作预处理。
这里以应用处理器110-1将ROI图像分割出来后,对每个ROI图像预处理为例,应用处理器110-1对ROI图像的预处理过程可以如下:
请参见图7所示,为本申请实施例提供的ROI图像的预处理流程示意图。
第一步,应用处理器110-1对ROI图像作灰度处理,即将ROI图像由彩色图像转换为灰度图像(如前述内容图6中的ROI图像是彩色的,所以图7中第一步是将彩色的ROI图像转换为灰度的)。第二步,应用处理器110-1将ROI图像作水平调整。第三步,应用处理器110-1对ROI图像作去噪处理。在第三步中,应用处理器110-1可以采用现有技术的方式比如高斯滤波法对ROI图像进行滤波处理。
图7只是示出应用处理器110-1对ROI图像进行预处理的一种示例,在实际应用中,除了图7所示的预处理步骤,还可以包括其他预处理步骤,比如过滤ROI图像中毛孔、微小毛发的影响等,本申请实施例不限定。
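上述预处理中的灰度处理步骤，可以用下面一段示意性的Python代码说明（仅为示例草图：其中的加权系数采用常见的ITU-R BT.601亮度系数，本申请正文并未限定具体的灰度转换方式）：

```python
def to_gray(rgb_image):
    """将RGB彩色图像转换为灰度图像。

    rgb_image: 二维列表，每个元素为(r, g, b)三元组，取值0-255。
    加权系数采用常见的ITU-R BT.601亮度系数（示例假设，
    本申请正文并未限定具体的灰度转换系数）。
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]


# 示例：一张2x2的小图
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_gray(img)
```

得到灰度图后，即可按正文所述继续做水平调整与去噪处理。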
S403:应用处理器110-1对每张尺寸不同的ROI图像做直线算子滤波处理,得到至少两张黑白图像。
如前述内容可知,应用处理器110-1将ROI图像的尺寸调整,得到至少两张不同尺寸的ROI图像。下面以应用处理器110-1调整ROI图像的尺寸,得到三张不同尺寸的ROI图像为例。请参见图8所示,应用处理器110-1对三张不同尺寸的ROI图像中每张ROI图像作直线算子滤波处理,得到三张黑白图像。
下面以三张不同尺寸的ROI图像中的一张ROI图像为例,介绍应用处理器110-1对这张ROI图像,作直线算子滤波处理,得到一张黑白图像的过程,另外两张黑白图像的处理过程相同,这里不再重复赘述。
请参见图9所示,为本申请实施例提供的对ROI图像作直线算子滤波处理的示意图。具体而言,对ROI图像作直线算子滤波处理的流程如下:
第一步,应用处理器110-1将直线算子(预设矩阵)“设置”在ROI图像上,即预设矩阵“覆盖”ROI图像上的一个图像块。
如图9所示,直线算子是矩阵形式,该矩阵中每个矩阵元素可以对应一个坐标位置。比如,如图9所示,矩阵为3*3矩阵。其中,第1行第1列的矩阵元素(即取值为1)对应坐标(x1、y1),第1行第2列的矩阵元素(即1)对应坐标(x2、y1),以此类推。
第二步,应用处理器110-1确定每个矩阵元素的位置坐标对应的像素值。
请继续参见图9所示,矩阵中每个矩阵元素对应该矩阵所覆盖的图像块上的一个像素点。比如,第1行第1列的矩阵元素对应图像块上坐标是(x1、y1)的像素点;第1行第2列的矩阵元素对应图像块上坐标是(x2、y1)的像素点,以此类推。因此,应用处理器110-1确定与第1行第1列的矩阵元素对应的像素值(假设该像素值为p11),确定与第1行第2列的矩阵元素对应的像素值(假设该像素值为p12),以此类推。
第三步,应用处理器110-1将每个矩阵元素的取值和与该矩阵元素对应的像素值相乘。比如,第1行第1列的矩阵元素的取值为1,该矩阵元素对应的像素值为p11,则乘积为1*p11;第1行第2列的矩阵元素的取值为1,该矩阵元素对应的像素值为p12,则乘积为1*p12,以此类推,应用处理器110-1会得到3*3=9个乘积。
第四步,应用处理器110-1将9个乘积求和,得到该矩阵所覆盖的图像块上中心位置的像素值。
其中,第三步和第四步的公式如下:
1*p11+1*p12+1*p13+0*p21+0*p22+0*p23+1*p31+1*p32+1*p33
其中,p11、p12、p13、p21、p22、p23、p31、p32、p33分别是每个矩阵元素对应的像素点的像素值。
通过上述第一步到第四步,应用处理器110-1可以确定矩阵(直线算子)所覆盖的图像块的中心位置的像素值。
需要说明的是，上述第一步到第四步是以矩阵是3*3矩阵为例，实际上，矩阵可以是n*n矩阵(n为3、5、7、9、11、15等大于等于3的奇数)。请参见图10所示，为15*15矩阵的示意图。其中，每一行的矩阵元素的取值相同，比如第一行都是1。从矩阵中心行(比如第8行)向其它行，矩阵元素的取值增加，即中心行上的矩阵元素的取值最小。
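按上述描述（每一行取值相同、中心行取值最小、离中心行越远取值越大），可以用如下示意性的Python代码构造一个n*n的直线算子。注意：具体取值方式只是符合该描述的一种假设构造，图10中矩阵的真实数值可能不同：

```python
def line_operator(n):
    """构造一个n*n的直线算子（n为大于等于3的奇数）。

    每一行的矩阵元素取值相同，中心行取值最小（此处取0），
    离中心行越远取值越大（此处取与中心行的行距）。
    该取值方式仅为符合正文描述的一种假设构造。
    """
    assert n % 2 == 1 and n >= 3
    center = n // 2
    return [[abs(i - center)] * n for i in range(n)]
```

当n=3时，该构造恰好得到图9所示的3*3算子[[1,1,1],[0,0,0],[1,1,1]]。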
以上四步介绍了应用处理器110-1将矩阵设置在ROI图像上某一个位置处时,确定该矩阵所覆盖的图像块的中心位置的像素值的过程。当矩阵从该位置移动到下一位置时,该矩阵会重新覆盖另一个图像块。应用处理器110-1可以按照上述类似的方式确定下一个图像块中心位置的像素值。因此,应用处理器110-1会得到多个由矩阵覆盖不同图像块所确定出的中心位置的像素值。
第五步,应用处理器110-1将多个中心位置的像素值中大于预设像素值(比如300)的坐标点设置为黑色,将小于等于预设像素值的坐标点设置为白色。因此,应用处理器110-1将ROI图像转换为黑白图像。
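上述第一步到第五步的滤波与二值化过程，可以概括为下面一段示意性的Python代码（仅为草图：边界像素的处理方式、阈值300均沿用正文示例，实际实现可能不同）：

```python
def line_filter(gray, op, thresh=300):
    """用直线算子op对灰度图gray作滤波，得到黑白图（0为黑，255为白）。

    对算子覆盖的每个图像块：将每个矩阵元素与对应像素的像素值相乘，
    再把所有乘积求和，作为该图像块中心位置的响应值；响应值大于
    thresh 的中心位置置为黑(0)，否则置为白(255)。
    边界处凑不满一个算子的位置在本示例中直接置黑；阈值300沿用正文示例。
    """
    h, w = len(gray), len(gray[0])
    k = len(op)          # 算子为 k*k 方阵，k 为奇数
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for cy in range(r, h - r):
        for cx in range(r, w - r):
            s = sum(op[i][j] * gray[cy - r + i][cx - r + j]
                    for i in range(k) for j in range(k))
            out[cy][cx] = 0 if s > thresh else 255
    return out


# 图9所示的3*3直线算子
op3 = [[1, 1, 1],
       [0, 0, 0],
       [1, 1, 1]]
```

对同一张ROI图像换用不同尺寸的算子（或者按正文做法缩放ROI图像、算子不变），即可得到多张黑白图供后续融合。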
需要说明的是，在上述过程中，是以对ROI图像进行缩放、保持矩阵不变来确定ROI图像上不同图像块的中心位置的像素点。在实际应用中，还可以保持ROI图像不变，即不调整ROI图像的尺寸，而改变矩阵。比如，手机100中可以存储有多个矩阵：3*3矩阵、5*5矩阵和15*15矩阵，然后分别采用3*3矩阵、5*5矩阵和15*15矩阵执行上述过程(对同一张ROI图像采用三个矩阵分别执行)。
以上介绍了手机100对图8中三张不同尺寸的ROI图像中的一张ROI图像进行处理，得到黑白图像的过程，对于其他两张ROI图像，也采用类似的方式，得到两张黑白图像。因此，应用处理器110-1共得到三张黑白图像(请参见图8所示)。每张黑白图像上的白色区域(下文称之为条纹)即为法令纹可能出现的区域。
S404:应用处理器110-1将至少两张黑白图像融合,得到一张最终图像。
应用处理器110-1在融合图像的过程中,若至少两张黑白图像中有M张(M大于等于2)黑白图像在同一位置上均出现条纹,则保留该条纹。若至少两张黑白图像中只有一张黑白图像在某个位置上出现条纹,在其它黑白图像上在该位置处没有条纹,则删除该条纹。因此,应用处理器110-1得到的一张最终图像上包括至少一条条纹。请参见图11所示,为本申请实施例提供的最终图像的示意图,如图11所示,图像中包括白色区域(条纹)。
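上述融合规则（同一位置至少在M张黑白图上出现条纹才保留，否则删除）可以用如下示意性的Python代码表达。此处假设各黑白图已缩放回同一尺寸（正文未展开该细节），并以M大于等于2为例：

```python
def fuse(bw_images, min_votes=2):
    """融合多张同尺寸的黑白图（0为黑，255为白）。

    只有当至少 min_votes 张黑白图在同一位置都出现白色条纹时，
    最终图在该位置才保留白色；只在一张图上出现的条纹被删除。
    此处假设各黑白图已缩放回同一尺寸（示例假设）。
    """
    h, w = len(bw_images[0]), len(bw_images[0][0])
    out = [[0] * w for _ in range(h)]
    for yy in range(h):
        for xx in range(w):
            votes = sum(1 for img in bw_images if img[yy][xx] == 255)
            if votes >= min_votes:
                out[yy][xx] = 255
    return out
```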
S405:应用处理器110-1对最终图像上的条纹进行筛选,确定法令纹。
可选的,S405的过程可以如下:
第一步:应用处理器110-1过滤掉最终图像中像素点个数小于预设像素点个数的条纹。如图12中的(a)所示,应用处理器110-1将最终图像中像素点个数较少(白色区域面积较小)的条纹过滤,得到如图12中的(b)所示的图像。
第二步，毛发(胡须)去除。应用处理器110-1在图12中的(b)所示的图像中确定胡须可能出现的区域，比如图12中(b)中的白色框区域(需要说明的是，白色框区域不同于前述的白色区域即条纹，前述白色区域是将像素点的像素值设置为白色的，而白色框区域是为了方便读者理解，将图像中胡须可能出现的区域标注出来的框)。应用处理器110-1确定与该白色框区域相交的所有白色区域(条纹)；并确定所述所有条纹中每条条纹在该白色框区域内部的像素数占该条纹所有像素个数的比值。比如，某一个条纹与白色框区域相交，该条纹位于白色框区域内的像素点个数为K，该条纹内的所有像素点个数为J，则比值为K/J。每个与白色框区域相交的条纹具有对应的一个比值。
若一个条纹对应的比值大于预设比值，则应用处理器110-1确定该条纹是胡须，删除该条纹。其中，预设比值可以是(1-n/m)。其中n为所有与该白色框区域相交的条纹的个数，m为一个固定值，例如10。
需要说明的是，通常，手机100采集的原始图像包括人脸时，在人脸的法令纹区域会有胡须或毛发。原始图像被转换成黑白图像后，胡须或毛发转换成条纹(白色区域)。因此，为了提高法令纹检测的准确性，手机100可以过滤掉胡须或毛发对应的条纹。上述预设比值的公式中，n越大，即与白色框区域相交的条纹越多，认为胡须较多，所以预设比值越小，只要某条纹对应的比值大于预设比值，就删除该条纹，提升毛发(胡须)去除的准确性。
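上述毛发（胡须）去除逻辑可以概括为如下示意性的Python代码（仅为草图：条纹以像素坐标集合表示，胡须区域以矩形近似，均为示例假设）：

```python
def remove_beard_stripes(stripes, box, m=10):
    """根据胡须区域过滤条纹。

    stripes: 条纹列表，每条条纹是一组 (x, y) 像素坐标（示例表示方式）。
    box: 胡须所在矩形区域 (x0, y0, x1, y1)，含边界（矩形仅为近似假设）。
    只考察与 box 相交的条纹（共 n 条），预设比值取 1 - n/m；
    某条纹落在 box 内的像素占比大于等于该预设比值时，视为胡须并删除。
    """
    x0, y0, x1, y1 = box

    def inside(p):
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    crossing = [s for s in stripes if any(inside(p) for p in s)]
    n = len(crossing)
    preset = 1 - n / m
    kept = []
    for s in stripes:
        if s in crossing:
            ratio = sum(1 for p in s if inside(p)) / len(s)
            if ratio >= preset:
                continue  # 判定为胡须，删除该条纹
        kept.append(s)
    return kept
```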
第三步，鼻翼定位。如图12中的(c)所示，应用处理器110-1在距离图像右侧t倍(t<1，一般取0.2)图像宽的距离处(如白色竖直实线到图像右侧边框的距离，需要说明的是白色实线不同于前述的白色区域即条纹，是为了方便读者理解而标注出来的直线)，从下往上，找到第一个面积大于一定预设面积的条纹，即为鼻翼。
通常鼻翼在图像中所占的面积较大，而且在前述过程中，应用处理器110-1将ROI图像水平调整后，鼻翼位于图像右侧，所以应用处理器110-1在图像右侧区域确定鼻翼位置，提升鼻翼定位的准确性。当然，在实际应用中，应用处理器110-1还可以有其他方式确定鼻翼的位置。
第四步，法令纹筛选。应用处理器110-1从图12中(c)所示的图像中选出长度大于预设长度的条纹(一条或多条)。以鼻翼上方水平线(如图12中(d)白色水平虚线所示)为界，应用处理器110-1确定位于白色水平虚线上下一定阈值范围内的条纹。其中，图12中(d)中的白色水平虚线可以是图12中(c)白色竖直实线的中垂线，或者鼻翼上方与白色竖直实线垂直的直线。
可选的，当应用处理器110-1确定位于白色水平虚线上下一定阈值范围内的条纹有多条时，可以删除位于图像左侧或者左上方的条纹，最终确定出法令纹(因为法令纹出现在图像右下方的概率大于左上方的概率)。
以上内容描述手机100检测原始图像中人脸的法令纹的过程。在本申请另一些实施例中,手机100检测到法令纹之后,还可以对检测到的法令纹评价。比如,手机100中处理器110可以对法令纹的严重程度给出打分。作为一种示例,处理器110的打分过程如下:
应用处理器110-1基于预设公式对法令纹打分,其中,预设公式为:
法令纹得分y=w1*x1+w2*x2+w3*x3+w4*x4+w5*x5+w6*x6+b
其中,x1代表所有条纹平均宽度;x2代表所有条纹平均长度;x3代表所有条纹平均内外颜色对比度;x4代表所有条纹像素点个数占ROI图像像素点总个数的比值;x5代表最长的条纹的长度;x6代表最长的条纹的宽度;b代表偏移量。
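上述打分公式可以直接写成如下示意性的Python函数（权重w1-w6与偏移量b的取值需按正文方法事先确定，此处仅演示公式本身）：

```python
def nasolabial_score(features, weights, b):
    """按正文公式计算法令纹得分 y = w1*x1 + ... + w6*x6 + b。

    features: [x1,...,x6]（依次为条纹平均宽度、平均长度、
    平均内外颜色对比度、条纹像素占比、最长条纹长度、最长条纹宽度）；
    weights: [w1,...,w6]；b 为偏移量。
    """
    return sum(w * x for w, x in zip(weights, features)) + b
```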
其中,w1-w6、b的取值可以是预先设置好的,比如,可以根据如下方式确定w1-w6、b的取值。
在手机100出厂之前，设计人员采集一张图像，通过本申请实施例提供的皱纹检测算法对该图像进行测试。设计人员人工为该图像中法令纹打分，即法令纹得分y已知。通过上述皱纹检测方法检测出一条或多条条纹(法令纹)后，可以确定x1-x6的取值。这样的话，对于一张图像，对应一个y、一组x1-x6，那么多张图像中每张图像对应一个y、一组x1-x6。对于第i张图像，令Xi=[x1,x2,…,x6,1]^T，是一个7x1的向量，上标T表示转置，yi表示此图的法令纹得分，W=[w1,w2,…,w6,b]^T。假设有k张图像，得到矩阵X=[X1,X2,…,Xk]^T，是一个kx7的样本矩阵(已知的)，对应的得分Y=[y1,y2,…,yk]^T(已知的)，求解方程Y=X*W，可得W=(X^T*X)^(-1)*X^T*Y。
因此,通过上述过程可以确定未知的w1-w6、b。确定w1-w6、b的取值后,将其存储到手机100中的内部存储器121中(如图1所示),在后续打分过程中处理器110就可以根据需要从内部存储器121中读取w1-w6、b的取值进行使用。
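上述最小二乘求解W=(X^T*X)^(-1)*X^T*Y的过程，可以用如下纯Python的示意实现说明（通过求解正规方程(X^T X)W = X^T Y实现，数学上与正文公式等价；仅为示例草图，并非本申请限定的求解方式）：

```python
def fit_weights(X_rows, y):
    """按正文方法，用最小二乘拟合 w1..w6 与偏移量 b（纯Python示意实现）。

    X_rows: k 个样本，每个为 [x1,...,x6]；y: k 个人工打分。
    在每个样本末尾补 1 以吸收偏移量 b，然后求解正规方程
    (X^T X) W = X^T Y，等价于正文中的 W=(X^T*X)^(-1)*X^T*Y。
    """
    X = [list(row) + [1.0] for row in X_rows]
    n = len(X[0])
    # 构造正规方程组 A*W = c，其中 A = X^T X，c = X^T Y
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    c = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    # 高斯消元（带列主元选取）求解该线性方程组
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        c[col], c[pivot] = c[pivot], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    # 回代求解
    W = [0.0] * n
    for i in range(n - 1, -1, -1):
        W[i] = (c[i] - sum(A[i][j] * W[j] for j in range(i + 1, n))) / A[i][i]
    return W  # 前6个分量为 w1..w6，最后一个为偏移量 b
```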
需要说明的是,以上的内容都是以手机100检测原始图像中人脸上的法令纹为例的,实际上,该方法还可以适用于其他皱纹的检测,比如鱼尾纹的检测等。
本申请实施例提供的皱纹检测算法可以适用于能够进行图像采集的任何场景。比如,手机100的相机应用中设置有皱纹检测控件,当该控件被激活时,手机100采用上述皱纹检测算法检测采集的人脸图像上的皱纹。或者,手机100中也可以安装有专门用于检测皱纹的app(手机100出厂时自带的,或者手机100在使用过程中,从网络侧下载的),手机100运行该app时,采用上述皱纹检测算法检测人脸图像上的皱纹。或者,上述皱纹检测算法还可以集成在其它app中,比如美颜相机等,本申请实施例不作限定。
当然，上述皱纹检测方法还可以应用在人脸解锁领域，比如手机100中存储的人脸图像中具有法令纹。当用户要解锁手机100时，手机100检测到人脸图像，可以采用上述皱纹检测方法检测采集的人脸图像中的法令纹，若法令纹与存储的图像中的法令纹匹配(当然还可以同步确定采集的图像上的其它部位比如眼睛与存储的图像上的眼睛是否匹配)，则解锁设备，有助于提高人脸解锁的准确性，提高设备安全性。
上述皱纹检测算法还可以应用于刷脸支付、刷脸打卡等领域。以刷脸支付为例，手机100显示支付验证界面；支付验证界面中显示取景框，当手机100采集到人脸图像(取景框中显示人脸图像)，并检测出人脸图像上皱纹之后，可以将检测出的皱纹与存储的图像中的皱纹比较(当然还可以同步比较采集的图像上的其它部位比如眼睛与存储的图像上的眼睛是否匹配)，若匹配，则执行支付流程；若不匹配，则输出提示信息，以提示用户支付失败。这种方式中，可以提升支付的安全性。
本申请实施例提供的皱纹检测算法还可以适用于从其它设备或者网络层接收到图像后,对该图像检测皱纹的任何场景。比如,微信的聊天记录中,收到其他联系人发送的图像后,可以使用上述皱纹检测算法检测皱纹等等。
本申请的各个实施方式可以任意进行组合,以实现不同的技术效果。
上述本申请提供的实施例中,从终端设备(手机100)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,终端可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、 还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
基于相同的构思,本申请实施例提供的一种终端设备,该终端设备可以执行上述图3-图4所示实施例中的方法。该终端设备包括:图像采集单元和处理单元。其中,
图像采集单元,用于获取原始图像,所述原始图像中包括人脸;处理单元,用于调整所述原始图像上的ROI区域的尺寸,得到至少两张不同尺寸的ROI图像;其中,所述ROI区域为所述人脸上皱纹所在区域;所述处理单元还用于对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理,得到至少两张黑白图像;其中,每张黑白图像中的白色区域为皱纹可疑出现区域;所述处理单元还用于将所述至少两张黑白图像融合,得到最终图像;所述最终图像上的白色区域被识别为皱纹。
这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
当该终端设备是图1所示的手机100时,处理单元可以是图1所示的处理器110,或者是图2所示的应用处理器110-1、或其他处理器。图像采集单元可以是图1或图2所示的摄像头193,也可以是与终端设备连接的其它图像采集单元。
本申请实施例还提供一种计算机可读存储介质，该存储介质可以包括存储器，该存储器可存储有程序，该程序被执行时，使得电子设备执行前述图3-图4所示的方法实施例中记载的全部步骤。
本申请实施例还提供一种计算机程序产品，当所述计算机程序产品在电子设备上运行时，使得所述电子设备执行前述图3-图4所示的方法实施例中记载的全部步骤。
需要说明的是，本申请实施例中对单元的划分是示意性的，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式。本发明实施例中的各功能单元可以集成在一个处理单元中，也可以是各个单元单独物理存在，也可以两个或两个以上单元集成在一个单元中。例如，上述实施例中，第一获取单元和第二获取单元可以是同一个单元，也可以是不同的单元。上述集成的单元既可以采用硬件的形式实现，也可以采用软件功能单元的形式实现。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
为了解释的目的,前面的描述是通过参考具体实施例来进行描述的。然而,上面的示例性的讨论并非意图是详尽的,也并非意图要将本申请限制到所公开的精确形式。根据以上教导内容,很多修改形式和变型形式都是可能的。选择和描述实施例是为了充分阐明本申请的原理及其实际应用,以由此使得本领域的其他技术人员能够充分利用具有适合于所构想的特定用途的各种修改的本申请以及各种实施例。

Claims (15)

  1. 一种皱纹检测方法,其特征在于,所述方法包括:
    终端设备获取原始图像,所述原始图像中包括人脸;
    所述终端设备调整所述原始图像上的ROI区域的尺寸,得到至少两张不同尺寸的ROI图像;其中,所述ROI区域为所述人脸上皱纹所在区域;
    所述终端设备对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理,得到至少两张黑白图像;其中,每张黑白图像中的白色区域为皱纹可疑出现区域;
    所述终端设备将所述至少两张黑白图像融合,得到最终图像;所述最终图像上的白色区域被识别为皱纹。
  2. 如权利要求1所述的方法,其特征在于,所述终端设备对所述至少两张不同尺寸的ROI图像中每张ROI图像进行处理,得到至少两张黑白图像,包括:
    针对每张ROI图像,分别使用预设的至少一个矩阵重复执行如下步骤:
    所述终端设备使用预设矩阵覆盖在所述ROI图像上,确定所述ROI图像上与预设矩阵中的每个矩阵元素对应的像素点的像素值;
    所述终端设备确定所述每个矩阵元素和与所述每个矩阵元素对应的像素点的像素值的乘积;
    所述终端设备将每个矩阵元素对应的乘积求和;所述和为所述矩阵在所述ROI图像上覆盖的图像块的中心位置的像素值;
    若所述图像块的中心位置的像素值大于预设像素值,所述终端设备将所述中心位置设置为黑色,若所述图像块的中心位置的像素值小于等于所述预设像素值,所述终端设备将所述中心位置设置为白色。
  3. 如权利要求1或2所述的方法,其特征在于,在所述终端设备将所述至少两张黑白图像融合,得到最终图像之前,还包括:
    所述终端设备确定所述至少两张黑白图像中有M张图像在同一位置处存在白色区域,则删除所述M张图像中位于所述位置处的白色区域;其中,M小于等于预设值。
  4. 如权利要求1-3任一所述的方法,其特征在于,若所述皱纹为法令纹,所述最终图像上的白色区域被识别为皱纹之后,还包括:
    所述终端设备确定所述最终图像上胡须所在区域;
    所述终端设备确定与所述胡须所在区域相交的n个白色区域;
    所述终端设备确定所述n个白色区域中的第一白色区域位于所述胡须所在区域内的像素点个数和所述第一白色区域中所有像素点个数的比值;
    若所述比值大于等于预设比值,所述终端设备从所述最终图像中删除所述第一白色区域,所述最终图像上剩余的白色区域被识别为法令纹。
  5. 如权利要求1-4任一所述的方法,其特征在于,若所述皱纹为法令纹,所述最终图像上的白色区域被识别为皱纹之后,还包括:
    所述终端设备确定所述最终图像中的鼻翼所在坐标位置;
    所述终端设备在所述最终图像中删除距离所述坐标位置预设距离范围内、且长度大于预设长度的白色区域,所述最终图像上剩余的白色区域被识别为法令纹。
  6. 如权利要求4所述的方法,其特征在于,所述预设比值为1-n/m,其中,m为预设的固定值。
  7. 如权利要求1-6任一所述的方法,其特征在于,在所述终端设备调整所述ROI图像的尺寸,得到至少两张不同尺寸的ROI图像之前,所述方法还包括:
    所述终端设备将所述ROI图像转换成灰度图像;
    所述终端设备将所述灰度图像作水平调整;
    所述终端设备对水平调整后的图像作去噪处理。
  8. 如权利要求1-7任一所述的方法,其特征在于,所述最终图像上的白色区域被识别为皱纹之后,还包括:
    所述终端设备基于如下公式确定所述白色区域的评价结果y;
    y=w1*x1+w2*x2+w3*x3+w4*x4+w5*x5+w6*x6+b
    其中,x1代表所述白色区域的平均宽度;x2代表所述白色区域的平均长度;x3代表所述白色区域的平均内外颜色对比度;x4代表所述白色区域的像素点个数占所述ROI图像像素点总数的比值;x5、x6分别代表所述白色区域中最长的白色区域的长度和宽度;b代表偏移量。
  9. 如权利要求1-8任一所述的方法,其特征在于,在终端设备获取原始图像之前,所述方法还包括:
    所述终端设备检测到第一操作,运行第一应用,打开摄像头,显示取景界面;
    所述终端设备在所述最终图像中识别出皱纹之后,所述方法还包括:
    在所述取景界面中显示提示信息,所述提示信息用于提示所述皱纹在人脸中的位置。
  10. 如权利要求1-9任一所述的方法,其特征在于,在终端设备获取原始图像之前,所述方法还包括:
    所述终端设备处于锁屏状态;
    所述终端设备在所述最终图像中识别出皱纹之后,所述方法还包括:
    所述终端设备将所述皱纹与预存的图像中的皱纹比较;
    若一致,所述终端设备解锁屏幕。
  11. 如权利要求1-9任一所述的方法,其特征在于,在终端设备获取原始图像之前,所述方法还包括:
    所述终端设备显示支付验证界面;
    所述终端设备在所述最终图像中识别出皱纹之后,所述方法还包括:
    所述终端设备将所述皱纹与预存的图像中的皱纹比较;
    若一致,所述终端设备执行支付流程。
  12. 如权利要求1-11任一所述的方法,其特征在于,所述方法还包括:
    当所述终端设备未检测到皱纹时,输出提示信息,以提示用户未检测到皱纹。
  13. 一种终端设备,其特征在于,包括摄像头、处理器和存储器;
    所述摄像头:用于采集原始图像,所述原始图像中包括人脸;
    所述存储器用于存储一个或多个计算机程序；当所述存储器存储的一个或多个计算机程序被所述处理器执行时，使得所述终端设备能够实现如权利要求1-12任一所述的方法。
  14. 一种计算机存储介质,其特征在于,所述计算机可读存储介质包括计算机程序,当计算机程序在终端设备上运行时,使得所述终端执行如权利要求1至12任一所述的方法。
  15. 一种包含指令的计算机程序产品，其特征在于，当所述指令在计算机上运行时，使得所述计算机执行如权利要求1-12任一项所述的方法。
PCT/CN2019/117904 2018-11-19 2019-11-13 一种皱纹检测方法和终端设备 WO2020103732A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19888055.1A EP3872753B1 (en) 2018-11-19 2019-11-13 Wrinkle detection method and terminal device
US17/295,230 US11978231B2 (en) 2018-11-19 2019-11-13 Wrinkle detection method and terminal device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811375119.1A CN111199171B (zh) 2018-11-19 2018-11-19 一种皱纹检测方法和终端设备
CN201811375119.1 2018-11-19

Publications (1)

Publication Number Publication Date
WO2020103732A1 true WO2020103732A1 (zh) 2020-05-28

Family

ID=70743732

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117904 WO2020103732A1 (zh) 2018-11-19 2019-11-13 一种皱纹检测方法和终端设备

Country Status (4)

Country Link
US (1) US11978231B2 (zh)
EP (1) EP3872753B1 (zh)
CN (1) CN111199171B (zh)
WO (1) WO2020103732A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215802B (zh) * 2018-07-16 2022-04-08 荣耀终端有限公司 一种皮肤检测方法和电子设备
CN111199171B (zh) 2018-11-19 2022-09-23 荣耀终端有限公司 一种皱纹检测方法和终端设备
CN111767846A (zh) 2020-06-29 2020-10-13 北京百度网讯科技有限公司 图像识别方法、装置、设备和计算机存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129698A (zh) * 2011-03-08 2011-07-20 华中科技大学 一种基于感兴趣区域的图像编码方法
US8144997B1 (en) * 2006-12-21 2012-03-27 Marvell International Ltd. Method for enhanced image decoding
CN103827916A (zh) * 2011-09-22 2014-05-28 富士胶片株式会社 皱纹检测方法、皱纹检测装置和皱纹检测程序以及皱纹评估方法、皱纹评估装置和皱纹评估程序
CN105184216A (zh) * 2015-07-24 2015-12-23 山东大学 一种心二区掌纹的数字提取方法
CN105310690A (zh) * 2014-06-09 2016-02-10 松下知识产权经营株式会社 皱纹检测装置和皱纹检测方法

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3905503B2 (ja) * 2003-09-12 2007-04-18 株式会社国際電気通信基礎技術研究所 顔画像合成装置および顔画像合成プログラム
WO2011103576A1 (en) * 2010-02-22 2011-08-25 Canfield Scientific, Incorporated Reflectance imaging and analysis for evaluating tissue pigmentation
WO2015015793A1 (ja) * 2013-07-31 2015-02-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 肌分析方法、肌分析装置および肌分析装置の制御方法
CN105705654A (zh) * 2013-09-25 2016-06-22 宝洁公司 用于皮肤护理咨询的方法和系统
US9292756B2 (en) * 2013-12-10 2016-03-22 Dropbox, Inc. Systems and methods for automated image cropping
FR3014675A1 (fr) * 2013-12-12 2015-06-19 Oreal Procede d'evaluation d'au moins un signe clinique du visage
WO2015165989A2 (en) * 2014-05-02 2015-11-05 Carl Zeiss Meditec, Inc. Enhanced vessel characterization in optical coherence tomograogphy angiography
US10262190B2 (en) * 2015-03-26 2019-04-16 Beijing Kuangshi Technology Co., Ltd. Method, system, and computer program product for recognizing face
US11134848B2 (en) * 2016-04-25 2021-10-05 Samsung Electronics Co., Ltd. Mobile hyperspectral camera system and human skin monitoring using a mobile hyperspectral camera system
WO2018106213A1 (en) * 2016-12-05 2018-06-14 Google Llc Method for converting landscape video to portrait mobile layout
CN107330370B (zh) * 2017-06-02 2020-06-19 广州视源电子科技股份有限公司 一种额头皱纹动作检测方法和装置及活体识别方法和系统
WO2019014814A1 (zh) * 2017-07-17 2019-01-24 深圳和而泰智能控制股份有限公司 一种定量检测人脸抬头纹的方法和智能终端
EP3664016B1 (en) * 2017-08-24 2022-06-22 Huawei Technologies Co., Ltd. Image detection method and apparatus, and terminal
CN109493310A (zh) * 2017-09-08 2019-03-19 丽宝大数据股份有限公司 身体信息分析装置及其手部肌肤分析方法
CN108324247B (zh) * 2018-01-29 2021-08-10 杭州美界科技有限公司 一种指定位置皮肤皱纹评估方法及系统
WO2020015149A1 (zh) 2018-07-16 2020-01-23 华为技术有限公司 一种皱纹检测方法及电子设备
US11461592B2 (en) * 2018-08-10 2022-10-04 University Of Connecticut Methods and systems for object recognition in low illumination conditions
CN111199171B (zh) 2018-11-19 2022-09-23 荣耀终端有限公司 一种皱纹检测方法和终端设备
CN109793498B (zh) * 2018-12-26 2020-10-27 华为终端有限公司 一种皮肤检测方法及电子设备
US11521334B2 (en) * 2020-04-01 2022-12-06 Snap Inc. Augmented reality experiences of color palettes in a messaging system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8144997B1 (en) * 2006-12-21 2012-03-27 Marvell International Ltd. Method for enhanced image decoding
CN102129698A (zh) * 2011-03-08 2011-07-20 华中科技大学 一种基于感兴趣区域的图像编码方法
CN103827916A (zh) * 2011-09-22 2014-05-28 富士胶片株式会社 皱纹检测方法、皱纹检测装置和皱纹检测程序以及皱纹评估方法、皱纹评估装置和皱纹评估程序
CN105310690A (zh) * 2014-06-09 2016-02-10 松下知识产权经营株式会社 皱纹检测装置和皱纹检测方法
CN105184216A (zh) * 2015-07-24 2015-12-23 山东大学 一种心二区掌纹的数字提取方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3872753A4

Also Published As

Publication number Publication date
US11978231B2 (en) 2024-05-07
US20210390688A1 (en) 2021-12-16
EP3872753A1 (en) 2021-09-01
CN111199171A (zh) 2020-05-26
EP3872753B1 (en) 2023-06-21
EP3872753A4 (en) 2022-01-26
CN111199171B (zh) 2022-09-23

Similar Documents

Publication Publication Date Title
US10956714B2 (en) Method and apparatus for detecting living body, electronic device, and storage medium
CN107105130B (zh) 电子设备及其操作方法
EP3163498B1 (en) Alarming method and device
CN112037162B (zh) 一种面部痤疮的检测方法及设备
WO2021078001A1 (zh) 一种图像增强方法及装置
WO2020103732A1 (zh) 一种皱纹检测方法和终端设备
WO2021036853A1 (zh) 一种图像处理方法及电子设备
US20200311888A1 (en) Image processing method, image processing apparatus, and wearable device
US20170339287A1 (en) Image transmission method and apparatus
CN114946169A (zh) 一种图像获取方法以及装置
EP4361954A1 (en) Object reconstruction method and related device
CN112446252A (zh) 图像识别方法及电子设备
CN112087649B (zh) 一种设备搜寻方法以及电子设备
JP2023510375A (ja) 画像処理方法、装置、電子デバイス及び記憶媒体
WO2023030398A1 (zh) 图像处理方法及电子设备
WO2020015149A1 (zh) 一种皱纹检测方法及电子设备
CN113711123A (zh) 一种对焦方法、装置及电子设备
WO2020015145A1 (zh) 一种检测眼睛睁闭状态的方法及电子设备
CN114973347B (zh) 一种活体检测方法、装置及设备
WO2021218695A1 (zh) 一种基于单目摄像头的活体检测方法、设备和可读存储介质
US20210232853A1 (en) Object Recognition Method and Terminal Device
CN113591514B (zh) 指纹活体检测方法、设备及存储介质
CN107194363B (zh) 图像饱和度处理方法、装置、存储介质及计算机设备
CN116109828B (zh) 图像处理方法和电子设备
CN114827445B (zh) 图像处理方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19888055

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019888055

Country of ref document: EP

Effective date: 20210528