
Skin detection method and electronic device (一种皮肤检测方法和电子设备)

Info

Publication number
WO2022184084A1
Authority
WO
WIPO (PCT)
Prior art keywords
skin
image
initial
partial
testing
Prior art date
Application number
PCT/CN2022/078741
Other languages
English (en)
French (fr)
Inventor
丁欣
胡宏伟
郜文美
卢曰万
姜永涛
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022184084A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G HOUSEHOLD OR TABLE EQUIPMENT
    • A47G1/00 Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G1/02 Mirrors used as equipment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a skin detection method and an electronic device.
  • Skin detection technology is a technology that can meet people's needs for beauty.
  • Skin detection technology refers to a technology that uses sensors to collect facial data or shoot a user's facial image, and then performs skin measurement analysis based on the facial data or facial image to obtain detection results about various skin features.
  • The skin detection technology can be used to obtain detection results for skin features such as facial pores, pigmentation, acne, and fine lines, and can provide users with corresponding skin care suggestions based on the skin detection results, so as to help users take better care of their skin and improve their skin condition.
  • a commonly used skin detection method is to collect a user's facial image through an electronic device, such as a mobile phone, and then analyze the collected facial image according to a preset algorithm in the electronic device to obtain a skin test result.
  • In the process of collecting the user's facial image, in order to obtain a high-quality facial image, the user needs to cooperate strictly: for example, the user's face and the camera of the electronic device must be kept within a suitable range of position, distance, or angle, and the surrounding light conditions may also need to meet preset requirements. If the collected facial image does not meet the requirements of the preset algorithm, the user also needs to cooperate and take the photo again. Therefore, the user interaction process of the existing skin detection method is complicated, and the user experience is poor.
  • Embodiments of the present application provide a skin detection method and an electronic device.
  • the electronic device can actively acquire multiple initial images including a user's face image within a preset time period, and select a skin detection image from the multiple initial images.
  • the skin test image is subjected to skin test analysis, and then the skin test result is displayed.
  • the electronic device can obtain the initial image without the user's perception, select the appropriate skin test image from it, and perform the skin test analysis.
  • the interaction between the user and the electronic device during the skin test can be reduced, so that the user can obtain the skin test result more conveniently and improve the user experience.
  • the embodiments of the present application provide a skin detection method.
  • The method includes: the electronic device automatically acquires N initial images of the user, where N is an integer greater than 1 and each initial image includes a face image; screens out skin-testing images from the N initial images, where a skin-testing image is an initial image in which the features of the face image meet the skin-testing conditions, and the features of the face image include one or more of: pitch attitude, angle, brightness, occlusion, proportion in the initial image, expression, and clarity; and displays a skin test result based on the skin-testing image.
  • the electronic device can actively acquire the initial image, screen out the skin-testing images that meet the skin-testing conditions, and then perform skin-testing analysis based on the skin-testing images.
  • the user does not need to actively initiate the skin testing, nor does it require strict cooperation during the skin testing process, so that the interaction between the user and the electronic device during the skin testing process can be reduced, and the user can obtain the skin testing result more conveniently.
  • The electronic device actively obtains the initial images, selects the skin test image, and performs the skin test analysis, which yields more frequent skin test results, so as to effectively track and monitor the user's skin condition and provide the user with better skin care suggestions.
  • In some embodiments, the skin-measuring images include a first skin-measuring image and a second skin-measuring image. Screening the skin-measuring images out of the N initial images includes, for each of the N initial images: if the first feature of the face image in the initial image meets the skin-testing requirements and the second feature of the face image in the initial image also meets the skin-testing requirements, using the initial image as a first skin-measuring image; if the first feature of the face image meets the skin-testing requirements, the second feature of a first partial image of the face image meets the skin-testing requirements, and the second feature of a second partial image of the face image does not meet the skin-testing requirements, using the initial image as a second skin-measuring image (see the sketch below).
  • The first partial image and the second partial image are different partial images in the initial image.
  • The first feature includes one or more of: pitch attitude, angle, expression, and proportion in the initial image.
  • The second feature includes one or more of: brightness, occlusion, and clarity.
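  • As a minimal illustration of this screening logic, the following Python sketch classifies one initial image. The feature values, threshold constants, and region names are assumptions made for the example; they are not taken from the disclosure.

```python
# Illustrative sketch of the first/second skin-measuring image screening.
# All thresholds and feature names are assumptions, not values from the patent.
FIRST_LIMITS = {"max_pitch_deg": 10, "max_yaw_deg": 10, "min_face_ratio": 0.2}
SECOND_LIMITS = {"min_brightness": 80, "max_brightness": 200,
                 "max_occlusion": 0.1, "min_sharpness": 100}

def first_features_ok(f):
    # pitch attitude, angle (yaw), expression, proportion in the initial image
    return (abs(f["pitch_deg"]) <= FIRST_LIMITS["max_pitch_deg"]
            and abs(f["yaw_deg"]) <= FIRST_LIMITS["max_yaw_deg"]
            and f["face_ratio"] >= FIRST_LIMITS["min_face_ratio"]
            and f["expression_neutral"])

def second_features_ok(f):
    # brightness, occlusion, clarity
    return (SECOND_LIMITS["min_brightness"] <= f["brightness"] <= SECOND_LIMITS["max_brightness"]
            and f["occlusion"] <= SECOND_LIMITS["max_occlusion"]
            and f["sharpness"] >= SECOND_LIMITS["min_sharpness"])

def classify_initial_image(face_features, partial_features):
    """partial_features: e.g. {"forehead": {...}, "left_face": {...}, "right_face": {...}}"""
    if not first_features_ok(face_features):
        return None, None                      # initial image screened out
    if second_features_ok(face_features):
        return "first", None                   # whole face usable (overall label)
    usable = [name for name, f in partial_features.items() if second_features_ok(f)]
    return ("second", usable) if usable else (None, None)   # first label(s)
```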
  • the skin measuring image may include a first skin measuring image and a second skin measuring image.
  • the first skin measurement image is an initial image in which the whole face image meets the skin measurement requirements
  • the second skin measurement image is an initial image in which a partial image in the face image meets the skin measurement requirements.
  • In some embodiments, the skin-measuring images include a second skin-measuring image, and the second skin-measuring image corresponds to a first label, where the first label is used to indicate that the first partial image in the second skin-measuring image is used for skin-measuring analysis. In this case, displaying the skin measurement result based on the skin-measuring image includes: acquiring the local skin measurement result of the first partial image based on the first label corresponding to the second skin-measuring image, and displaying the skin measurement result based on the local skin measurement result.
  • the electronic device obtains the skin-testing result based on the partial image that meets the skin-testing requirements.
  • In some embodiments, displaying the skin test result based on the partial skin test result includes: if the partial skin test result includes the partial skin test results of all partial images in the face image, displaying the skin test result based on the partial skin test result; if the partial skin test result includes the partial skin test results of only some partial images in the face image, displaying the skin test result based on the partial skin test result and the historical skin test result, as in the sketch below.
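  • A minimal sketch of this merging step, assuming per-region numeric scores and a stored result from the previous run; the region names and the score format are illustrative, not part of the disclosure:

```python
# Merge the per-region results measured this time with the most recent stored
# result when some regions could not be measured. Region names are illustrative.
ALL_REGIONS = ("forehead", "left_face", "right_face")

def merge_results(local_results, history):
    """local_results / history: dicts mapping region name to a score."""
    if set(local_results) >= set(ALL_REGIONS):
        return dict(local_results)        # every region measured this time
    merged = dict(history)                # start from the historical result
    merged.update(local_results)          # overwrite regions measured this time
    return merged

# Example: only the forehead was usable this time.
print(merge_results({"forehead": 82}, {"forehead": 79, "left_face": 75, "right_face": 77}))
```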
  • the electronic device can display the skin testing results in combination with the historical skin testing results.
  • In some embodiments, the skin-measuring images include a first skin-measuring image, and the first skin-measuring image corresponds to a second label, where the second label is used to indicate that the entire face image in the first skin-measuring image is used for skin-measuring analysis. In this case, displaying the skin measurement result based on the skin-measuring image includes: acquiring the skin measurement result of the face image of the first skin-measuring image based on the second label corresponding to the first skin-measuring image, and displaying the skin measurement result.
  • the electronic device obtains the skin measuring result based on the whole face image.
  • the method further includes: if no skin testing images are selected from the N initial images, displaying the skin testing results based on historical skin testing results.
  • the current skin test result can be predicted based on the historical skin test results.
  • In some embodiments, automatically acquiring N initial images of the user includes: turning on the camera within a preset time period and capturing a preview image; and, when a face image is detected in the preview image, using the camera to capture the N initial images.
  • the electronic device can actively capture an initial image within a preset time period, thereby reducing the interaction between the user and the electronic device and improving the user experience.
  • In some embodiments, capturing the N initial images when a face image is detected in the preview image includes: when a face image is detected in the preview image, adjusting the focus point of the camera according to the position of the eyes in the face image, and capturing the N initial images with the camera after the focus point has been adjusted, as sketched below.
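  • A sketch of this step, not the patented implementation: locate the eyes in the preview frame and hand their midpoint to a camera focus API. The camera.set_focus_point method is hypothetical; real camera stacks expose this differently (for example, AF regions in Android Camera2).

```python
import cv2

# Haar eye detector shipped with OpenCV; used here only as a stand-in detector.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def focus_on_eyes(preview_bgr, camera):
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return False
    # Use the centroid of the detected eye boxes as the focus point.
    cx = int(sum(x + w / 2 for x, y, w, h in eyes) / len(eyes))
    cy = int(sum(y + h / 2 for x, y, w, h in eyes) / len(eyes))
    camera.set_focus_point(cx, cy)   # hypothetical camera API
    return True
```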
  • the method further includes: if the skin test result has been obtained within a preset time period, not turning on the camera.
  • the electronic device does not need to perform skin detection repeatedly, which can save computing power and storage space.
  • the electronic device is a smart mirror.
  • In some embodiments, the electronic device can be a smart mirror, so as to make better use of the user's daily habits, obtain skin test results more frequently, detect the user's skin condition more effectively, and give better skin care advice.
  • the method further includes: outputting prompt information, where the prompt information is used to prompt the user of the skin testing process.
  • An embodiment of the present application provides an electronic device, including: a camera; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and include instructions that, when executed by the electronic device, cause the electronic device to perform the skin detection method provided by the embodiments of the present application.
  • The embodiments of the present application further provide a computer-readable storage medium including computer instructions that, when run on a computer, cause the computer to perform the skin detection method provided by the embodiments of the present application.
  • the embodiments of the present application provide a computer program product, which, when the computer program product runs on a computer, enables the computer to execute the skin detection method provided by the embodiments of the present application.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a smart mirror provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a skin detection method provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of an image acquisition process provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a partial image of a face image provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a skin test analysis process provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a region of interest for skin detection according to an embodiment of the present application.
  • FIG. 9 is an interface diagram provided by an embodiment of the present application.
  • FIG. 10 is a scene diagram provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • users can perform skin detection in a number of ways. For example, users can perform skin detection through a professional skin detector, or through a skin detection application (APP) in a mobile phone.
  • A user can perform skin detection through a professional skin detector (e.g., a VISIA skin detector).
  • the skin detector uses a preset algorithm to analyze the captured facial images, and displays the skin detection results on the computer connected to the skin detector.
  • the user can also perform skin detection through a mobile phone or a smart mirror.
  • the mobile phone usually performs skin detection through an installed skin test APP (such as Huawei's "Love Skin” APP).
  • The user needs to carry out the photographing process according to the instructions of the skin-testing APP, for example, moving the mobile phone or the face according to the APP's voice, text, or icon prompts, so that the user's face is at an appropriate position, distance, and angle relative to the camera of the mobile phone and the lighting is suitable.
  • the mobile phone collects the user's facial image, and uses the algorithm preset in the skin testing APP to analyze the captured facial image, and displays the skin detection result.
  • users need to cooperate strictly in the entire skin testing process. For example, the user needs to fix the head in a suitable posture, distance, light and other conditions; or, the user needs to move the mobile phone or face according to the APP prompt.
  • If the quality of the captured image does not meet the requirements of the preset algorithm, the user also needs to cooperate in re-collecting the facial image.
  • the interaction process between users and electronic devices such as skin detectors or mobile phones is complicated and cumbersome, and the user experience is poor.
  • the electronic device can start skin testing only when the user initiates the skin testing process, for example, when the skin testing APP is opened or the skin testing mode of the skin detector is turned on. In this way, if the number of times the user initiates the skin test is small, the results of the skin test are few and the skin test cycle is long, so effective skin condition tracking and monitoring cannot be achieved, and thus the user cannot be provided with perfect skin care suggestions.
  • the embodiment of the present application provides a skin detection method, which can be applied to an electronic device, and can actively acquire multiple initial images including a user's face image within a preset time period without the need for strict cooperation and active initiation by the user.
  • A skin test image is automatically screened out of the multiple initial images, skin test analysis is performed based on the skin test image, and the skin test result is then displayed. Since the entire skin testing process requires neither active initiation nor strict cooperation by the user, the skin testing process can be performed without the user's perception, thereby improving the user experience.
  • In addition, the electronic device can actively select the skin test images that meet the skin test requirements and display the final skin test results and skin care suggestions based on the skin test images and/or historical skin test results, which ensures the accuracy and stability of the skin test results.
  • The electronic device in the embodiments of the present application may be a mobile terminal such as a smart mirror, a mobile phone, a tablet computer, a wearable device (such as a smart watch), a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a device such as a professional camera. The embodiments of the present application do not restrict the specific type of the electronic device.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a central processing unit (CPU), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the display screen 194 may be used to display prompt information and skin test results.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 193 .
  • When the shutter is opened, light is transmitted to the camera's photosensitive element through the lens; the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • the camera 193 is used to capture images or videos.
  • An object is projected through the lens, generating an optical image on the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, historical skin test results, etc.) created during the use of the electronic device 100, and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • In the embodiments of the present application, the processor 110 can screen out the skin test image from the collected initial images by running the instructions stored in the internal memory 121, and display the final skin test result to the user based on the skin test image and/or the historical skin test results.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect whether the electronic device 100 is blocked, and if there is a block, it can further determine whether the block is the user's face.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • The touch sensor 180K is also called a "touch panel".
  • The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form a touchscreen.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • The electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • In the embodiments of the present application, the camera 193 can collect a plurality of initial images within a preset time period; the processor 110, by running the instructions stored in the internal memory 121, screens out of the collected initial images the skin test images that meet the skin testing requirements, and then performs skin test analysis based on the skin test images and the stored historical skin test results.
  • Finally, the display screen 194 displays the obtained skin test results and skin care recommendations.
  • FIG. 2 shows a schematic structural diagram of a smart mirror having the structure shown in FIG. 1 . Since users usually use mirrors for skin care and other activities in the morning and evening, smart mirrors can capture images of the user's face during daily activities such as skin care.
  • the smart mirror 200 may include a camera 201, a mirror surface 202, a display screen 203, a proximity light sensor 204, a bracket 205, a switch button 206, a base 207, an LED ring light strip 208, and the like.
  • the camera 201 may be the camera 193 in FIG. 1
  • the display screen 203 may be the display screen 194 in FIG. 1
  • the proximity light sensor 204 may be the proximity light sensor 180G in FIG. 1 .
  • the mirror surface 202 can be made of flat glass, mirror surface stainless steel plate, aluminum, etc., and is used to display an image of an object or a user in front of the mirror surface 202 .
  • the embodiment of the present application does not limit the material and shape of the mirror surface 202 .
  • The display screen 203 can be embedded at any position of the mirror surface 202, and can also be arranged outside the mirror surface 202. If the display screen 203 is embedded in the mirror surface 202, the display screen 203 normally may not display any information; as a part of the mirror surface 202, this makes it convenient for the user to view the face image in the mirror surface 202.
  • the display screen 203 may display the obtained skin test results in response to the user's instruction or after the skin test is completed.
  • the proximity light sensor 204 may be disposed on the mirror surface 202 or outside the mirror surface 202 , which is not limited in this embodiment of the present application.
  • the switch button 206 may be a physical button or a touch button, and the specific position of the switch button 206 is not limited in this application.
  • The bracket 205 and the base 207 are used to support the mirror surface 202, and the shapes and materials of the bracket 205 and the base 207 are not limited in this embodiment of the present application. In some embodiments, the bracket 205 and the base 207 can also be folded.
  • the LED annular light strip 208 surrounds the outside of the mirror surface 202, and is used to supplement the light, so that the user can better view the face image presented in the mirror surface in the process of dressing or skin care.
  • The switch button 206 can be used to turn the LED ring light strip on or off.
  • the smart mirror may also not include the LED ring light strip 208 .
  • the switch button 206 may not be included in the smart mirror, and the smart mirror performs the function of the switch button in response to the user's voice instruction.
  • the smart mirror 200 provided in this embodiment of the present application is based on the structure of the electronic device 100 shown in FIG. 1 .
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the smart mirror 200 .
  • the method may include:
  • the smart mirror automatically acquires N initial images of the user.
  • the smart mirror can turn on the camera within a preset time period, and then automatically acquire N initial images of the user.
  • the smart mirror does not require the user to actively initiate and cooperate strictly when acquiring the user's initial image.
  • The smart mirror can automatically acquire the user's initial images through the camera, so that the initial images can be acquired without the user's perception and the subsequent skin test analysis can be performed.
  • "Without perception" in the embodiments of this application means that the user does not need to actively initiate a skin test or cooperate strictly during the skin test process; the user can simply carry out daily activities (such as daily skin care), and the smart mirror can easily obtain N initial images of the user.
  • N is an integer greater than 1.
  • N can be 10.
  • the smart mirror may pre-set N. In other embodiments, the user can set N according to requirements. This embodiment of the present application does not limit this.
  • the smart mirror automatically acquires N initial images of the user, including the following steps:
  • the smart mirror acquires the preset time period and the current system time, and determines whether the current system time is within the preset time.
  • the user may set a preset time period according to his daily habit of using the mirror, such as skin care and makeup. For example, if the user usually takes skin care between 8:00-10:00 in the morning and 8:00-10:00 in the evening, the preset time periods may be set as 8:00-10:00 and 20:00-22:00 in advance. In this way, the smart mirror automatically acquires the user's face image and performs skin detection within a preset time period set by the user.
  • the user may also set the detection period according to his habit of using mirrors for daily skin care, dressing, and the like. For example, if the user usually performs skin care in the morning and evening, the detection cycle can be set to be detected every 12 hours. In this way, the smart mirror can periodically acquire the user's face image and perform skin detection according to the set detection period.
  • This embodiment of the present application does not limit the preset time period or the specific time length of the detection period, and the user can set or adjust it according to requirements.
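  • A minimal sketch of the time-window check under the example schedule above (8:00-10:00 and 20:00-22:00); the window values and the "already tested in this period" flag are illustrative assumptions:

```python
from datetime import datetime, time

PRESET_PERIODS = [(time(8, 0), time(10, 0)), (time(20, 0), time(22, 0))]

def should_start_capture(now=None, already_tested_this_period=False):
    """Return True if the camera should be turned on for a new round of capture."""
    now = now or datetime.now()
    in_period = any(start <= now.time() <= end for start, end in PRESET_PERIODS)
    return in_period and not already_tested_this_period
```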
  • the smart mirror may also acquire the current system time according to its own clock, and determine whether the initial image should be acquired currently based on the current system time.
  • the smart mirror periodically obtains the current system time.
  • After the smart mirror obtains the preset time period and the current system time, it can determine whether the current system time is within the preset time period by comparing the current system time with the set preset time period.
  • If the current system time is within the preset time period, the smart mirror executes the following step 402.
  • If the current system time is not within the preset time period, the smart mirror may not turn on the camera. Since the smart mirror may acquire the current system time periodically, in some embodiments the smart mirror may perform the following step 402 when a later-acquired current system time falls within the preset time period.
  • the smart mirror determines whether a skin test result within a preset time period already exists.
  • the smart mirror may further determine whether there is a skin test result within the preset time period. It can also be considered that the smart mirror can determine whether a skin test has been performed within a preset time period.
  • If no skin test result exists within the preset time period, the smart mirror executes the following step 403 to perform the subsequent process.
  • If a skin test result already exists within the preset time period, the smart mirror will not repeat the skin test; in other words, the smart mirror does not need to turn on the camera.
  • the smart mirror performs skin detection at most once, so as to avoid repeated skin detection within the preset time period to save computing power and storage space.
  • the smart mirror turns on the camera, collects a preview image, and determines whether there is a face image in the preview image.
  • When the smart mirror determines that skin detection has not been performed within the preset time period, the smart mirror can further determine whether there is occlusion.
  • the smart mirror determines if there is occlusion through a proximity light sensor. For example, a smart mirror can determine whether there is occlusion within 1s by reading data from a proximity light sensor.
  • If no occlusion is detected, the camera does not need to be turned on, and the process ends.
  • If the smart mirror detects occlusion, it means that there is a person or object in front of the smart mirror within the preset time period. At this time, the smart mirror can turn on the camera to collect a preview image, and can then determine, based on the collected preview image, whether there is a face image.
  • the smart mirror can detect whether there is a face image in the preview image by using an existing neural network model for face detection.
  • the embodiments of the present application do not limit the neural network model for face detection.
  • the smart mirror may also determine whether the user is within a preset distance of the smart mirror based on the preview image. For example, the smart mirror can determine the distance between the user corresponding to the face and the smart mirror according to the proportion of the face image in the preview image.
  • If the distance between the user and the smart mirror exceeds the preset distance, the face image in the preview image may be too small and is not suitable for subsequent skin measurement analysis.
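  • An illustrative check of this distance condition based on the share of the preview frame occupied by the detected face box; the 0.15 threshold is an assumption for the sketch, not a value from the text:

```python
# Decide whether the user is close enough from the face box's share of the frame.
def user_within_range(face_box, frame_shape, min_face_ratio=0.15):
    x, y, w, h = face_box                 # face bounding box in pixels
    frame_h, frame_w = frame_shape[:2]
    face_ratio = (w * h) / float(frame_w * frame_h)
    return face_ratio >= min_face_ratio   # True: proportion large enough for skin testing
```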
  • the smart mirror can directly turn on the camera to collect a preview image without prompting the user. That is to say, the smart mirror can actively collect preview images within a preset time period.
  • the smart mirror adjusts the focus point of the camera and shoots N initial images.
  • When the smart mirror detects a face image in the preview image, it adjusts the focus point of the camera according to the position of the eyes in the face image, and uses the camera with the adjusted focus point to capture N initial images.
  • Since the initial images are captured when there is a face image in the preview image, the initial images include the face image.
  • In some embodiments, the smart mirror adjusts the focus point of the camera and captures the initial images only after determining that the distance between the user and the smart mirror is within the preset distance. In this way, it can be ensured that the initial images are taken only when the user is close enough to the smart mirror, avoiding the problem that, when the user is far from the smart mirror, the face in the initial image is too small for subsequent skin testing.
  • the user can set the photographing duration according to requirements.
  • For example, the photographing duration may be 5 s. After each shot, if the photographing duration has been exceeded, the camera is turned off; if it has not been exceeded, the smart mirror continues to determine whether N initial images have been captured. If N initial images have been captured, the camera is turned off; if not, the smart mirror re-collects a preview image and determines whether there is a face image in the preview image.
  • the camera is turned off after the smart mirror captures N initial images.
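  • A sketch of the capture loop described above: keep shooting while a face is detected, and stop once N images have been captured or the photographing duration expires. The values N=10 and 5 s mirror the examples in the text; the Haar face detector is a stand-in, not the model used by the device.

```python
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_initial_images(n=10, max_seconds=5.0, cam_index=0):
    cap = cv2.VideoCapture(cam_index)
    images, start = [], time.monotonic()
    try:
        while len(images) < n and time.monotonic() - start < max_seconds:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces) > 0:            # keep only frames that contain a face
                images.append(frame)
    finally:
        cap.release()                     # turn the camera off in all cases
    return images
```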
  • If the smart mirror does not detect a face image in the preview image, it can indicate that the occlusion is not caused by the user facing the mirror for skin care or makeup.
  • In some embodiments, if no face image is detected in the preview images collected at the preset frequency within the preset time, it means that the user is not currently facing the mirror for skin care or grooming activities, and the smart mirror turns off the camera.
  • the above steps 401 to 405 may all be completed in the background without presenting the processing process to the user.
  • In other embodiments, when the smart mirror executes the above steps 401 to 405, the user may be prompted with the currently executed content. For example, in the process of acquiring the initial images, the smart mirror can present the currently executed content to the user through the screen display, voice prompts, or signal lights, for example, "face image detected", "first image has been taken", or "N images have been taken".
  • the smart mirror can acquire N initial images including face images.
  • Smart mirrors can automatically capture initial images during the user's normal daily activities such as skin care or makeup, facing the mirror.
  • the user does not need to actively initiate the process of collecting the image, nor does it need to adjust the sitting posture, light, and direction to cooperate with the smart mirror to collect the initial image.
  • The initial images are thus collected without the user's perception, so that they can be collected conveniently and the user experience is improved.
  • After acquiring the N initial images of the user, the smart mirror can perform image screening and executes the content of step 302 below.
  • the smart mirror selects a skin-measuring image from the N initial images, where the skin-measuring image is an initial image in which the features of the face image satisfy the skin-measuring condition.
  • After the smart mirror acquires the N initial images of the user, the initial images that meet the skin testing conditions can be selected from them as skin testing images, and skin testing analysis is then performed according to the skin testing images.
  • the smart mirror automatically filters the skin test images from the N initial images.
  • the skin measuring image may include a first skin measuring image or a second skin measuring image. The whole face image in the first skin measurement image is used for skin measurement analysis, and the partial image in the face image in the second skin measurement image is used for skin measurement analysis.
  • the smart mirror filters out the skin-testing images from N initial images, which may include the following steps:
  • the smart mirror determines whether the first feature of the face image in each initial image meets the requirements for skin testing.
  • In some embodiments, the smart mirror can cache the acquired initial images, and after acquiring the preset N initial images, the smart mirror can read the initial images one by one and determine whether the first feature of the face image in each initial image meets the skin-testing requirements.
  • In other embodiments, when acquiring each initial image, the smart mirror may directly determine whether the first feature of the face image in that initial image meets the requirements for skin testing.
  • If the smart mirror determines that the first feature of the face image in an initial image meets the requirements for skin testing, the smart mirror executes the following step 502.
  • If the first feature does not meet the requirements for skin testing, the initial image is screened out and the process ends for that image.
  • the smart mirror may also record that the initial image does not meet the skin testing requirements.
  • the first feature includes: one or more of features such as pitch attitude, angle, expression, and proportion in the initial image.
  • the pitch attitude refers to a state in which the user bows his head or raises his head. It can be considered that judging whether the pitch attitude meets the requirements of skin measurement refers to judging whether the angle of the user's head bowing or raising his head is within an appropriate range.
  • In some embodiments, the pitch attitude can be judged according to the "three sections and five eyes" facial proportions in the face image.
  • the angle refers to the angle at which the user's face is tilted left and right. It can be considered that judging whether the angle meets the requirements for skin measurement refers to judging whether the angle of the left or right tilt of the user's head is within an appropriate range.
  • the angle may be determined according to the left-right symmetry of the face image.
  • the expression refers to the user's expression in the face image. It is understandable that if there are some expressions on the user's face, the face image may not reflect the real state of the user's skin. For example, when the user frowns, fine lines in the face image increase, which interferes with subsequent skin measurement analysis.
  • expressions can be determined according to a neural network model.
  • the proportion in the initial image refers to the size of the face image in the initial image. It can be considered that judging whether the proportion in the initial image meets the requirements for skin testing refers to judging whether the distance between the user and the smart mirror is within an appropriate range.
  • the size of the face image in the initial image can be calculated, and then the distance between the user and the smart mirror can be obtained.
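  • Illustrative checks for the first features discussed above, using facial landmark coordinates; the landmark names, tolerance values, and the 0.15 proportion threshold are assumptions for the sketch:

```python
# Pitch attitude via the "three sections" of the face: the three vertical thirds
# (hairline-brow, brow-nose base, nose base-chin) should be roughly equal.
def pitch_ok(lm, tol=0.25):
    thirds = [lm["brow_y"] - lm["hairline_y"],
              lm["nose_base_y"] - lm["brow_y"],
              lm["chin_y"] - lm["nose_base_y"]]
    mean = sum(thirds) / 3.0
    return all(abs(t - mean) / mean <= tol for t in thirds)

# Left-right angle (yaw) via symmetry: the nose tip should sit near the middle of the contour.
def yaw_ok(lm, tol=0.2):
    left = lm["nose_x"] - lm["left_contour_x"]
    right = lm["right_contour_x"] - lm["nose_x"]
    return abs(left - right) / max(left, right) <= tol

# Proportion in the initial image: face area relative to the whole frame.
def proportion_ok(face_area, frame_area, min_ratio=0.15):
    return face_area / float(frame_area) >= min_ratio
```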
  • When judging the first feature, the smart mirror can determine each feature sequentially. For example, if the first feature includes the pitch attitude, angle, expression, and proportion in the initial image, the smart mirror can determine each feature in turn; if all features meet the skin-testing requirements, it proceeds to the next step, and if any one feature does not meet the skin-testing requirements, the initial image does not meet the skin-testing requirements, the judgment of the remaining features is stopped, and the initial image is screened out directly.
  • the embodiment of the present application does not limit the order in which the smart mirror sequentially determines whether each feature meets the requirements for skin testing.
  • the smart mirror can also simultaneously determine whether the multiple features meet the skin testing requirements. If one of the features does not meet the skin test requirements, the initial image is screened out.
  • the smart mirror determines whether the second feature of the face image in the initial image meets the requirements for skin testing.
  • the smart mirror can continue to judge whether the second feature of the face image in the initial image meets the requirements for skin testing.
  • the second feature of the face image in the initial image refers to the second feature of the entire face image in the initial image. That is, the second feature of the face image represents the second feature of the entire face image.
  • the second feature includes one or more of features such as brightness, occlusion, and clarity.
  • judging whether the second feature of the face image in the initial image meets the requirements for skin testing refers to judging whether the overall brightness, occlusion, clarity and other features of the face image meet the requirements for skin testing.
  • In some embodiments, the brightness may be determined from the grayscale of the face image: the greater the gray value, the greater the brightness.
  • the occlusion may be determined according to an existing model, which will not be repeated here.
  • In some embodiments, the clarity may be determined from the sharpness of the face image: the larger the sharpness value, the higher the clarity.
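  • An illustrative sketch of these second-feature checks, computing brightness from the mean gray level and clarity from the variance of the Laplacian; the thresholds are assumptions, and the model-based occlusion check mentioned in the text is omitted:

```python
import cv2

def brightness_ok(bgr, lo=80, hi=200):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return lo <= float(gray.mean()) <= hi          # mean gray level as brightness

def clarity_ok(bgr, min_var=100.0):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var()) >= min_var   # sharpness proxy
```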
  • the smart mirror can sequentially determine whether each feature meets the requirements for skin testing.
  • the smart mirror can simultaneously determine whether each feature meets the requirements for skin testing.
  • This embodiment of the present application does not limit this, nor does it limit the order of sequentially determining whether each feature meets the requirements.
  • the smart mirror uses the initial image as the first skin testing image.
  • the whole face image in the initial image meets the requirements for skin measurement. It can be considered that the whole face image in the initial image can be used for subsequent skin measurement analysis.
  • the initial image can be used as the first skin testing image for subsequent skin testing analysis.
  • In some embodiments, when the smart mirror obtains the first first skin-measuring image, that first skin-measuring image is directly used for subsequent skin-measuring analysis, and the smart mirror no longer determines whether the other initial images meet the skin-measuring requirements. In this way, the speed of skin detection can be improved.
  • In other embodiments, the smart mirror can cache a first skin-test image that meets the skin-test requirements. When a new first skin-test image is obtained, the smart mirror determines whether a cached first skin-test image already exists; if so, it compares the composite score of the second feature of the face image in the cached first skin-test image with the composite score of the second feature of the face image in the new first skin-test image, and selects the image with the higher score as the first skin-test image used for skin-test analysis (see the sketch below).
  • In this way, since the first skin-test image is the highest-quality image screened out, the accuracy of subsequent skin-test analysis can be improved.
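  • A minimal sketch of keeping only the best-scoring candidate; the scoring weights and feature fields are assumptions for illustration, not values from the text:

```python
# Composite score of the second features of a candidate image (illustrative weights).
def composite_score(f, w_bright=0.3, w_sharp=0.5, w_unoccluded=0.2):
    return (w_bright * f["brightness_score"]
            + w_sharp * f["sharpness_score"]
            + w_unoccluded * (1.0 - f["occlusion"]))

def keep_better(cached, candidate):
    """cached/candidate: (image, features) tuples; cached may be None."""
    if cached is None:
        return candidate
    return candidate if composite_score(candidate[1]) > composite_score(cached[1]) else cached
```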
  • the skin measurement algorithm in the smart mirror may perform the skin measurement analysis based on only one first skin measurement image.
  • the first skin measurement image may be the first skin measurement image described above, or the first skin measurement image with the highest quality.
  • the smart mirror can cache all the first skin testing images that meet the skin testing requirements, and the skin testing algorithm in the smart mirror performs skin testing analysis based on all the cached first skin testing images. That is to say, the first skin measurement image may also be multiple.
  • the first skin-measuring image may also correspond to a second label (also referred to as an overall label), where the second label is used to indicate that the entire face image in the first skin-measuring image is used for skin-measuring analysis.
  • the smart mirror determines whether the second feature of the partial image meets the skin testing requirement.
  • In this case, the smart mirror can determine whether the second feature of each partial image of the face image meets the skin-testing requirements.
  • If the second feature of at least one partial image meets the skin-testing requirements, the smart mirror executes the following step 505.
  • If the second feature of none of the partial images meets the skin-testing requirements, the process ends.
  • the smart mirror may also record that the initial image does not meet the skin testing requirements.
  • the face image can be divided into three regions, namely, the forehead region, the left face region and the right face region, and each region corresponds to a partial image. That is, the face image can be segmented into a forehead partial image, a left face partial image, and a right face partial image.
  • the smart mirror can judge whether the second feature of each partial image in the above three partial images meets the requirements for skin testing.
  • dividing the face image into the above-mentioned three partial images is only an example, and the embodiment of the present application does not limit the method of dividing the partial images of the face image and the number of partial images.
  • the smart mirror can sequentially determine whether the second feature of each partial image meets the requirements for skin testing.
  • the smart mirror can also simultaneously determine whether the second feature of each partial image meets the requirements for skin testing.
  • judging whether the second feature of the partial image meets the requirements for skin testing refers to judging whether the brightness, occlusion, and clarity of the partial image meet the requirements for skin testing.
  • the smart mirror can divide the face image into three partial images of the forehead partial image, the left face partial image and the right face partial image, and judge whether the brightness, occlusion, and clarity of each partial image meet the skin testing requirements.
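  • An illustrative split of a detected face region into the three partial regions named above (forehead, left face, right face); the split proportions are assumptions made for the sketch:

```python
# Split a face crop (H x W x 3 array) into forehead, left-face and right-face regions.
def split_face_regions(face_bgr):
    h, w = face_bgr.shape[:2]
    return {
        "forehead":   face_bgr[: h // 3, :],        # top third of the face
        "left_face":  face_bgr[h // 3 :, : w // 2], # lower part, left side
        "right_face": face_bgr[h // 3 :, w // 2 :], # lower part, right side
    }
# Each region can then be passed to the brightness/clarity checks sketched earlier.
```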
  • In some embodiments, the smart mirror judges whether the second feature of a partial image of the face image meets the requirements for skin testing only when the second feature of the whole face image does not meet the requirements for skin testing, that is, when the second feature of at least one partial image in the face image does not meet the skin-detection requirements.
  • the smart mirror uses the initial image as the second skin test image.
  • In this case, the smart mirror can use the initial image as a second skin-testing image, and the partial image in the second skin-testing image can be used for skin-testing analysis.
  • the second skin measurement image may correspond to a first label (also referred to as a partial label), where the first label is used to indicate a partial image used for skin measurement analysis in the second skin measurement image.
  • For example, if the first label corresponding to the second skin-measuring image is a forehead label, the forehead partial image of the face image in the second skin-measuring image meets the requirements for skin measurement, and the forehead partial image can be used for subsequent skin-measuring analysis.
  • in some embodiments, when the smart mirror obtains a second skin measurement image in which the first partial image meets the skin measurement requirements, it no longer needs to determine whether the first partial image of the face image in subsequent initial images meets the skin measurement requirements; it only needs to determine whether the other partial images of the face image in subsequent initial images, apart from the first partial image, meet the skin measurement requirements.
  • in other embodiments, when the smart mirror obtains a second skin measurement image in which the first partial image meets the skin measurement requirements, it can cache that second skin measurement image together with its corresponding first label. When a new first partial image that meets the skin measurement requirements is obtained, the smart mirror determines whether a cached first partial image already exists; if so, it compares the comprehensive score of the second feature of the cached first partial image with that of the new first partial image, and selects the second skin measurement image containing the higher-scoring first partial image as the second skin measurement image finally used for skin measurement analysis. In this way, since the quality of the first partial image in the second skin measurement image used for skin measurement analysis is higher, the accuracy of the subsequent skin measurement analysis can be improved.
  • in some embodiments, the skin measurement algorithm in the smart mirror can perform skin measurement analysis based on at most one partial image for each partial region.
  • for example, the second skin measurement images used by the smart mirror for skin measurement analysis include only the initial image in which the forehead partial image has the highest quality, the initial image in which the left face partial image has the highest quality, and the initial image in which the right face partial image has the highest quality.
  • in other embodiments, the smart mirror can cache, as second skin testing images, all initial images in which one or more partial images meet the skin testing requirements, and the skin testing algorithm in the smart mirror performs skin testing analysis based on these second skin testing images and their respective corresponding first labels.
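The "comprehensive score" used above to keep only the best qualifying partial image per region is not defined in detail; the sketch below assumes it is a weighted combination of a brightness term and a sharpness term. The weights, the target brightness, the scaling and the cache structure are all illustrative assumptions.

```python
import numpy as np


def comprehensive_score(region: np.ndarray,
                        target_brightness: float = 130.0,
                        w_brightness: float = 0.5,
                        w_sharpness: float = 0.5) -> float:
    """Assumed 'comprehensive score' of the second feature of a partial image.

    Higher is better: how close the mean brightness is to an assumed target,
    plus a capped sharpness term. Weights, target and scaling are illustrative.
    """
    gray = region.mean(axis=2) if region.ndim == 3 else region.astype(float)
    brightness_term = 1.0 - min(abs(gray.mean() - target_brightness) / 255.0, 1.0)
    gy, gx = np.gradient(gray)
    sharpness_term = min(np.hypot(gx, gy).var() / 100.0, 1.0)
    return w_brightness * brightness_term + w_sharpness * sharpness_term


class PartialImageCache:
    """Keep, for each partial region, only the best-scoring qualifying image."""

    def __init__(self):
        self.best = {}  # region name -> (score, partial image)

    def offer(self, region_name: str, crop: np.ndarray) -> None:
        """Cache the crop if it beats the previously cached one for this region."""
        score = comprehensive_score(crop)
        if region_name not in self.best or score > self.best[region_name][0]:
            self.best[region_name] = (score, crop)

    def collected_regions(self) -> set:
        """Which partial regions already have a usable cached image."""
        return set(self.best)
```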
  • the above steps 501 to 505 may all be completed in the background without presenting the processing procedure to the user.
  • in other embodiments, when the smart mirror executes the above steps 501 to 505, it can prompt the user with the content currently being executed. For example, in the process of screening the initial images, the smart mirror can present the currently executed step to the user through the screen display, voice prompts, or signal lights, for example, "the overall image of initial image 1 is acceptable", "the forehead partial image of initial image 2 is acceptable", and the like.
  • through the above steps 501 to 505, the smart mirror can automatically screen skin test images out of the N acquired initial images and then perform the subsequent skin test analysis based on them. Since the smart mirror screens the initial images automatically, the user does not need to initiate image screening. In other words, after automatically obtaining the initial images, the smart mirror continues, without the user's perception, to screen out the skin test images used for the subsequent skin test analysis, which reduces the interaction between the user and the smart mirror during the skin test process and improves the user experience.
  • after the smart mirror has screened out the skin test images, it can automatically perform skin test analysis based on them to obtain a skin test result, that is, execute the content of step 303 below.
  • the smart mirror performs skin measurement analysis based on the skin measurement image, and displays the skin measurement result.
  • after the smart mirror filters the skin test image out of the initial images, it can automatically perform skin test analysis based on the skin test image and display the skin test result.
  • in some embodiments, if a first skin test image exists, the smart mirror performs skin test analysis based on the first skin test image to obtain a skin test result.
  • in some embodiments, if no first skin measurement image exists but a second skin measurement image exists, the smart mirror performs skin measurement analysis based on the second skin measurement image to obtain a skin measurement result.
  • in other embodiments, the smart mirror performs skin measurement analysis based on the first skin measurement image and the second skin measurement image respectively, and uses the better of the two results as the final skin measurement result.
  • the smart mirror performs skin measurement analysis based on the skin measurement image, and displays the skin measurement result, including the following steps:
  • the smart mirror determines whether there is a skin test image.
  • the smart mirror can read the skin test images one by one. During this process, the smart mirror can determine whether a skin test image is present.
  • if the smart mirror determines that there is a skin test image, the smart mirror executes the following step 702.
  • if the smart mirror determines that there is no skin test image, it means that the smart mirror has not screened a suitable skin test image out of the initial images. For example, if the light in the user's environment is poor, so that neither the overall nor the partial brightness of the face image in the captured initial images meets the requirements, none of the initial images obtained this time can be used as a skin test image.
  • in the case where the smart mirror determines that there is no skin test image, the smart mirror can obtain a predicted skin test result based on the historical skin test results, and display that predicted skin test result.
  • the smart mirror may obtain predicted skin test results based on historical skin test results according to methods such as linear regression or history fitting.
  • the embodiments of the present application do not limit the method for the smart mirror to obtain the predicted skin test result based on the historical skin test result.
  • the historical skin test results may be stored in an internal memory or the cloud, which is not limited in this embodiment of the present application.
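As one possible realization of the history-based prediction mentioned above, the following sketch fits a least-squares line through the historical scores of a single detection item and extrapolates it. The use of `numpy.polyfit`, the day-based time axis and the fallback to the historical mean are assumptions for illustration; the application does not limit the prediction method.

```python
import numpy as np


def predict_from_history(timestamps, scores, now):
    """Predict a detection item's score from historical results (sketch).

    timestamps: past measurement times, e.g. days since the first record
    scores:     corresponding historical scores for one detection item
    now:        the time to predict for
    Falls back to the historical mean when fewer than two points exist.
    """
    t = np.asarray(timestamps, dtype=float)
    y = np.asarray(scores, dtype=float)
    if len(t) == 0:
        return None
    if len(t) < 2:
        return float(y.mean())
    slope, intercept = np.polyfit(t, y, deg=1)  # least-squares line fit
    return float(slope * now + intercept)


# Usage sketch: pore scores recorded on days 0, 2 and 5 -> prediction for day 7.
print(predict_from_history([0, 2, 5], [72.0, 74.0, 75.5], now=7))
```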
  • the smart mirror determines whether the skin measuring image is the first skin measuring image.
  • the smart mirror may determine whether the skin-testing image is the first skin-testing image according to whether the skin-testing image has a second label.
  • if the skin-testing image has a second label (that is, an overall label), the smart mirror determines that the skin-testing image is the first skin-testing image.
  • in other embodiments, if the skin-testing image has neither a second label nor any other label (for example, a first label), the smart mirror can also determine that the skin-testing image is the first skin-testing image.
  • if the smart mirror determines that the skin measuring image is the first skin measuring image, it obtains the skin measuring result of the face image based on that first skin measuring image.
  • if the smart mirror determines that the skin measurement image is the first skin measurement image, it means that the skin measurement result can be obtained based on the entire face image in the first skin measurement image.
  • in some embodiments, the smart mirror can extract the region of interest (ROI) of each detection item based on the face image in the first skin measurement image, and input the ROI of each detection item into a neural network model (such as a classification network) for skin detection.
  • FIG. 8 shows the ROI of each detection item.
  • as shown in (a) in FIG. 8, the ROI for pore and stain detection is generally the cheeks; as shown in (b) in FIG. 8, the ROI for fine line detection is generally the forehead and under the eyes; as shown in (c) in FIG. 8, the ROI for blackhead detection is generally the nose; as shown in (d) in FIG. 8, the ROI for eye bag and dark circle detection is generally around the eyes; and as shown in (e) in FIG. 8, the ROI for acne and gloss detection is generally the forehead, nose, cheeks and chin.
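The correspondence between detection items and ROIs illustrated in FIG. 8 can be encoded as a simple table, and the whole-face analysis of a first skin measurement image then iterates over it. In the sketch below, `extract_region` and `classify` are assumed helper functions (for example, a landmark-based cropper and a call into a classification network); their names and signatures are illustrative, and the region names only mirror the FIG. 8 example.

```python
# Detection item -> face regions forming its ROI (mirroring the FIG. 8 example).
DETECTION_ROIS = {
    "pores":        ("cheeks",),
    "stains":       ("cheeks",),
    "fine_lines":   ("forehead", "under_eyes"),
    "blackheads":   ("nose",),
    "eye_bags":     ("around_eyes",),
    "dark_circles": ("around_eyes",),
    "acne":         ("forehead", "nose", "cheeks", "chin"),
    "gloss":        ("forehead", "nose", "cheeks", "chin"),
}


def analyze_whole_face(face_image, extract_region, classify):
    """Run every detection item on a first skin measurement image (whole face usable).

    extract_region(face_image, region_name) -> ROI crop   (assumed helper)
    classify(item, crops)                   -> item score (assumed model call)
    """
    results = {}
    for item, regions in DETECTION_ROIS.items():
        crops = [extract_region(face_image, r) for r in regions]
        results[item] = classify(item, crops)
    return results
```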
  • if the smart mirror determines that the skin measurement image is not the first skin measurement image but the second skin measurement image, it obtains, based on the first label corresponding to the second skin measurement image, the local skin measurement result of the corresponding partial image, and then obtains the skin measurement result.
  • since the skin measurement images include the first skin measurement image and the second skin measurement image, if a skin measurement image exists and it is not the first skin measurement image, it follows that the skin measurement image is the second skin measurement image.
  • based on the first label corresponding to the second skin measurement image, the smart mirror can determine the partial image used for skin measurement analysis. For example, if the first label is a forehead label, it means that the forehead partial image of the face image in the second skin measurement image can be used for skin measurement analysis.
  • the smart mirror can extract the ROI in the corresponding partial image based on the first label. For example, if the first label is the forehead label, the smart mirror can extract the forehead ROIs for fine lines, acne, and glossiness. Then, the ROI in the partial image is input into a neural network model (e.g., a classification network) for skin detection, and a partial skin detection result is obtained. After that, once all skin test images have been read, the smart mirror can normalize all the local skin test results so as to obtain the final skin test result.
  • in other embodiments, the smart mirror can extract the ROI in the corresponding partial image based on the first label and, after reading all the skin test images, normalize all the extracted ROIs and input them into the neural network model (for example, a classification network) to perform skin detection and obtain the final skin detection result.
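For a second skin measurement image, the first label restricts which detection items can be evaluated. The sketch below assumes a mapping from each partial-image label to the face regions it covers, reuses an item-to-ROI table such as the `DETECTION_ROIS` example above, and merges the local results by averaging per detection item; the mapping, the merge rule and the helper functions `extract_region` and `classify` are all illustrative assumptions.

```python
# Face regions covered by each partial-image label; this mapping is an assumption
# used to decide which detection items a labeled partial image can serve.
REGIONS_IN_PARTIAL = {
    "forehead":   {"forehead"},
    "left_face":  {"cheeks", "nose", "chin", "under_eyes", "around_eyes"},
    "right_face": {"cheeks", "nose", "chin", "under_eyes", "around_eyes"},
}


def analyze_partial(partial_label, partial_image, extract_region, classify,
                    detection_rois):
    """Local skin test result for one labeled partial image (step 704 sketch)."""
    covered = REGIONS_IN_PARTIAL[partial_label]
    local = {}
    for item, regions in detection_rois.items():
        usable = [r for r in regions if r in covered]
        if usable:
            crops = [extract_region(partial_image, r) for r in usable]
            local[item] = classify(item, crops)
    return local


def merge_local_results(local_results):
    """Normalize all local results into one result by averaging each item's
    scores over the partial images that produced them."""
    merged = {}
    for local in local_results:
        for item, score in local.items():
            merged.setdefault(item, []).append(score)
    return {item: sum(scores) / len(scores) for item, scores in merged.items()}
```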
  • the method may further include: the smart mirror determines whether each partial image of the face image is collected.
  • if the smart mirror has collected every partial image of the face image, for example the forehead partial image, the left face partial image and the right face partial image, it means that the smart mirror can obtain the skin test result of the complete face image through these partial images. That is to say, when every partial image of the face image has been collected, the final skin measurement result can be regarded as the skin measurement result of the whole face image.
  • in some embodiments, if the smart mirror has not collected every partial image of the face image, the final skin measuring result is obtained based on the partial skin measuring results and the historical skin measuring results.
  • in this case, the skin test result of a partial region for which no suitable image was collected this time can be predicted from the historical skin test results. For example, if the smart mirror has only collected the partial skin test results of the left face partial image and the right face partial image, but not the partial skin test result of the forehead partial image, the smart mirror can predict the skin test result of the forehead partial image based on the historical skin test results, and then combine it with the partial skin test results of the left face partial image and the right face partial image to obtain the final skin test result.
  • the smart mirror may also correct the final skin test results based on historical skin test results.
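Assembling the final skin test result from this session's local results, the history-based predictions for the missing regions, and an optional history-based correction could look like the following sketch; the dictionary format and the blending weight are assumptions used only to illustrate the flow.

```python
def final_result(local_items, predicted_items, history_weight=0.0):
    """Assemble the final skin test result (sketch).

    local_items:     detection-item scores measured from this session's images
    predicted_items: scores predicted from historical results, used for items
                     that had no suitable image this time
    history_weight:  optional correction factor blending a measured score
                     toward the historical prediction (0 = no correction)
    """
    result = dict(predicted_items)            # start from history-based predictions
    for item, score in local_items.items():   # measured scores take precedence
        if item in predicted_items and history_weight > 0:
            score = (1 - history_weight) * score + history_weight * predicted_items[item]
        result[item] = score
    return result


# Usage sketch: no forehead image this time, so fine lines come from history,
# while the measured pore score is lightly corrected toward its historical value.
print(final_result({"pores": 74.0, "acne": 81.0},
                   {"fine_lines": 68.0, "pores": 72.0},
                   history_weight=0.2))
```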
  • the smart mirror displays the skin test result.
  • the smart mirror may display the skin test result, and at this time, the skin test result includes the current skin state of the user. For example, condition analysis of pores, breakouts, blackheads or fine lines.
  • the smart mirror can also compare and analyze the current skin test results and historical skin test results, and finally display the skin test results to the user.
  • the skin test results also include the comparative analysis results and skin care suggestions.
  • FIG. 9 shows an interface for displaying skin test results.
  • (a) and (b) in Figure 9 respectively show the user the state of the current user's skin gloss and dark circles, as well as the comparative analysis results of the gloss in this skin test result and the gloss in the historical skin test results.
  • (c) and (d) in FIG. 9 provide the user with skin care advice given based on the current skin condition.
  • the smart mirror can present the skin test result to the user through the display screen.
  • in other embodiments, after obtaining the skin test result, the smart mirror presents the skin test result to the user in response to an operation by the user that triggers the display of the skin test result. For example, after finishing daily skin care, the user can tap a specific area of the mirror to trigger the display of the skin test result.
  • the operation for triggering the display of the skin test result may be a touch operation, a click operation, a voice instruction, or a gesture instruction, etc., which is not limited in this embodiment of the present application.
  • the smart mirror may also present the skin test result to the user without using the display screen.
  • for example, the smart mirror may present the skin test result to the user through voice; or, the smart mirror may directly send the skin test result to a personal electronic device of the user, such as a mobile phone, for the user to view the skin test result.
  • the above steps 701 to 705 may all be completed in the background without presenting the processing procedure to the user.
  • in other embodiments, when the smart mirror executes the above steps 701 to 705, the user may be prompted with the content currently being executed.
  • the smart mirror can present the currently executed content to the user through screen display, voice prompts or signal lights. For example, "Analyzing Pores", “Analyzing Spots”, etc.
  • the smart mirror can perform skin test analysis based on the skin test image, and display the skin test result. Since the smart mirror automatically performs skin test analysis, the interaction process between the user and the smart mirror during the skin test process can be reduced, and the user experience can be improved.
  • FIG. 10 shows a scene diagram in an embodiment of the present application.
  • the user performs daily skin care activities during a preset time period, for example, from 8:00 to 10:00 in the morning.
  • the smart mirror can actively obtain the user's facial image, automatically screen and obtain the skin test image, and perform skin test analysis based on the skin test image.
  • the smart mirror presents the skin test results to the user.
  • the user only needs to carry out daily skin care according to his or her own habits, without actively initiating the skin test or adjusting posture to cooperate with the skin test.
  • the smart mirror can automatically complete the whole skin testing process without the user's perception.
  • the user does not need to actively initiate the skin measurement, nor does the user need to cooperate strictly during the skin measurement process, thereby reducing the interaction between the user and the smart mirror during the skin measurement. This makes it easier for users to obtain skin test results and improves the user experience.
  • since the smart mirror can periodically and intensively obtain the user's facial images and perform skin measurement analysis based on the user's daily habit of using the mirror (for example, the daily skin care time), the skin detection method of this solution can obtain denser skin test results, effectively track and monitor the user's skin condition, and better provide the user with skin care suggestions.
  • in addition, after the smart mirror automatically obtains multiple initial images, it can screen the initial images according to the screening method provided in this solution, and obtain the skin test result according to the skin test analysis method provided in this solution based on the screened skin test images and/or the historical skin test results. As a result, the accuracy and stability of the skin test results can be guaranteed.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software in conjunction with the algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 11 shows a possible schematic diagram of the composition of the electronic device 1100 involved in the above embodiment.
  • the electronic device 1100 may include: image acquisition unit 1101 , image screening unit 1102 , skin test analysis unit 1103 and result output unit 1104 .
  • the image acquisition unit 1101 may be used to support the electronic device 1100 to perform the above-mentioned steps 301, 401 to 405, etc., and/or other processes for the techniques described herein.
  • Image screening unit 1102 may be used to support electronic device 1100 in performing steps 302, 501 to 505, etc. described above, and/or other processes for the techniques described herein.
  • the skin test analysis unit 1103 may be used to support the electronic device 1100 to perform the above-described steps 303, steps 701 to 704, etc., and/or other processes for the techniques described herein.
  • the result output unit 1104 may be used to support the electronic device 1100 in performing steps 705, etc. described above, and/or other processes for the techniques described herein.
  • the image acquisition unit 1101 may correspond to an image acquisition module (or a non-perceptual skin-test start-stop control and image acquisition module); the image screening unit 1102 may correspond to an image screening module; the skin test analysis unit 1103 may correspond to a single/multiple-image skin measurement algorithm analysis module; and the result output unit 1104 may correspond to a result output module.
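The composition of the electronic device 1100 can be pictured as a simple pipeline over the four units. The sketch below is illustrative only: the class and method names are assumptions, and each unit is represented by a pluggable callable standing in for the corresponding module.

```python
class SkinDetectionDevice:
    """Sketch of the four functional units of the electronic device 1100.

    Each unit is represented by a pluggable callable; in a real device these
    would be the hardware/software modules described above, and the names used
    here are illustrative assumptions.
    """

    def __init__(self, acquire_images, screen_images, analyze_skin, output_result):
        self.image_acquisition_unit = acquire_images  # unit 1101: steps 301, 401-405
        self.image_screening_unit = screen_images     # unit 1102: steps 302, 501-505
        self.skin_analysis_unit = analyze_skin        # unit 1103: steps 303, 701-704
        self.result_output_unit = output_result       # unit 1104: step 705

    def run_once(self):
        """One no-perception skin test pass: acquire, screen, analyze, output."""
        initial_images = self.image_acquisition_unit()
        skin_test_images = self.image_screening_unit(initial_images)
        result = self.skin_analysis_unit(skin_test_images)
        self.result_output_unit(result)
        return result
```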
  • the electronic device 1100 provided in this embodiment is used to execute the above-mentioned skin detection method, and thus can achieve the same effect as the above-mentioned implementation method.
  • the electronic device 1100 may include a processing module, a storage module, and a communication module.
  • the processing module can be used to control and manage the actions of the electronic device 1100. For example, it can be used to support the electronic device 1100 to execute the above image acquisition unit 1101, image screening unit 1102, skin test analysis unit 1103, and result output unit 1104. step.
  • the storage module may be used to support the electronic device 1100 to store program codes, data, and the like.
  • the communication module can be used to support the communication between the electronic device 1100 and other devices, for example, the communication with the wireless access device.
  • the processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and the like.
  • the storage module may be a memory.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, and a Wi-Fi chip.
  • Embodiments of the present application further provide an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors and are used to store computer program code, where the computer program code includes computer instructions that, when executed by the one or more processors, cause the electronic device to perform the above related method steps to implement the skin detection method in the above embodiments.
  • Embodiments of the present application further provide a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium, and when the computer instructions are run on an electronic device, the electronic device executes the above related method steps to implement the skin detection method in the above embodiments.
  • Embodiments of the present application also provide a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the above-mentioned relevant steps, so as to realize the skin detection method executed by the electronic device in the above-mentioned embodiment.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module, and the apparatus may include a connected processor and a memory; wherein, the memory is used for storing computer execution instructions, and when the apparatus is running, The processor can execute the computer-executed instructions stored in the memory, so that the chip executes the skin detection method executed by the electronic device in each of the above method embodiments.
  • the electronic device, computer-readable storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding method provided above. Therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed to multiple different places . Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Dermatology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

本申请提供一种皮肤检测方法和电子设备,涉及电子技术领域,电子设备能够自动获取用户的初始图像,自动从中筛选出测肤图像,进而基于测肤图像进行测肤分析,从而在保证测肤结果的准确性和稳定性的同时,减少用户与电子设备的交互,提升用户体验。具体方案为:电子设备自动获取用户的N张初始图像(301),从N张初始图像中筛选出测肤图像(302),基于测肤图像,显示测肤结果(303),其中,初始图像包括人脸图像,测肤图像是其中人脸图像的特征满足测肤条件的初始图像。本申请用于皮肤检测的过程。

Description

一种皮肤检测方法和电子设备
本申请要求于2021年03月02日提交国家知识产权局、申请号为202110231643.7、申请名称为“一种皮肤检测方法和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及电子技术领域,尤其涉及一种皮肤检测方法和电子设备。
背景技术
爱美之心,人皆有之。随着生活水平的提高,越来越多的人愿意为了美投入更多的时间和金钱。其中,皮肤检测技术是能够满足人们对美的需求的一种技术。皮肤检测技术是指,通过使用传感器采集面部数据或者拍摄用户的面部图像,进而根据面部数据或面部图像进行测肤分析,获得关于皮肤各项特征的检测结果的一种技术。利用皮肤检测技术可以获得例如面部毛孔、色斑、痘痘、细纹等皮肤特征的检测结果,并能够基于皮肤检测结果向用户提供相应的护肤建议,从而帮助用户更好地护肤,改善皮肤状态。
目前,常用的皮肤检测方法是通过电子设备,例如手机采集用户的面部图像,进而根据电子设备中的预置算法对采集的面部图像进行分析,得到测肤结果。在采集用户的面部图像的过程中,为了获得高质量的面部图像,用户需要在采集面部图像的过程中严格配合,例如,使得用户的面部与电子设备的摄像头在合适范围的位置、距离或角度下,同时还可能需要保证周围的光线条件符合预设要求。甚至,在采集的面部图像不符合预置算法的情况下,还需要用户配合,重新拍照。因此,现有皮肤检测的过程用户交互流程复杂,从而用户体验较差。
发明内容
本申请实施例提供一种皮肤检测方法和电子设备,电子设备能够在预设时间段内主动获取包括用户的人脸图像的多张初始图像,从多张初始图像中筛选出测肤图像,基于测肤图像进行测肤分析,进而显示测肤结果。在本方案中,电子设备可以在用户无感知的情况下获取初始图像,从中筛选合适的测肤图像并进行测肤分析,整个测肤过程不需要用户的严格配合,能够在确保测肤结果的准确性和稳定性的同时,能够减少测肤过程中用户与电子设备之间的交互,使得用户更方便地获得测肤结果,提升用户体验。
为达到上述目的,本申请实施例采用如下技术方案:
一方面,本申请实施例提供了一种皮肤检测方法。该方法包括:电子设备自动获取用户的N张初始图像,N为大于1的整数,初始图像包括人脸图像;从N张初始图像中筛选出测肤图像,测肤图像是其中人脸图像的特征满足测肤条件的初始图像,人脸图像的特征包括:俯仰姿态,角度,亮度,遮挡,在初始图像中的占比,表情,清晰度中的一个或多个;基于测肤图像,显示测肤结果。
在该方案中,电子设备能够主动获取初始图像,从中筛选出符合测肤条件的测肤图像,进而基于测肤图像进行测肤分析。在该测肤方法中,用户无需主动发起测肤,在测肤过程中也无需严格配合,从而能够减少测肤过程中用户与电子设备之间的交互,使得用户更方便地获得测肤结果,提升用户体验。同时,电子设备主动获取初始图像,筛选出测肤图像并进行册府分析,能够获取到更密集的测肤结果,从而有效跟踪监测用户的皮肤状态,更好地向用户提供护肤建议。
在一种可能的设计中,测肤图像包括第一测肤图像和第二测肤图像;从N张初始图像中筛选出测肤图像,包括:针对N张初始图像中的每张初始图像:若初始图像中人脸图像的第一特征符合测肤要求,且初始图像中人脸图像的第二特征符合测肤要求,则将初始图像作为第一测肤图像;若初始图像中人脸图像的第一特征符合测肤要求,且初始图像中人脸图像的第一局部图像的第二特征符合测肤要求,初始图像中人脸图像的第二局部图像的第二特征不符合测肤要求,则将初始图像作为第二测肤图像;第一局部图像和第二局部图像为初始图像中不同的局部图像;第一特征包括:俯仰姿态,角度,表情,在初始图像中的占比中的一个或多个;第二特征包括:亮度,遮挡,清晰度中的一个或多个。
在该方案中,测肤图像可以包括第一测肤图像和第二测肤图像。其中,第一测肤图像是其中人脸图像整体符合测肤要求的初始图像,第二测肤图像是其中人脸图像中的局部图像符合测肤要求的初始图像。
在另一种可能的设计中,若测肤图像包括第二测肤图像,第二测肤图像对应第一标签,第一标签用于指示第二测肤图像中第一局部图像用于测肤分析;基于测肤图像,显示测肤结果,包括:基于第二测肤图像对应的第一标签,获取第一局部图像的局部测肤结果;基于局部测肤结果,显示测肤结果。
在该方案中,若测肤图像是第二测肤图像,则电子设备基于符合测肤要求的局部图像来获得测肤结果。
在另一种可能的设计中,基于局部测肤结果,显示测肤结果,包括:若局部测肤结果包括人脸图像中全部局部图像的局部测肤结果,则基于局部测肤结果,显示测肤结果;若局部测肤结果包括人脸图像中部分局部图像的局部测肤结果,则基于局部测肤结果和历史测肤结果,显示测肤结果。
这样,在未集齐所有局部图像的局部测肤结果的情况下,电子设备可以结合历史测肤结果,显示测肤结果。
在另一种可能的设计中,若测肤图像包括第一测肤图像,第一测肤图像对应第二标签,第二标签用于指示第一测肤图像中人脸图像整体用于测肤分析;基于测肤图像,显示测肤结果,包括:基于第一测肤图像对应的第二标签,获取第一测肤图像的人脸图像的测肤结果;显示测肤结果。
在该方案中,若测肤图像是第一测肤图像,则电子设备基于人脸图像整体来获得测肤结果。
在另一种可能的设计中,方法还包括:若从N张初始图像中未筛选出测肤图像,则基于历史测肤结果,显示测肤结果。
在该方案中,若不存在符合测肤要求的测肤图像,则可以基于历史测肤结果,预 测本次测肤结果。
在另一种可能的设计中,自动获取用户的N张初始图像,包括:在预设时间段内开启摄像头并采集预览图像;在预览图像中检测到人脸图像时,采用摄像头拍摄获取N张初始图像。
这样,电子设备可以在预设时间段内主动拍摄初始图像,从而减少用户与电子设备之间的交互,提升用户体验。
在另一种可能的设计中,在预览图像中检测到人脸图像时,拍摄获取N张初始图像,包括:在预览图像中检测到人脸图像时,根据人脸图像中眼睛的位置,调整摄像头的对焦点,采用调整对焦点后的摄像头拍摄获取N张初始图像。
在另一种可能的设计中,该方法还包括:若在预设时间段内已经获得测肤结果,则不开启摄像头。
这样,电子设备不需多次重复进行皮肤检测,能够节省算力和存储空间。
在另一种可能的设计中,电子设备为智能镜子。
由于用户通常每天都使用镜子来进行日常护肤等活动,因此,电子设备可以是智能镜子,从而更好地利用用户的生活习惯,密集地获取测肤结果,更有效地检测用户的皮肤状态,给出更好的护肤建议。
在另一种可能的设计中,在皮肤检测过程中,方法还包括:输出提示信息,提示信息用于向用户提示测肤过程。
这样,用户可以直观地了解目前皮肤检测进行的阶段。
另一方面,本申请实施例提供了一种电子设备,包括:摄像头;一个或多个处理器;存储器;以及一个或多个计算机程序,其中一个或多个计算机程序被存储在存储器中,一个或多个计算机程序包括指令,当指令被电子设备执行时,使得电子设备执行本申请实施例提供的皮肤检测方法。
又一方面,本申请实施例提供了一种计算机可读存储介质,包括计算机指令,当计算机指令在计算机上运行时,使得计算机执行本申请实施例提供的皮肤检测方法。
又一方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行本申请实施例提供的皮肤检测方法。
上述其他方面对应的有益效果,可以参见关于方法方面的有益效果的描述,此处不予赘述。
附图说明
图1为本申请实施例提供的一种电子设备的结构示意图;
图2为本申请实施例提供的一种智能镜子的结构示意图;
图3为本申请实施例提供的一种皮肤检测方法的流程图;
图4为本申请实施例提供的一种图像获取过程的流程图;
图5为本申请实施例提供的一种图像筛选过程的流程图;
图6为本申请实施例提供的一种人脸图像的局部图像的示意图;
图7为本申请实施例提供的一种测肤分析过程的流程图;
图8为本申请实施例提供的一种皮肤检测的感兴趣区域的示意图;
图9为本申请实施例提供的一种界面图;
图10为本申请实施例提供的一种场景图;
图11为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
随着人们对美的需求的日益增长,越来越多的人定期进行皮肤检测,通过皮肤检测提供的量化的测肤结果,以更好地护理皮肤。因此,用户对皮肤检测技术的需求越来越强。
目前,用户可以通过多种方式进行皮肤检测。例如,用户可以通过专业的皮肤检测仪,或者可以通过手机中的测肤应用(APP)进行皮肤检测。
在一种现有技术中,用户可以通过专业皮肤检测仪(如visia皮肤检测仪)进行皮肤检测。在使用皮肤检测仪检测皮肤时,用户需要将头部放在固定的托座上,保持面部正面姿势、无遮挡、稳定不移动,然后皮肤检测仪按照预设的程序,先后开启不同的光照模式,并依次在不同光照模式下拍摄用户的面部图像。进而,皮肤检测仪采用预置的算法对拍摄的面部图像进行分析,在皮肤检测仪连接的电脑上显示皮肤检测结果。
在另一种现有技术中,用户还可以通过手机或智能镜子进行皮肤检测。以用户通过手机拍照进行皮肤检测为例,手机通常是通过所安装的测肤APP(如华为“爱肌肤”APP)进行皮肤检测的。在这种情况下,用户需要按照测肤APP的指令执行拍照过程,例如,根据APP语音、文字或图标等指令移动手机或面部,使得用户面部在手机摄像头的合适位置、距离、角度,并且光照合适。此时,手机采集用户的面部图像,采用测肤APP中预置的算法对拍摄的面部图像进行分析,显示皮肤检测结果。
在上述现有技术中,不论是使用皮肤检测仪还是使用手机或智能镜子采集用户的面部图像来进行皮肤检测时,均存在以下问题:
一方面,为了获得较高质量的面部图像,用户需要在整个测肤过程中严格配合。例如,用户需要将头部固定在合适的姿态、距离、光线等条件下;或者,用户需要根据APP提示移动手机或面部。此外,若拍摄的图像质量不满足预置的算法的要求时,用户还需要配合重新采集面部图像。在皮肤检测过程中,用户与皮肤检测仪或手机等电子设备的交互流程复杂繁琐,用户体验较差。
另一方面,在整个测肤过程中,需要用户主动发起测肤流程。也就是说,在现有技术的测肤过程中,只有在用户发起测肤流程,例如打开测肤APP或者开启皮肤检测仪的测肤模式时,电子设备才可以开始进行皮肤检测。这样,若用户发起测肤的次数 较少,则由于测肤结果较少,测肤周期较久,因此无法实现有效的皮肤状态跟踪监测,进而无法向用户提供完善的皮肤护理建议。
本申请实施例提供了一种皮肤检测方法,可以应用于电子设备,能够在不需要用户严格配合和主动发起的情况下,在预设时间段内主动获取包括用户的人脸图像的多张初始图像,从多张初始图像中主动筛选出测肤图像,基于测肤图像进行测肤分析,进而显示测肤结果。由于整个测肤过程不需要用户的主动发起和严格配合,因此能够在用户无感知的情况下进行测肤过程,从而提升用户体验。
此外,电子设备在获取到多张初始图像后,能够主动从中筛选出符合测肤要求的测肤图像,基于测肤图像和/或历史测肤结果,显示最终的测肤结果和护肤建议,从而确保了测肤结果的准确性和稳定性。
例如,本申请实施例中的电子设备可以是智能镜子、手机、平板电脑、可穿戴设备(例如智能手表)、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或个人数字助理(personal digital assistant,PDA)等移动终端,也可以是专业的相机等设备,本申请实施例对电子设备的具体类型不作任何限制。
示例性的,图1示出了电子设备100的一种结构示意图。电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),中央处理器(central processing unit,CPU)调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal  asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。在本申请的实施例中,显示屏194可以用于显示提示信息和测肤结果。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及 应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
内部存储器121可以用于存储计算机可执行程序代码,可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,历史测肤结果等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
在本申请的实施例中,处理器110通过运行存储在内部存储器121的指令,可以从采集的初始图像中筛选出测肤图像,基于测肤图像和/或历史测肤结果,向用户显示最终的测肤结果。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测电子设备100是否有遮挡,如果有遮挡,则可以进一步确定遮挡是否为用户的面部。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称触控屏。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。 在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
在本申请的实施例中,摄像头193可以在预设时间段内采集多张初始图像;处理器110通过运行存储在内部存储器121的指令,从采集的多张初始图像中筛选出符合测肤要求的测肤图像,进而基于测肤图像和存储的历史测肤结果,进行测肤信息。最终显示屏194显示得到的测肤结果和护肤建议。
图2示出了一种具有如图1所示结构的智能镜子的结构示意图。由于用户通常早晚都会使用镜子进行护肤等活动,因此智能镜子能够在用户进行护肤等日常活动时采集用户的面部图像。
如图2所示,智能镜子200可以包括摄像头201、镜面202、显示屏203、接近光传感器204、支架205、开关按钮206、底座207和LED环形光带208等。
其中,摄像头201可以是图1中的摄像头193,显示屏203可以是图1中的显示屏194,接近光传感器204可以是图1中的接近光传感器180G。
镜面202可以是由平面玻璃、镜面不锈钢板、铝等材质制成,用于显示镜面202前方物体或用户的图像。本申请实施例对镜面202的材质和形状不作限定。
显示屏203可以嵌在镜面202的任意位置,也可以设置在镜面202之外。若显示屏203嵌在镜面202中,则通常情况下显示屏203可以不显示任何信息,作为镜面202的一部分,方便用户查看镜面202中的人脸图像。显示屏203可以响应于用户的指示或者在测肤结束之后,再显示获得的测肤结果。
接近光传感器204可以设置在镜面202上,或者设置在镜面202之外,本申请实施例对此不作限定。
开关按钮206可以是物理按钮,也可以是触摸按钮,本申请对于开关按钮206的具体位置不作限定。
支架205和底座207用于支撑镜面202,本申请实施例中对支架205和底座206的形状和材质不作限定。在一些实施例中,支架205和底座206也可以折叠。
LED环形光带208围绕在镜面202外侧,用于补充光线,方便用户在梳妆或护肤过程中更好地查看镜面中呈现的人脸图像。
在一些实施例中,开关按钮206可以用来控制LED环形光带的开启或关闭。
在另一些实施例中,智能镜子也可以不包括LED环形光带208。
在一些实施例中,智能镜子中可以不包括开关按钮206,智能镜子响应于用户的语音指示执行开关按钮的功能。
可以理解的是,本申请实施例提供的智能镜子200基于图1所示的电子设备100的结构。本申请实施例示意的结构并不构成对智能镜子200的具体限定。
以下将以具有上述结构的智能镜子为例,对本申请实施例提供的皮肤检测方法进行阐述。如图3所示,该方法可以包括:
301、智能镜子自动获取用户的N张初始图像。
在本申请实施例中,智能镜子可以在预设时间段内开启摄像头,进而自动获取用户的N张初始图像。在该过程中,智能镜子获取用户的初始图像时不需要用户主动发起并严格配合,用户坐在镜子前开始日常护肤或梳妆时,智能镜子就可以自动通过摄像头获取用户的初始图像,使得在用户无感知的情况下获取初始图像,并进行后续的测肤分析。
其中,本申请实施例中的“无感知”指的是,不需要用户主动发起测肤并在测肤过程中严格配合,用户可以在开展日常活动(例如日常护肤)中,使得智能镜子便捷地获得用户的N张初始图像。
在本申请实施例中,N为大于1的整数。例如,N可以为10。
在一些实施例中,智能镜子可以预先设置N。在另一些实施例中,用户可以根据需求自行设置N。本申请实施例对此不作限定。
在本申请实施例中,如图4所示,智能镜子自动获取用户的N张初始图像,包括以下步骤:
401、智能镜子获取预设时间段和当前的系统时间,并确定当前的系统时间是否在预设时间内。
在一些实施例中,用户可以根据自己的日常护肤、梳妆等使用镜子的习惯来设置预设时间段。例如,若用户通常在早上8点-10点和晚上8点-10点之间进行护肤,则可以预先将预设时间段设置为8:00-10:00和20:00-22:00。这样,智能镜子在用户设置的预设时间段内自动获取用户的面部图像并进行皮肤检测。
在另一些实施例中,用户也可以根据自己的日常护肤、梳妆等使用镜子的习惯来设置检测周期。例如,若用户通常早晚分别进行一次护肤,则可以将检测周期设置为每12小时检测一次。这样,智能镜子可以根据设置的检测周期,周期性地获取用户的面部图像并进行皮肤检测。
本申请实施例对预设时间段或检测周期的具体时长不作限定,用户可以根据需求设置或调整。
在本申请实施例中,智能镜子还可以根据自身的时钟获取当前的系统时间,基于当前的系统时间,来确定当前是否应该获取初始图像。
在一些实施方式中,智能镜子周期性地获取当前的系统时间。
在智能镜子获取到预设时间段和当前的系统时间后,智能镜子可以通过比较当前的系统时间和设置的预设时间段,确定当前的系统时间是否在预设时间段内。
若当前的系统时间在预设时间段内,则智能镜子执行以下步骤402。
若当前的系统时间不在预设时间段内,则智能镜子可以不开启摄像头。由于智能镜子可以周期性地获取当前的系统时间,因此在一些实施例中,智能镜子可以在后面获取的当前的系统时间在预设时间段内时,执行以下步骤402。
402、智能镜子确定是否已经存在预设时间段内的测肤结果。
在智能镜子确定当前的系统时间在预设时间段内之后,智能镜子可以进一步确定是否已经存在预设时间段内的测肤结果。也可以认为,智能镜子可以确定在预设时间段内是否已经进行过皮肤检测。
若不存在预设时间段内的测肤结果,则智能镜子执行以下步骤403,进行后续流程。
若存在预设时间段内的测肤结果,则智能镜子不再重复进行皮肤检测。也就是说,智能镜子不需要开启摄像头。
可以认为,在预设时间段内,智能镜子最多进行一次皮肤检测,从而避免预设时间段内重复多次进行皮肤检测,以节省算力和存储空间。
403、智能镜子在确定存在遮挡时,开启摄像头,采集预览图像,并确定预览图像中是否存在人脸图像。
在智能镜子确定在预设时间段内还未进行过皮肤检测时,智能镜子可以进一步确定是否存在遮挡。
在一些实施例中,智能镜子通过接近光传感器来确定是否存在遮挡。例如,智能镜子通过读取接近光传感器的数据来判断1s内是否存在遮挡。
在本申请实施例中,若智能镜子未检测到遮挡,则不需要开启摄像头,流程结束。
若智能镜子检测到遮挡,则表示在预设时间段内,智能镜子前存在人或物体。此时,智能镜子可以开启摄像头采集预览图像,进而可以基于采集的预览图像判断是否存在人脸图像。
在一些实施例中,智能镜子可以通过已有的用于人脸检测的神经网络模型,来检测预览图像中是否存在人脸图像。本申请实施例对人脸检测的神经网络模型不作限定。
在一些实施例中,智能镜子可以还基于预览图像,确定用户是否在智能镜子的预设距离内。例如,智能镜子可以根据预览图像中人脸图像所占的比例,确定人脸对应的用户距离智能镜子的距离。
其中,若用户与智能镜子的距离超过预设距离,则说明预览图像中的人脸图像可能过小,不适合进行后续的测肤分析。
在另一些实施例中,若不存在预设时间段内的测肤结果,则智能镜子可以直接开启摄像头,采集预览图像,无需向用户提示。也就是说,智能镜子可以在预设时间段内主动采集预览图像。
404、若预览图像中存在人脸图像,则智能镜子调整摄像头的对焦点,拍摄N张初始图像。
若智能镜子在预览图像中检测到人脸图像,则根据人脸图像中眼睛的位置,调整摄像头的对焦点,并采用调整对焦点后的摄像头拍摄N张初始图像。
其中,由于在预览图像中存在人脸图像时拍摄初始图像,因此,初始图像中包括人脸图像。
在一些实施例中,若智能镜子确定用户与智能镜子之间的距离在预设距离内,则智能镜子调整摄像头的对焦点,拍摄初始图像。这样,能够确保用户距离智能镜子较近时才拍摄初始图像,避免在用户与智能镜子之间的距离较远时拍摄初始图像,而导致的初始图像中人脸过小,无法进行后续测肤处理的问题。
在一些实施例中,用户可以根据需求设置拍照时长。例如,拍照时长可以是5s,若拍照完毕后,超过拍照时长,则关闭摄像头;若未超过拍照时长,则继续确定是否已经拍摄了N张初始图像。若已经拍摄了N张初始图像,则关闭摄像头;若未拍摄完 成N张初始图像,则智能镜子重新采集预览图像,确定该预览图像中是否存在人脸图像。
在一些实施例中,智能镜子拍摄完N张初始图像后,关闭摄像头。
405、若预览图像中不存在人脸图像,并且在预设时间内持续未检测到人脸图像,则关闭摄像头。
若智能镜子在预览图像中未检测到人脸图像,则可以说明遮挡并不是用户面对镜子护肤或梳妆等造成的。
在本申请实施例中,若智能镜子的摄像头按照预设频率,在预设时间内采集的预览图像持续均为检测到人脸图像,则说明用户目前并没有面对镜子进行护肤或梳妆等活动,智能镜子关闭摄像头。
在一些实施例中,上述步骤401-步骤405可以全部在后台完成,不需要向用户呈现处理过程。
在另一些实施例中,智能镜子在执行上述步骤401-步骤405时,可以向用户提示当前执行的内容。例如,智能镜子在获取初始图像的过程中,可以通过屏幕显示、语音提示或信号灯等方式,向用户呈现当前执行的内容。例如,“检测到人脸图像”、“已拍摄第一张图像”或“已拍摄完N张图像”等。
通过上述步骤401-步骤405,智能镜子可以获取到N张包括人脸图像的初始图像。智能镜子可以在用户正常的护肤或梳妆等面对镜子的日常活动中,自动采集初始图像。在采集初始图像的过程中,用户不需要主动发起采集图像的流程,也不需要调整坐姿、光线、方向来配合智能镜子采集初始图像,也就是说,智能镜子可以在用户无感知的情况下,采集初始图像,从而能够便捷地采集初始图像,提升用户体验。
智能镜子在获取到用户的N张初始图像之后,可以进行图像筛选,执行下面步骤302的内容。
302、智能镜子从N张初始图像中筛选出测肤图像,测肤图像是其中人脸图像的特征满足测肤条件的初始图像。
智能镜子获取到用户的N张初始图像之后,可以从中筛选出符合测肤条件的初始图像,作为测肤图像,进而根据测肤图像进行测肤分析。
在本申请实施例中,智能镜子自动从N张初始图像中筛选测肤图像。其中,测肤图像可以包括第一测肤图像或第二测肤图像。第一测肤图像中人脸图像的整体用于测肤分析,第二测肤图像中人脸图像中的局部图像用于测肤分析。
在本申请实施例中,示例性的,如图5所示,智能镜子从N张初始图像中筛选出测肤图像,可以包括以下步骤:
501、针对N张初始图像中的每一张初始图像,智能镜子判断每一张初始图像中的人脸图像的第一特征是否符合测肤要求。
在一些实施例中,智能镜子可以将获取到的初始图像缓存起来,在获取到预设的N张初始图像之后,智能镜子可以逐张读取初始图像,判断每一张初始图像中的人脸图像的第一特征是否符合测肤要求。
在另一些实施例中,智能镜子也可以在获取到每一张初始图像的时候,直接判断获取到的该张初始图像中人脸图像的第一特征是否符合测肤要求。
若智能镜子确定一张初始图像中的人脸图像的第一特征符合测肤要求,则智能镜子执行以下步骤502。
若智能镜子确定一张初始图像中的人脸图像的第一特征不符合测肤要求,则流程结束。在一些实施例中,智能镜子也可以记录该初始图像不符合测肤要求。
在本申请实施例中,第一特征包括:俯仰姿态、角度、表情、在初始图像中的占比等特征中的一个或多个。
其中,俯仰姿态指的是用户低头或抬头的状态。可以认为,判断俯仰姿态是否符合测肤要求指的是,判断用户低头或抬头的角度是否在合适范围内。
在一些实施例中,俯仰姿态可以根据人脸图像中的三庭五眼的比例来判断。
其中,角度指的是用户面部左右倾斜的角度。可以认为,判断角度是否符合测肤要求指的是,判断用户头部左倾或右倾的角度是否在合适范围内。
在一些实施例中,角度可以根据人脸图像的左右对称性来判断。
其中,表情指的是人脸图像中用户的表情。可以理解的是,若用户面部存在一些表情,则人脸图像可能无法反映用户皮肤的真实状态,例如,用户皱眉的时候,人脸图像中的细纹增多,从而干扰后续的测肤分析。
在一些实施例中,表情可以根据神经网络模型来判断。
其中,在初始图像中的占比指的是人脸图像在初始图像中的大小。可以认为,判断在初始图像中的占比是否符合测肤要求指的是,判断用户与智能镜子之间的距离是否在合适范围内。
在一些实施例中,根据人脸图像在初始图像中的占比,可以计算人脸图像在初始图像中的大小,进而可以得到用户与智能镜子之间的距离。
在一些实施例中,若第一特征包括多项特征,则智能镜子可以依次判断每项特征是否符合测肤要求。例如,第一特征包括俯仰姿态、角度、表情和在初始图像中的占比,则智能镜子可以依次判断每一项特征是否符合测肤要求,所有特征均符合测肤要求,进行下一步;一旦有一项特征不符合测肤要求,则说明该初始图像不符合测肤要求,停止判断后续特征是否符合测肤要求,直接筛除该初始图像。
需要说明的是,本申请实施例对智能镜子依次判断每一项特征是否符合测肤要求的次序不作限定。
在另一些实施例中,若第一特征包括多项特征,则智能镜子也可以同时判断多项特征是否符合测肤要求。一旦多项特征中有一项不符合测肤要求,则筛除该初始图像。
可以理解的是,第一特征中包括的特征越多,则基于筛选所得的测肤图像得到的测肤结果越准确。
502、智能镜子判断初始图像中人脸图像的第二特征是否符合测肤要求。
在初始图像中人脸图像的第一特征符合测肤要求的情况下,智能镜子可以继续判断初始图像中人脸图像的第二特征是否符合测肤要求。
需要说明的是,初始图像中人脸图像的第二特征,指的是初始图像中人脸图像整体的第二特征。也就是说,人脸图像的第二特征表示整个人脸图像的第二特征。
在本申请实施例中,第二特征包括亮度、遮挡、清晰度等特征中的一个或多个。
其中,判断初始图像中人脸图像的第二特征是否符合测肤要求指的是,判断人脸 图像整体的亮度、遮挡、清晰度等特征是否符合测肤要求。
在一些实施例中,亮度可以根据人脸图像的灰度确定。其中,灰度值越大,亮度越大。
在一些实施例中,遮挡可以根据已有的模型来确定,此处不再赘述。
在一些实施例中,清晰度可以根据人脸图像的锐度确定。其中,锐度值越大,清晰度越高。
在一些实施例中,若第二特征中包括多项特征,则智能镜子可以依次判断每项特征是否符合测肤要求。
在另一些实施例中,若第二特征中包括多项特征,则智能镜子可以同时判断每项特征是否符合测肤要求。
本申请实施例对此不作限定,对依次判断每项特征是否符合要求的次序也不作限定。
503、在初始图像中人脸图像的第二特征符合测肤要求时,智能镜子将该初始图像作为第一测肤图像。
若初始图像中人脸图像的第二特征符合测肤要求,则说明人脸图像整体符合测肤要求。可以认为,该初始图像中的人脸图像整体可以用于后续的测肤分析。
在本申请实施例中,初始图像中人脸图像的第一特征符合测肤要求,且第二特征符合测肤要求,则可以将该初始图像作为第一测肤图像,用于后续测肤分析。
在一些实施例中,智能镜子得到第一张第一测肤图像时,直接采用该第一测肤图像进行后续测肤分析,不再判断其他初始图像是否符合测肤要求。这种情况下,能够提高皮肤检测的速度。
在另一些实施例中,智能镜子可以将符合测肤要求的第一测肤图像缓存,在得到新的第一测肤图像时,判断是否存在已有的第一测肤图像,若存在,则智能镜子可以根据缓存的第一测肤图像中人脸图像的第二特征的综合得分,以及新的第一测肤图像中人脸图像的第二特征的综合得分,选择得分较高的图像作为用于测肤分析的第一测肤图像。这样,由于第一测肤图像是筛选出来的质量最高的图像,因此能够提高后续测肤分析的准确性。
在一些实施例中,智能镜子中的测肤算法可以仅基于一张第一测肤图像来进行测肤分析。例如,这一张第一测肤图像可以是上述第一张第一测肤图像,或者质量最高的第一测肤图像。
在另一些实施例中,智能镜子可以将所有符合测肤要求的第一测肤图像均缓存起来,并且智能镜子中的测肤算法基于缓存的所有第一测肤图像进行测肤分析。也就是说,第一测肤图像也可以是多张。
在一些实施例中,第一测肤图像还可以对应第二标签(也可以称为整体标签),其中,第二标签用于指示第一测肤图像中人脸图像整体用于测肤分析。
504、在初始图像中人脸图像的第二特征不符合测肤要求时,针对初始图像中人脸图像的局部图像,智能镜子判断局部图像的第二特征是否符合测肤要求。
若初始图像中人脸图像的第二特征不符合测肤要求,也就是说,人脸图像整体的第二特征不符合测肤要求,则智能镜子可以判断人脸图像的每一个局部图像的第二特 征是否符合测肤要求。
若初始图像中人脸图像的某一个局部图像的第二特征符合测肤要求,则智能镜子执行以下步骤505。
若初始图像中人脸图像的某一个局部图像的第二特征不符合测肤要求,则流程结束。在一些实施例中,智能镜子也可以记录该初始图像不符合测肤要求。
在本申请实施例中,如图6所示,可以将人脸图像分割为三个区域,即额头区域、左脸区域和右脸区域,每个区域对应一个局部图像。也就是说,人脸图像可以被分割为额头局部图像、左脸局部图像和右脸局部图像。智能镜子可以判断上述三个局部图像中各局部图像的第二特征是否符合测肤要求。
可以理解的是,将人脸图像分割为上述三个局部图像仅为一种示例,本申请实施例对人脸图像的局部图像的分割方式和局部图像的数量不作限定。
在一些实施例中,智能镜子可以依次判断各局部图像的第二特征是否符合测肤要求。
在另一些实施例中,智能镜子也可以同时判断各局部图像的第二特征是否符合测肤要求。
在本申请实施例中,判断局部图像的第二特征是否符合测肤要求指的是,判断局部图像的亮度、遮挡、清晰度等是否符合测肤要求。
示例性的,智能镜子可以将人脸图像分割为额头局部图像、左脸局部图像和右脸局部图像这三个局部图像,分别判断各局部图像的亮度、遮挡、清晰度是否符合测肤要求。
可以理解的是,由于是在人脸图像整体的第二特征不符合测肤要求的情况下,智能镜子才判断人脸图像的局部图像的第二特征是否符合测肤要求,因此,该人脸图像中至少一个局部图像的第二特征不符合测肤要求。
505、智能镜子将该初始图像作为第二测肤图像。
在初始图像中的某一个局部图像的第二特征符合测肤要求的情况下,智能镜子可以将该初始图像作为第二测肤图像,第二测肤图像中的该局部图像可以用于测肤分析。
在本申请实施例中,第二测肤图像可以对应第一标签(也可以称为局部标签),其中,第一标签用于指示第二测肤图像中用于测肤分析的局部图像。
例如,第二测肤图像对应的第一标签是额头标签,则表示第二测肤图像中人脸图像的额头局部图像符合测肤要求,该额头局部图像可以用于后续的测肤分析。
在一些实施例中,智能镜子得到第一局部图像符合测肤要求的第二测肤图像时,可以不再判断后续初始图像中人脸图像的第一局部图像是否符合测肤要求,仅需要判断后续初始图像中人脸图像的除第一局部图像之外的其他局部图像是否符合测肤要求。
在另一些实施例中,智能镜子得到第一局部图像符合测肤要求的第二测肤图像时,可以将该第二测肤图像以及对应的第一标签缓存,在得到新的符合测肤要求的第一局部图像时,判断是否存在已有的第一局部图像,若存在,则智能镜子可以根据缓存的第二测肤图像中的第一局部图像的第二特征的综合得分,以及新的第一局部图像的第二特征的综合得分,选择得分较高的第一局部图像所在的第二测肤图像作为最终用于测肤分析的第二测肤图像。这样,由于用于测肤分析的第二测肤图像中的第一局部图 像质量较高,因此能够提高后续测肤分析的准确性。
在一些实施例中,智能镜子中的测肤算法可以仅基于各局部对应的最多一张局部图像来进行测肤分析。例如,智能镜子用于测肤分析的第二测肤图像仅包括额头局部图像的质量最高的一张初始图像、左脸局部图像的质量最高的一张初始图像和右脸局部图像的质量最高的一张初始图像。
在另一些实施例中,智能镜子可以将存在一个或多个局部图像符合测肤要求的所有初始图像均缓存起来,作为第二测肤图像,并且智能镜子中的测肤算法基于这些第二测肤图像和各自对应的第一标签进行测肤分析。
在一些实施例中,上述步骤501-步骤505可以全部在后台完成,不需要向用户呈现处理过程。
在另一些实施例中,智能镜子在执行上述步骤501-步骤505时,可以向用户提示当前执行的内容。例如,智能镜子在筛选初始图像的过程中,可以通过屏幕显示、语音提示或信号灯等方式,向用户呈现当前执行的内容。例如,“初始图像1整体合格”、“初始图像2的额头局部图像合格”等。
通过上述步骤501-步骤505,智能镜子可以自动从获取到的N张初始图像中筛选出测肤图像,进而基于测肤图像进行后续测肤分析。由于智能镜子自动对初始图像进行筛选,不需要用户发起图像筛选,因此,可以认为,智能镜子在自动获取到初始图像之后,自动继续在用户无感知的情况下从中筛选出测肤图像,以用于后续的测肤分析,从而减少测肤过程中用户与智能镜子之间的交互流程,提升用户体验。
智能镜子筛选出测肤图像后,可以自动基于测肤图像进行测肤分析,获得测肤结果,即执行下面步骤303的内容。
303、智能镜子基于测肤图像进行测肤分析,显示测肤结果。
智能镜子从初始图像中筛选出测肤图像之后,可以自动基于测肤图像进行测肤分析,并显示测肤结果。
在一些实施例中,若存在第一测肤图像,则智能镜子基于第一测肤图像进行测肤分析,得到测肤结果。
在一些实施例中,若不存在第一测肤图像,存在第二测肤图像,则智能镜子基于第二测肤图像进行测肤分析,得到测肤结果。
在另一些实施例中,智能镜子基于第一测肤图像和第二测肤图像分别进行测肤分析,将结果较好的测肤结果作为最终的测肤结果。
在本申请实施例中,如图7所示,智能镜子基于测肤图像进行测肤分析,显示测肤结果,包括以下步骤:
701、智能镜子确定是否存在测肤图像。
在步骤302之后,智能镜子可以逐张读取测肤图像。在该过程中,智能镜子可以确定是否存在测肤图像。
若智能镜子确定存在测肤图像,则智能镜子执行以下步骤702。
若智能镜子确定不存在测肤图像,则说明智能镜子从初始图像中未筛选到合适的测肤图像。例如,假设用户所处环境的光线较差,使得拍摄的初始图像中人脸图像的整体或局部的亮度均不符合要求,则本次获取的初始图像均不能作为测肤图像。
在智能镜子确定不存在测肤图像的情况下,智能镜子可以基于历史测肤结果获得预测的测肤结果,并显示该预测的测肤结果。
在一些实施例中,智能镜子可以根据线性回归或历史拟合等方法,基于历史测肤结果获得预测的测肤结果。本申请实施例对智能镜子基于历史测肤结果获取预测的测肤结果的方法不作限定。
在一些实施例中,历史测肤结果可以存储在内部存储器或云端,本申请实施例对此不作限定。
702、智能镜子确定该测肤图像是否为第一测肤图像。
在本申请实施例中,智能镜子可以根据测肤图像是否存在第二标签,来确定该测肤图像是否为第一测肤图像。
若该测肤图像存在第二标签(即整体标签),则智能镜子确定该测肤图像为第一测肤图像。
在另一些实施例中,若该测肤图像不存在第二标签,也不存在其他标签(例如,第一标签),则智能镜子也可以确定该测肤图像为第一测肤图像。
703、若智能镜子确定该测肤图像为第一测肤图像,则基于该第一测肤图像获取第一测肤图像的人脸图像的测肤结果。
在本申请实施例中,若智能镜子确定该测肤图像为第一测肤图像,则说明可以基于该第一测肤图像中人脸图像整体来获取测肤结果。
在一些实施例中,智能镜子可以基于该第一测肤图像中的人脸图像,提取各检测项的感兴趣区域(region of interest,ROI),并将各检测项的ROI输入神经网络模型(例如分类网络)进行皮肤检测。
需要说明的是,本申请实施例对皮肤检测采用的神经网络模型不作限定。
示例性的,图8示出了各检测项的ROI。如图8中的(a)所示,毛孔、色斑检测ROI一般为脸颊;如图8中的(b)所示,细纹检测ROI一般为额头和眼下;如图8中的(c)所示,黑头检测ROI一般为鼻子;如图8中的(d)所示,眼袋、黑眼圈检测ROI一般为眼睛周围;如图8中的(e)所示,痘痘、光泽度检测ROI一般为额头、鼻子、脸颊和下巴。
704、若智能镜子确定测肤图像不是第一测肤图像,是第二测肤图像,则基于第二测肤图像对应的第一标签,获取对应的局部图像的局部测肤结果,进而获取测肤结果。
由于测肤图像包括第一测肤图像和第二测肤图像,因此若存在测肤图像,并且测肤图像不是第一测肤图像时,则可以说明该测肤图像为第二测肤图像。
在本申请实施例中,基于第二测肤图像对应的第一标签,智能镜子可以确定用于测肤分析的局部图像。例如,第一标签是额头标签,则说明该第二测肤图像中人脸图像的额头局部图像可以用于测肤分析。
在一些实施例中,智能镜子可以基于第一标签,提取对应的局部图像中的ROI。例如,若第一标签为额头标签,则智能镜子可以提取细纹、痘痘、光泽度的额头ROI。进而将局部图像中的ROI输入神经网络模型(例如,分类网络)进行皮肤检测,获得局部测肤结果。之后,智能镜子可以在读取完所有测肤图像时,将所有的局部测肤结果进行归一化,从而获取最终的测肤结果。
在另一些实施例中,智能镜子可以基于第一标签,提取对应的局部图像中的ROI,在读取完所有测肤图像时,将所提取的全部ROI进行归一化,输入神经网络模型(例如,分类网络)进行皮肤检测,从而获取最终的测肤结果。
在本申请实施例中,智能镜子基于第二测肤图像获取最终的测肤结果之前还可以包括:智能镜子确定是否集齐了人脸图像的各局部图像。
若智能镜子集齐了人脸图像的各局部图像,例如,智能镜子集齐了额头局部图像、左脸局部图像和右脸局部图像,则说明智能镜子可以通过各局部图像得到完整的人脸图像的测肤结果。也就是说,在集齐人脸图像的各局部图像的情况下,最终的测肤结果可以认为是人脸图像整体的测肤结果。
在一些实施例中,若智能镜子未集齐人脸图像的各局部图像,则基于局部测肤结果和历史测肤结果,获得最终的测肤结果。
若智能镜子未集齐人脸图像的各局部图像,则本次未采集到合适图像的局部区域的测肤结果可以由历史测肤结果预测得到。例如,智能镜子仅采集到左脸局部图像的局部测肤结果和右脸局部图像的局部测肤结果,未采集到额头局部图像的局部测肤结果,则智能镜子可以基于历史测肤结果得到预测的额头局部图像的测肤结果,进而结合左脸局部图像的局部测肤结果和右脸局部图像的局部测肤结果,得到最终的测肤结果。
在一些实施例中,智能镜子也可以基于历史测肤结果对最终的测肤结果进行修正。
705、智能镜子显示测肤结果。
在一些实施例中,智能镜子获得测肤结果之后,可以显示测肤结果,此时测肤结果包括用户当前的皮肤状态。例如,毛孔、痘痘、黑头或细纹等的情况分析。
在另一些实施例中,智能镜子还可以将本次的测肤结果与历史测肤结果进行对比分析,最终向用户显示测肤结果,此时测肤结果还包括对比分析结果以及护肤建议。
示例性的,图9示出了一种显示测肤结果的界面。图9中的(a)和(b)分别向用户展示了当前用户皮肤光泽度和黑眼圈的状态,以及本次测肤结果中光泽度与历史测肤结果中光泽度的比较分析结果。图9中的(c)和(d)向用户提供了基于当前皮肤状态给出的护肤建议。
在一些实施例中,智能镜子在获得测肤结果后,可以通过显示屏向用户呈现测肤结果。
在另一些实施例中,智能镜子在获得测肤结果后,响应于用户触发显示测肤结果的操作,向用户呈现测肤结果。例如,用户在日常护肤结果后,可以点击镜子特定区域,出发显示测肤结果。其中,触发显示测肤结果的操作可以是触摸操作、点击操作、语音指示或手势指示等,本申请实施例对此不作限定。
在又一些实施例中,智能镜子还可以不通过显示屏向用户呈现测肤结果,例如,智能镜子还可以通过语音向用户呈现测肤结果;或者,智能镜子还可以直接将测肤结果发送至用户的手机等个人电子设备,供用户查看测肤结果。
在一些实施例中,上述步骤701-步骤705可以全部在后台完成,不需要向用户呈现处理过程。
在另一些实施例中,智能镜子在执行上述步骤701-步骤705时,可以向用户提示 当前执行的内容。例如,智能镜子在测肤分析的过程中,可以通过屏幕显示、语音提示或信号灯等方式,向用户呈现当前执行的内容。例如,“正在分析毛孔”、“正在分析色斑”等。
通过上述步骤701-步骤705,智能镜子可以基于测肤图像进行测肤分析,并显示测肤结果。由于智能镜子自动进行测肤分析,从而能够减少测肤过程中用户与智能镜子之间的交互流程,提升用户体验。
图10示出了本申请实施例中的一种场景图。如图10所示,用户在预设时间段,例如早上8:00-10:00期间,进行日常护肤活动。用户面对智能镜子进行护肤时,智能镜子可以主动获取用户的面部图像,自动筛选获得测肤图像,并基于测肤图像进行测肤分析。在测肤分析结束时,或者在响应于用户查看测肤结果的操作时,智能镜子向用户呈现测肤结果。
在如图10所示的场景中,用户只需要按照自己的习惯进行日常护肤,无需主动发起测肤或者配合测肤调整自己的姿态等,智能镜子可以在用户无感知的情况下,自动完成整个测肤过程。
根据上述内容可知,在本申请实施例提供的皮肤检测方法中,用户无需主动发起测肤,在测肤过程中也无需严格配合,从而能够减少测肤过程中用户与智能镜子之间的交互,使得用户更方便地获得测肤结果,提升用户体验。
同时,由于智能镜子可以基于用户日常使用镜子的习惯(例如,日常护肤时间)周期性、密集地获取用户的面部图像并进行测肤分析,因此,基于本方案的皮肤检测方法,可以获取更密集的测肤结果,有效跟踪监测用户的皮肤状态,更好地向用户提供护肤建议。
此外,智能镜子自动获取多张初始图像后,可以按照本方案提供的筛选方法和测肤分析方法来对初始图像进行筛选,并基于筛选后的测肤图像和/或历史测肤结果获得测肤结果,从而能够保证测肤结果的准确性和稳定性。
可以理解的是,为了实现上述功能,电子设备包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图11示出了上述实施例中涉及的电子设备1100的一种可能的组成示意图,如图11所示,该电子设备1100可以包括:图像获取单元1101、图像筛选单元1102、测肤分析单元1103和结果输出单元1104。
其中,图像获取单元1101可以用于支持电子设备1100执行上述步骤301、步骤401至步骤405等,和/或用于本文所描述的技术的其他过程。
图像筛选单元1102可以用于支持电子设备1100执行上述步骤302,步骤501至步骤505等,和/或用于本文所描述的技术的其他过程。
测肤分析单元1103可以用于支持电子设备1100执行上述步骤303,步骤701至步骤704等,和/或用于本文所描述的技术的其他过程。
结果输出单元1104可以用于支持电子设备1100执行上述步骤705等,和/或用于本文所描述的技术的其他过程。
在一些实施例中,图像获取单元1101可以对应图像获取模块(或者无感测肤启停控制与图像获取模块);图像筛选单元1102可以对应图像筛选模块;测肤分析单元1103可以对应单/多图测肤算法分析模块;结果输出单元1104可以对应结果输出模块。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例提供的电子设备1100,用于执行上述皮肤检测方法,因此可以达到与上述实现方法相同的效果。
在采用集成的单元的情况下,电子设备1100可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对电子设备1100的动作进行控制管理,例如,可以用于支持电子设备1100执行上述图像获取单元1101、图像筛选单元1102、测肤分析单元1103和结果输出单元1104执行的步骤。存储模块可以用于支持电子设备1100存储程序代码和数据等。通信模块,可以用于支持电子设备1100与其他设备的通信,例如与无线接入设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
本申请实施例还提供一种电子设备,包括一个或多个处理器以及一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子设备执行上述相关方法步骤实现上述实施例中的皮肤检测方法。
本申请的实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的皮肤检测方法。
本申请的实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中电子设备执行的皮肤检测方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中电子设备执行的皮肤检测方法。
其中,本实施例提供的电子设备、计算机可读存储介质、计算机程序产品或芯片 均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (13)

  1. 一种皮肤检测方法,其特征在于,应用于电子设备,所述方法包括:
    自动获取用户的N张初始图像,N为大于1的整数,所述初始图像包括人脸图像;
    从所述N张初始图像中筛选出测肤图像,所述测肤图像是其中人脸图像的特征满足测肤条件的初始图像,所述人脸图像的特征包括:俯仰姿态,角度,亮度,遮挡,在所述初始图像中的占比,表情,清晰度中的一个或多个;
    基于所述测肤图像,显示测肤结果。
  2. 根据权利要求1所述的方法,其特征在于,所述测肤图像包括第一测肤图像和第二测肤图像;
    所述从所述N张初始图像中筛选出测肤图像,包括:
    针对所述N张初始图像中的每张初始图像:
    若所述初始图像中人脸图像的第一特征符合测肤要求,且所述初始图像中人脸图像的第二特征符合测肤要求,则将所述初始图像作为所述第一测肤图像;
    若所述初始图像中人脸图像的第一特征符合测肤要求,且所述初始图像中人脸图像的第一局部图像的第二特征符合测肤要求,所述初始图像中人脸图像的第二局部图像的第二特征不符合测肤要求,则将所述初始图像作为所述第二测肤图像;所述第一局部图像和所述第二局部图像为所述初始图像中不同的局部图像;
    其中,所述第一特征包括:俯仰姿态,角度,表情,在所述初始图像中的占比中的一个或多个;
    所述第二特征包括:亮度,遮挡,清晰度中的一个或多个。
  3. 根据权利要求2所述的方法,其特征在于,若所述测肤图像包括所述第二测肤图像,第二测肤图像对应第一标签,所述第一标签用于指示所述第二测肤图像中所述第一局部图像用于测肤分析;
    所述基于所述测肤图像,显示测肤结果,包括:
    基于所述第二测肤图像对应的第一标签,获取所述第一局部图像的局部测肤结果;
    基于所述局部测肤结果,显示所述测肤结果。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述局部测肤结果,显示所述测肤结果,包括:
    若所述局部测肤结果包括所述人脸图像中全部局部图像的局部测肤结果,则基于所述局部测肤结果,显示所述测肤结果;
    若所述局部测肤结果包括所述人脸图像中部分局部图像的局部测肤结果,则基于所述局部测肤结果和历史测肤结果,显示所述测肤结果。
  5. 根据权利要求2-4中任一项所述的方法,其特征在于,若所述测肤图像包括所述第一测肤图像,所述第一测肤图像对应第二标签,所述第二标签用于指示所述第一测肤图像中人脸图像整体用于测肤分析;
    所述基于所述测肤图像,显示测肤结果,包括:
    基于所述第一测肤图像对应的第二标签,获取所述第一测肤图像的人脸图像的测肤结果;
    显示所述测肤结果。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述方法还包括:
    若从所述N张初始图像中未筛选出所述测肤图像,则基于历史测肤结果,显示所述测肤结果。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述自动获取用户的N张初始图像,包括:
    在预设时间段内开启摄像头并采集预览图像;
    在所述预览图像中检测到所述人脸图像时,采用所述摄像头拍摄获取所述N张初始图像。
  8. 根据权利要求7所述的方法,其特征在于,所述在所述预览图像中检测到所述人脸图像时,拍摄获取所述N张初始图像,包括:
    在所述预览图像中检测到所述人脸图像时,根据所述人脸图像中眼睛的位置,调整所述摄像头的对焦点,采用调整对焦点后的所述摄像头拍摄获取所述N张初始图像。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述电子设备为智能镜子。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,在皮肤检测过程中,所述方法还包括:
    输出提示信息,所述提示信息用于向用户提示测肤过程。
  11. 一种电子设备,其特征在于,包括:
    摄像头;
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-10中任一项所述的皮肤检测方法。
  12. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-10中任一项所述的皮肤检测方法。
  13. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-10中任一项所述的皮肤检测方法。
PCT/CN2022/078741 2021-03-02 2022-03-02 一种皮肤检测方法和电子设备 WO2022184084A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110231643.7 2021-03-02
CN202110231643.7A CN114983338A (zh) 2021-03-02 2021-03-02 一种皮肤检测方法和电子设备

Publications (1)

Publication Number Publication Date
WO2022184084A1 true WO2022184084A1 (zh) 2022-09-09

Family

ID=83018663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078741 WO2022184084A1 (zh) 2021-03-02 2022-03-02 一种皮肤检测方法和电子设备

Country Status (2)

Country Link
CN (1) CN114983338A (zh)
WO (1) WO2022184084A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117831104A (zh) * 2023-12-30 2024-04-05 佛山瀚镜智能科技有限公司 一种智能镜柜及其控制方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115829911A (zh) * 2022-07-22 2023-03-21 宁德时代新能源科技股份有限公司 检测系统的成像一致性的方法、装置和计算机存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160162728A1 (en) * 2013-07-31 2016-06-09 Panasonic Intellectual Property Corporation Of America Skin analysis method, skin analysis device, and method for controlling skin analysis device
CN108363964A (zh) * 2018-01-29 2018-08-03 杭州美界科技有限公司 一种预先处理的皮肤皱纹评估方法及系统
CN109948476A (zh) * 2019-03-06 2019-06-28 南京七奇智能科技有限公司 一种基于计算机视觉的人脸皮肤检测系统及其实现方法
US20200305784A1 (en) * 2016-06-27 2020-10-01 Koninklijke Philips N.V Device and method for skin gloss detection
CN111860169A (zh) * 2020-06-18 2020-10-30 北京旷视科技有限公司 皮肤分析方法、装置、存储介质及电子设备
CN112000221A (zh) * 2020-07-14 2020-11-27 华为技术有限公司 自动检测肌肤的方法、自动指导护肤化妆的方法及终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160162728A1 (en) * 2013-07-31 2016-06-09 Panasonic Intellectual Property Corporation Of America Skin analysis method, skin analysis device, and method for controlling skin analysis device
US20200305784A1 (en) * 2016-06-27 2020-10-01 Koninklijke Philips N.V Device and method for skin gloss detection
CN108363964A (zh) * 2018-01-29 2018-08-03 杭州美界科技有限公司 一种预先处理的皮肤皱纹评估方法及系统
CN109948476A (zh) * 2019-03-06 2019-06-28 南京七奇智能科技有限公司 一种基于计算机视觉的人脸皮肤检测系统及其实现方法
CN111860169A (zh) * 2020-06-18 2020-10-30 北京旷视科技有限公司 皮肤分析方法、装置、存储介质及电子设备
CN112000221A (zh) * 2020-07-14 2020-11-27 华为技术有限公司 自动检测肌肤的方法、自动指导护肤化妆的方法及终端

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117831104A (zh) * 2023-12-30 2024-04-05 佛山瀚镜智能科技有限公司 一种智能镜柜及其控制方法
CN117831104B (zh) * 2023-12-30 2024-05-24 佛山瀚镜智能科技有限公司 一种智能镜柜及其控制方法

Also Published As

Publication number Publication date
CN114983338A (zh) 2022-09-02

Similar Documents

Publication Publication Date Title
US11497406B2 (en) Apparatus and method for enhancing accuracy of a contactless body temperature measurement
US10226184B2 (en) Apparatus and method for enhancing accuracy of a contactless body temperature measurement
WO2022184084A1 (zh) 一种皮肤检测方法和电子设备
KR102420100B1 (ko) 건강 상태 정보를 제공하는 전자 장치, 그 제어 방법, 및 컴퓨터 판독가능 저장매체
CN109793498B (zh) 一种皮肤检测方法及电子设备
WO2021238995A1 (zh) 用于肌肤检测的电子设备的交互方法及电子设备
CN103927250B (zh) 一种终端设备用户姿态检测方法
US20220180485A1 (en) Image Processing Method and Electronic Device
WO2021078001A1 (zh) 一种图像增强方法及装置
EP3816932B1 (en) Skin detection method and electronic device
JP2010004118A (ja) デジタルフォトフレーム、情報処理システム、制御方法、プログラム及び情報記憶媒体
CN112799508B (zh) 显示方法与装置、电子设备及存储介质
KR102548317B1 (ko) 색소 검출 방법 및 전자 장치
WO2022001806A1 (zh) 图像变换方法和装置
US11810277B2 (en) Image acquisition method, apparatus, and terminal
WO2020015144A1 (zh) 一种拍照方法及电子设备
WO2020015149A1 (zh) 一种皱纹检测方法及电子设备
US11521575B2 (en) Electronic device, electronic device control method, and medium
CN113572956A (zh) 一种对焦的方法及相关设备
US10009545B2 (en) Image processing apparatus and method of operating the same
WO2022052786A1 (zh) 皮肤敏感度的显示方法、装置、电子设备及可读存储介质
CN109117819B (zh) 目标物识别方法、装置、存储介质及穿戴式设备
WO2023011348A1 (zh) 检测方法及电子设备
WO2023011302A1 (zh) 拍摄方法及相关装置
WO2022017270A1 (zh) 外表分析的方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762547

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22762547

Country of ref document: EP

Kind code of ref document: A1