WO2022052786A1 - Skin sensitivity display method, apparatus, electronic device and readable storage medium - Google Patents

Skin sensitivity display method, apparatus, electronic device and readable storage medium

Info

Publication number
WO2022052786A1
Authority
WO
WIPO (PCT)
Prior art keywords
skin
original image
sensitivity
image
electronic device
Prior art date
Application number
PCT/CN2021/113689
Other languages
English (en)
French (fr)
Inventor
黄伟
郜文美
董辰
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022052786A1

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/56 Extraction of image or video features relating to colour
                    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
                • G06V 20/00 Scenes; Scene-specific elements
                    • G06V 20/40 Scenes; Scene-specific elements in video content
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • the present application belongs to the technical field of data collection, and in particular relates to a display method, device, electronic device and readable storage medium for skin sensitivity.
  • the skin is the organ with the largest area in the human body, and the state of the skin is particularly important for the human body.
  • the skin state can be determined by a number of indicators, such as skin sensitivity.
  • Skin sensitivity mainly refers to the severity of the skin's response to external stimuli. The more sensitive the skin, the more responsive it is to external stimuli. How to effectively and quickly let users know their skin sensitivity has become an urgent problem to be solved.
  • Embodiments of the present application provide a skin sensitivity display method, device, electronic device, and readable storage medium, which can improve measurement accuracy and reduce measurement costs.
  • an embodiment of the present application provides a method for displaying skin sensitivity, which is applied to an electronic device, including:
  • the pixel value of each pixel in the skin area image in the original image is adjusted according to the skin sensitivity, and the original image after adjusting the pixel value is used as a schematic diagram of the sensitivity distribution, and the schematic diagram of the sensitivity distribution is displayed.
  • the distance value between the electronic device and the target object can be determined according to the distance measurement reference parameter.
  • the electronic device only needs to include a camera module to achieve this.
  • the electronic device does not rely on a depth image, nor does it need to measure the distance from the difference in shooting angle between binocular cameras, so the electronic device does not need to be configured with an optical-pulse-based transceiver, a binocular camera, or similar modules, which greatly reduces the cost of the electronic device; at the same time, since the distance measurement is performed by determining one or more distance measurement reference parameters rather than obtaining the distance value directly, the accuracy of the distance measurement can be improved.
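The passage above says only that a distance value is determined from one or more distance-measurement reference parameters captured by a single camera module; the excerpt does not reproduce the actual algorithm. As a hedged sketch under that assumption, a single-camera estimate can use the pinhole relation between focal length, the real width of a reference feature, and its apparent width in pixels (the helper name and the parameter values below are illustrative, not from the patent):

```python
# Illustrative sketch, NOT the patent's algorithm: estimate camera-to-subject
# distance from a single camera using the pinhole relation
#   distance = focal_length_px * real_width / width_in_pixels

def estimate_distance(focal_length_px: float, real_width_cm: float,
                      width_in_pixels: float) -> float:
    """Return an estimated camera-to-subject distance in centimetres."""
    if width_in_pixels <= 0:
        raise ValueError("feature width in pixels must be positive")
    return focal_length_px * real_width_cm / width_in_pixels
```

For example, a face assumed to be 14 cm wide that spans 280 px under a 1000 px focal length would be estimated at 50 cm from the camera.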
  • the determining the skin sensitivity corresponding to each pixel in the skin area image includes:
  • the first pixel value and the second pixel value are imported into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel point.
  • the sensitivity conversion algorithm is specifically:
  • f is the skin sensitivity
  • r is the first pixel value
  • g is the second pixel value
  • alpha and beta are preset adjustment coefficients
  • exp(x) is an exponential function.
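The excerpt names the ingredients of the sensitivity conversion (first pixel value r, second pixel value g, preset coefficients alpha and beta, and an exponential function) but does not reproduce the formula itself. The sketch below is an assumed placeholder built from those same ingredients; the exact expression, the coefficient values, and the monotonicity assumption (sensitivity growing with r and shrinking with g) are all hypothetical:

```python
import math

# Hypothetical stand-in for the patent's sensitivity conversion algorithm.
# The real formula is not given in this excerpt; this sketch merely combines
# the named ingredients (r, g, alpha, beta, exp) into a value in (0, 1).

def skin_sensitivity(r: int, g: int, alpha: float = 0.02,
                     beta: float = 0.02) -> float:
    """Map a pixel's first/second channel values to a sensitivity in (0, 1)."""
    x = alpha * r - beta * g          # assumed combination of the channels
    return 1.0 / (1.0 + math.exp(-x))  # squash through a logistic of exp(x)
```

Under these assumed coefficients, a pixel that is much redder than it is green maps to a sensitivity near 1, and the reverse maps near 0.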
  • the acquiring an original image containing the photographed object includes:
  • the displaying a schematic diagram of the sensitivity distribution includes:
  • the extracting each video image frame in the video data as the original image includes:
  • image analysis is performed on the original image in sequence to determine whether the original image contains a skin area
  • the original image includes a skin area, performing an operation of extracting the image of the skin area of the photographed subject from the original image;
  • the original image corresponding to the next frame number is analyzed, and the operation of judging whether the original image contains a skin area is performed.
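The frame-selection loop described above (analyze frames in frame-number order, extract the skin region from a frame that contains one, otherwise move to the next frame) can be sketched as follows. The helpers `contains_skin_area` and `extract_skin_area` are hypothetical stubs for the patent's image-analysis steps:

```python
# Minimal sketch of the frame-iteration logic; the two helpers are stubs
# standing in for the patent's actual skin-area analysis and extraction.

def contains_skin_area(frame) -> bool:
    # Stub: a frame is modelled as a dict carrying a precomputed flag.
    return frame.get("has_skin", False)

def extract_skin_area(frame):
    # Stub: the real step would segment the skin region out of the image.
    return frame.get("skin_region")

def first_skin_region(frames):
    """Return the skin region of the first frame containing skin, else None."""
    for frame in frames:  # frames are visited in ascending frame-number order
        if contains_skin_area(frame):
            return extract_skin_area(frame)
        # otherwise fall through to the frame with the next frame number
    return None
```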
  • the adjusting the pixel value of each of the pixel points in the skin area in the original image according to the skin sensitivity includes:
  • the acquiring an original image containing the photographed object includes:
  • an embodiment of the present application provides a display device for skin sensitivity, including:
  • an original image acquisition unit configured to acquire an original image containing a subject, and extract a skin area image of the subject from the original image
  • a skin sensitivity determination unit configured to determine the skin sensitivity corresponding to each pixel in the skin area image
  • a sensitivity distribution schematic display unit configured to adjust the pixel value of each of the pixel points of the skin area image within the original image according to the skin sensitivity, take the original image with the adjusted pixel values as the sensitivity distribution schematic, and display the sensitivity distribution schematic.
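The display unit above recolours each skin pixel of the original image according to its sensitivity, so the adjusted image itself becomes the distribution schematic. A sketch of that idea follows; images are plain nested lists of (r, g, b) tuples here, and the red-tinting rule is an illustrative assumption rather than the patent's exact mapping:

```python
# Illustrative sketch: blend skin pixels toward red in proportion to their
# sensitivity, leaving non-skin pixels untouched. The colour rule is an
# assumption; the patent does not specify the mapping in this excerpt.

def render_sensitivity_map(image, skin_mask, sensitivity):
    """Return a copy of image with skin pixels tinted by sensitivity in [0, 1]."""
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            if skin_mask[y][x]:
                s = sensitivity[y][x]
                r = round(r + s * (255 - r))  # push red up with sensitivity
                g = round(g * (1 - s))        # damp green and blue
                b = round(b * (1 - s))
            new_row.append((r, g, b))
        out.append(new_row)
    return out
```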
  • embodiments of the present application provide an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method for displaying skin sensitivity according to any one of the above-mentioned first aspects is implemented.
  • an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method for displaying skin sensitivity according to any one of the above-mentioned first aspect is implemented.
  • an embodiment of the present application provides a computer program product that, when the computer program product runs on an electronic device, enables the electronic device to execute the method for displaying skin sensitivity according to any one of the above-mentioned first aspect.
  • an embodiment of the present application provides a chip system, including a processor, where the processor is coupled to a memory, and the processor executes a computer program stored in the memory, so as to implement the method for displaying skin sensitivity according to any one of the first aspect.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 3 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an output screen of a smart cosmetic mirror provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of an implementation of a method for displaying skin sensitivity provided by an embodiment of the present application
  • FIG. 6 is a schematic diagram of a scene in which an electronic device obtains an original image provided by an embodiment of the present application
  • Fig. 7 is a selection interface diagram of an original image provided by an embodiment of the present application.
  • FIG. 8 is a specific implementation flowchart of S5012 provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of extracting a skin area image provided by an embodiment of the present application.
  • FIG. 10 is a specific implementation flowchart of S502 provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a sensitivity distribution provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a sensitivity distribution provided by another embodiment of the present application.
  • FIG. 13 is a specific implementation flowchart of adjusting the pixel value of each of the pixel points in the skin area in the original image according to the skin sensitivity in S503 provided by an embodiment of the present application;
  • FIG. 14 is a structural block diagram of a display device for skin sensitivity provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • the term “if” may be contextually interpreted as “when”, “once”, “in response to determining”, or “in response to detecting”.
  • the phrases “if it is determined” or “if the [described condition or event] is detected” may be interpreted, depending on the context, to mean “once it is determined”, “in response to determining”, “once the [described condition or event] is detected”, or “in response to detecting the [described condition or event]”.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • the terms “comprising”, “including”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.
  • the skin sensitivity display method provided by the embodiments of the present application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and smart beauty mirrors; the embodiments of the present application do not impose any restrictions on the specific type of the electronic device.
  • the electronic device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, and/or other devices for communicating on wireless systems and next-generation communication systems, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved Public Land Mobile Network (PLMN), etc.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel form.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • Display 194 may include a touch panel as well as other input devices.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP for processing and conversion into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
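As an illustration of the format conversion mentioned above, the standard BT.601 full-range YUV-to-RGB mapping can be written as follows (choosing this particular variant is an assumption; the patent does not say which conversion the DSP applies):

```python
# BT.601 full-range YUV -> RGB conversion, shown as an example of the
# standard image-format conversions a DSP performs. Inputs are 0-255 samples
# with U and V centred at 128.

def yuv_to_rgb(y: float, u: float, v: float):
    """Convert one full-range BT.601 YUV sample to an (R, G, B) tuple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep channels in 0-255
    return clamp(r), clamp(g), clamp(b)
```

A neutral sample (U = V = 128) maps to a grey pixel whose three channels equal Y, which is a quick sanity check on the coefficients.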
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking close to it.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D can be the USB interface 130, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
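The pressure-dependent dispatch in the short-message example above can be sketched as follows. The threshold value and return strings are hypothetical, introduced only for illustration; the device's real threshold and instruction names are not given in the source.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized value

def sms_icon_action(touch_intensity: float) -> str:
    """Dispatch a touch on the short-message application icon by
    pressure, following the example above (threshold is an
    assumption, not the device's actual calibration)."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"

print(sms_icon_action(0.2))  # view_short_message
print(sms_icon_action(0.8))  # create_new_short_message
```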
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • the gyro sensor 180B can determine the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes).
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
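The "distance that the lens module needs to compensate according to the angle" can be illustrated with a small-angle optical model: the lateral image shift is roughly the focal length times the tangent of the shake angle. This geometry is a hedged sketch; real OIS controllers use calibrated, closed-loop variants of it.

```python
import math

def ois_compensation_mm(shake_angle_deg: float, focal_length_mm: float) -> float:
    """Distance the lens module must shift to cancel a shake of the
    given angle (simple small-angle model; real anti-shake systems
    run calibrated closed-loop control on top of this geometry)."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# A 0.5-degree shake with a 4 mm lens needs about 0.035 mm of travel.
print(round(ois_compensation_mm(0.5, 4.0), 3))
```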
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
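The altitude calculation from the measured air pressure can be sketched with the international barometric formula. This is a standard approximation, not the device's actual algorithm; positioning stacks typically fuse the result with GNSS data.

```python
def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude from air pressure using the international
    barometric formula (a standard approximation; the device's real
    algorithm is not specified in the source)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(round(altitude_m(1013.25)))  # 0
```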
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • when the electronic device 100 is in a flip holster, it can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and further set characteristics such as automatic unlocking based on the detected open or closed state of the holster or flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • when the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The sensor can also be used to identify the posture of the electronic device, and is applied in scenarios such as landscape/portrait switching and pedometers.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
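The proximity decision described above reduces to a threshold comparison on the photodiode reading, and the screen-off-during-call behavior builds on it. The threshold value and function names below are illustrative assumptions, not the device's actual parameters.

```python
def object_nearby(reflected_light: float, threshold: float) -> bool:
    """Proximity decision from the photodiode reading: sufficient
    reflected infrared light means an object is near (the threshold
    is a device-calibrated value, assumed here)."""
    return reflected_light >= threshold

def screen_should_turn_off(in_call: bool, reflected_light: float,
                           threshold: float = 50.0) -> bool:
    """Turn the screen off to save power when the user holds the
    device close to the ear during a call, per the description above."""
    return in_call and object_nearby(reflected_light, threshold)

print(screen_should_turn_off(True, 80.0))   # True
print(screen_should_turn_off(True, 10.0))   # False
```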
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • Proximity light sensor 180G can also be used in holster mode, pocket mode automatically unlocks and locks the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by the low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
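The temperature-processing strategy described in the preceding paragraphs can be sketched as a small policy function. The three threshold values below are assumptions for illustration; the source does not give the device's actual thresholds.

```python
def thermal_policy(temp_c: float) -> list:
    """Sketch of the temperature-processing strategy above; threshold
    values are hypothetical, not the device's real calibration."""
    HIGH, LOW, VERY_LOW = 45.0, 0.0, -10.0  # assumed thresholds
    actions = []
    if temp_c > HIGH:
        actions.append("throttle_nearby_processor")   # thermal protection
    if temp_c < LOW:
        actions.append("heat_battery")                # avoid cold shutdown
    if temp_c < VERY_LOW:
        actions.append("boost_battery_output_voltage")
    return actions

print(thermal_policy(50.0))   # ['throttle_nearby_processor']
print(thermal_policy(-15.0))  # ['heat_battery', 'boost_battery_output_voltage']
```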
  • the touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M to realize the heart rate detection function.
  • the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • touch operations in different application scenarios (for example: time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be brought into contact with or separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out of the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • FIG. 2 shows a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • the electronic device may specifically be a smart mirror, which can, for example, process the acquired picture in real time through a preset algorithm and display it on the mirror surface.
  • the smart mirror includes at least a camera module 201 , a display module 202 and a data processing device 203 .
  • the camera module 201 can be used to obtain a picture containing the photographed object and display it through the display module 202. If the captured picture needs to be processed, it can be adjusted by the data processing device 203 and then output through the display module 202 .
  • the smart mirror may further include a fill light 204, and the electronic device can turn on the fill light 204 to perform a fill light operation when detecting that the current scene ambient light intensity is low, thereby improving the overall brightness of the shooting picture.
  • the fill light 204 is fitted with a polarizer; by controlling the angle of the polarizer, the fill light emits illumination light of a predetermined color. For example, the fill light 204 can illuminate the photographed object with red light and green light to obtain original images of the object under red and green illumination.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 3 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device. For example, the management of call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function library that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • when a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch click operation whose corresponding control is the camera application icon as an example, the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
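The kernel-to-framework flow above (raw input event carrying touch coordinates and a timestamp, then framework-layer hit testing against a control) can be sketched as follows. The field names and the single camera-icon hit box are illustrative assumptions, not the Android framework's actual data structures.

```python
import time

def make_raw_input_event(x: int, y: int) -> dict:
    """Kernel-layer raw input event with touch coordinates and a
    timestamp, as described above (field names are illustrative)."""
    return {"x": x, "y": y, "timestamp": time.time()}

# Framework-layer dispatch: find the control under the touch point.
# A single hypothetical camera-icon hit box stands in for the view tree.
CAMERA_ICON = {"left": 0, "top": 0, "right": 96, "bottom": 96,
               "on_click": "start_camera_app"}

def dispatch(event: dict) -> str:
    inside = (CAMERA_ICON["left"] <= event["x"] < CAMERA_ICON["right"]
              and CAMERA_ICON["top"] <= event["y"] < CAMERA_ICON["bottom"])
    return CAMERA_ICON["on_click"] if inside else "ignored"

print(dispatch(make_raw_input_event(40, 40)))  # start_camera_app
```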
  • the camera 193 captures still images or video.
  • Mode 1: skin detection through a skin detection device.
  • the above-mentioned skin detection equipment may specifically be a medical skin detection device. Due to factors such as purchase cost and equipment volume, such devices are often purchased by medical institutions; when users need skin detection, they generally need to go to the corresponding medical institution for a skin test. The output skin test report mainly gives the index values of the skin test, such as melanin content, acne density, roughness, tissue oxygen content, and skin type classification (such as oily skin or dry skin), from which users can learn their overall skin condition. However, detection by the above method is inconvenient.
  • the above-mentioned skin detection device cannot know the local skin state, and the display effect of the report is poor.
  • the data processing time of the above-mentioned detection device is relatively long, and the real-time performance is relatively low. The user needs to wait for a long time to obtain the detection report, and the report acquisition efficiency is low.
  • Mode 2: skin detection through a smart beauty mirror.
  • the existing smart beauty mirrors generally only have the functions of skin beautification and whitening, that is, beautify the face and portrait, but do not have the function of skin detection.
  • the freckles and dark circles on the face can be identified through the intelligent recognition algorithm, and the above flawed areas can be beautified, blurred, and whitened.
  • FIG. 4 shows a schematic diagram of an output screen of a smart beauty mirror provided by an embodiment of the present application. As shown in Figure 4, the smart beauty mirror can identify the flawed area of the face and mark it so that the user can understand the current skin condition.
  • the existing smart beauty mirror cannot determine the skin sensitivity, and there are few skin detection items, which cannot meet the skin detection needs of existing users.
  • the present application provides a method for displaying skin sensitivity, the details of which are as follows:
  • the execution subject of the method for displaying skin sensitivity is an electronic device, and the electronic device can be a smart phone, a tablet computer, a computer, a smart game console, or any device equipped with a camera module.
  • the electronic device can be a smart mirror.
  • Fig. 5 shows the implementation flow chart of the display method of skin sensitivity provided by an embodiment of the present application, which is described in detail as follows:
  • an original image containing a photographed subject is acquired, and an image of a skin area of the photographed subject is extracted from the original image.
  • the electronic device may have a built-in camera module that can acquire an original image including the object to be photographed; the object can move the part to be detected into the shooting area of the electronic device, so that the captured original image contains the skin area of the part to be detected.
  • FIG. 6 shows a schematic diagram of a scene in which an electronic device acquires an original image according to an embodiment of the present application.
  • the electronic device is a smart mirror.
  • the smart mirror can be placed on the desktop. When the user is in a sitting position, the smart mirror can capture the face of the user.
  • the shooting object is the user, and the skin area to be detected is the skin area of the face.
  • the electronic device may receive the original image fed back by the external camera module.
  • the electronic device can establish a communication connection with the external camera module through a wired interface or a wireless communication module, and receive the original image including the shooting object collected by the camera module.
  • the camera module may be a camera module specially used for skin detection, such as a camera configured with a variety of different light-emitting devices, which can emit violet light, red light, ultraviolet light, and infrared light through different light-emitting devices.
  • the camera module can acquire the original images of the shooting object under different light irradiation, and feed back the original images obtained by shooting to the electronic device through the communication link with the electronic device.
  • some skin detection items, such as skin sensitivity detection or acne density, may require original images of the subject collected under specific light irradiation.
  • the above camera module can be configured with multiple different light-emitting devices according to the needs of the detection items, so that when the corresponding skin detection item is performed, the associated light-emitting device is turned on and the subject is irradiated under the corresponding light to obtain the original image.
  • the electronic device may select a photographed image from a gallery as the above-mentioned original image.
  • FIG. 7 shows an interface diagram for selecting an original image provided by an embodiment of the present application. Referring to (a) in FIG. 7 , when detecting that the user clicks on the gallery control 701, the electronic device may enter the gallery display interface, as shown in (b) in FIG. 7 . Images that have been photographed or obtained from other devices can be displayed in the gallery display interface of the electronic device.
  • if it is detected that the user performs a preset selection operation on any existing image, the existing image is used as the target image, the skin sensitivity display process is performed on the target image, and the target image is taken as the above-mentioned original image, on which the operations of S501 to S503 are performed.
  • the above-mentioned preset selection operation can be a long-press operation. If it is detected that the user performs a long-press operation on any existing image, the control 702 in (c) in FIG. 7 will pop up. The control 702 is an editing menu that can be applied to the existing image, and the menu contains a "skin sensitivity detection" item, namely the control 703.
  • when the electronic device detects that the user clicks on the control 703, the display process of skin sensitivity is executed.
  • the electronic device can display a preview page of the existing image, as shown in (d) in FIG. 7 , and the preview page contains multiple editing controls, wherein It includes a control 704 for skin sensitivity detection.
  • when the electronic device detects that the user clicks on the control 704, it can perform the display process of the skin sensitive area and output a schematic diagram of the sensitivity distribution corresponding to the existing image previewed by the user.
  • the electronic device may be configured with multiple display modes, including but not limited to: a normal display mode and a sensitivity detection mode. In the normal display mode, the electronic device can directly display and output the original image, that is, no other processing of the original image is required; in the sensitivity detection mode, the user needs to check the sensitivity of their own skin, so the operations of S501 to S503 are performed to display the sensitivity distribution image corresponding to the original image.
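The normal/sensitivity mode switch described above can be sketched as a small dispatch function. The mode names and the stubbed-out pipeline function are illustrative assumptions, not the application's actual implementation.

```python
NORMAL_DISPLAY = "normal"
SENSITIVITY_DETECTION = "sensitivity"

def handle_frame(original_image, mode: str):
    """Mode dispatch: in normal display mode the original image is
    output directly; in sensitivity detection mode the S501-S503
    pipeline runs first (pipeline stubbed out below for illustration)."""
    if mode == NORMAL_DISPLAY:
        return original_image
    if mode == SENSITIVITY_DETECTION:
        return run_sensitivity_pipeline(original_image)  # S501-S503
    raise ValueError("unknown display mode: " + mode)

def run_sensitivity_pipeline(image):
    # Stub: extract skin area, compute sensitivity, render distribution.
    return {"source": image, "overlay": "sensitivity_distribution"}

print(handle_frame("frame_0", NORMAL_DISPLAY))  # frame_0
```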
  • the electronic device can acquire real-time video data including the shooting object; the real-time video data is specifically data captured by the camera module at a preset frame rate, with each frame corresponding to one image.
  • the electronic device can extract each video image frame from the video data and perform the skin sensitivity display process on each frame, that is, perform the operations of S501 to S503 and output the sensitivity distribution diagram corresponding to each video image frame respectively, so as to dynamically observe the skin sensitivity of the subject in real time, improving the display effect and real-time performance of skin sensitivity.
  • S501 may specifically include S5011 to S5012, which are described in detail as follows:
  • the electronic device is configured with a camera module, which can acquire the video data of the shooting object in real time and transmit the acquired video data to the processor of the electronic device for processing, such as skin sensitivity display processing.
  • since the camera module acquires the video data of the shooting object in real time, after each frame of video image is acquired, the camera module transmits the newly acquired video data to the processor of the electronic device for processing. That is, during real-time acquisition, the acquired video data is also processed by the subsequent steps in real time, and the processed video data is displayed on the display module, so as to dynamically check skin sensitivity in real time.
  • the camera module will generate a video data stream, encapsulate the video data collected in real time into data packets of the corresponding format, and transmit the data packets of each collected video data in real time through the video data stream, so that Real-time processing of video data.
  • the electronic device can parse the video data, extract each video image frame contained in it, and use each obtained video image frame as one of the above-mentioned original images.
  • the electronic device can acquire the video data of the target object in real time, and perform skin sensitivity display processing on each video image frame in the video data respectively, and obtain a schematic diagram of the sensitivity distribution corresponding to each video image frame, so as to The user can view the skin sensitivity status in real time, which improves the display effect of skin sensitivity and the real-time viewing, and improves the user experience.
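  • The per-frame pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame source is simulated with random images instead of a real camera module, and `display_skin_sensitivity` is a placeholder for the S501 to S503 processing.

```python
import numpy as np

def frame_stream(num_frames, height=4, width=4):
    """Simulate a camera module emitting numbered video frames in order.
    In a real device each frame would come from the camera's video data stream."""
    rng = np.random.default_rng(0)
    for frame_number in range(num_frames):
        frame = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
        yield frame_number, frame

def display_skin_sensitivity(frame):
    """Placeholder for S501-S503: returns a sensitivity distribution map
    with the same shape as the input frame."""
    return frame.copy()

# Process frames strictly in frame-number order, as described above.
results = []
for frame_number, frame in frame_stream(3):
    distribution_map = display_skin_sensitivity(frame)
    results.append((frame_number, distribution_map.shape))

print(results)
```

Because each frame is handled as soon as it is yielded, the same loop structure works for a live stream, which is what makes the real-time display described above possible.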
  • FIG. 8 is a specific implementation flowchart of S5012 provided by an embodiment of the present application.
  • S5012 in this embodiment of the present application specifically includes S801 to S803, which are specifically described as follows:
  • the extracting each video image frame in the video data as the original image includes:
  • image analysis is sequentially performed on the original image to determine whether the original image includes a skin area.
  • the electronic device when the electronic device acquires the video data of the shooting object in real time, it associates a frame number with each video image frame according to the sequence of shooting time, and sequentially processes each video image frame based on the frame number.
  • An original image used to generate the sensitivity distribution diagram must contain a skin area. If the original image does not contain a skin area, the original image does not need to be processed; conversely, if the original image does contain a skin area, the area image corresponding to the skin area needs to be extracted from the original image, and the operations of S502 and S503 are performed.
  • The electronic device may be configured with a skin recognition algorithm. The electronic device imports the original image into the skin recognition algorithm and outputs a recognition result corresponding to the original image, where the recognition result includes: a first result indicating that the original image contains a skin area, and a second result indicating that it does not contain a skin area.
  • the electronic device may determine to perform the operation of S802 or S803 according to the above identification result.
  • the above-mentioned skin recognition algorithm is specifically a convolutional neural network.
  • The electronic device can sequentially import each original image into the above-mentioned convolutional neural network based on the frame number. The convolutional neural network can perform multiple convolution operations on the original image through multiple built-in cascaded convolution kernels to obtain N layers of convolution vectors, where N is the number of convolution layers contained in the convolutional neural network and each convolution layer corresponds to one of the convolution kernels. The convolution vector output by the Nth layer is imported into the corresponding fully connected layer to identify whether the original image contains a skin area, and the recognition result is obtained.
  • the electronic device performs feature extraction on the original image through multiple convolution layers to determine whether the original image contains feature information associated with the skin, thereby identifying whether the skin region is included.
  • The color of skin falls within a specific range, and a skin area is typically a large, relatively flat region.
  • The electronic device can judge, for each pixel in the original image, whether the pixel value is within the above-mentioned preset interval; if so, it counts the number of pixels within the interval and checks whether those pixels are adjacent to one another. If the number of adjacent in-range pixels is greater than a preset number threshold, the original image is identified as containing a skin area.
  • The electronic device may acquire the ambient light intensity when shooting the original image, perform light intensity compensation on each pixel value in the original image based on the ambient light intensity, and perform the above range-based identification operation on the adjusted original image. This avoids color shifts in the pixel values caused by ambient light that is too bright or too dark, which would make the pixel values of the skin area deviate from the preset interval, thereby improving the accuracy of skin area identification.
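  • The range check, adjacency test, and light compensation above can be sketched as follows. The RGB bounds, the adjacency count threshold, and the gain-based compensation are all illustrative assumptions; the patent does not disclose concrete values.

```python
import numpy as np

# Illustrative thresholds -- the patent does not disclose concrete numbers.
SKIN_LOW = np.array([80, 40, 30])      # assumed lower RGB bound of skin tones
SKIN_HIGH = np.array([255, 200, 170])  # assumed upper RGB bound
MIN_ADJACENT = 4                       # assumed preset number threshold

def compensate_light(image, ambient_gain):
    """Scale pixel values to offset ambient light that is too bright or too dark."""
    return np.clip(image.astype(np.float32) * ambient_gain, 0, 255).astype(np.uint8)

def contains_skin_area(image):
    """Return True if enough mutually adjacent pixels fall in the skin range."""
    mask = np.all((image >= SKIN_LOW) & (image <= SKIN_HIGH), axis=-1)
    adjacent = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            # Count an in-range pixel only if its 4-neighbourhood also
            # contains an in-range pixel, i.e. the pixels are adjacent.
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= ny < h and 0 <= nx < w and mask[ny, nx]
                   for ny, nx in neighbours):
                adjacent += 1
    return adjacent >= MIN_ADJACENT

# A too-dark capture is brightened first, then tested for skin content.
dark = compensate_light(np.full((4, 4, 3), 60, np.uint8), ambient_gain=2.0)
print(contains_skin_area(dark))
```

Without the compensation step, the same under-exposed patch would fall below the assumed lower bound and be missed, which is exactly the failure mode the text describes.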
  • the skin area image associated with the skin area can be extracted from the original image.
  • When the electronic device detects that the original image corresponding to the current frame number does not contain a skin area, it does not need to generate a sensitivity distribution diagram for that original image; it obtains the original image of the next frame number and performs the skin area recognition operation on it.
  • If the original image corresponding to the current frame number does not contain a skin area, the original image is directly displayed. The conversion into a sensitivity distribution diagram is not performed on original images that do not contain a skin area; only original images that do contain a skin area are processed, thereby reducing unnecessary processing operations and improving conversion efficiency.
  • The electronic device can identify the area of the original image covered by the subject's skin, that is, the above-mentioned skin area image, and extract the skin area image from the original image.
  • The main identification object is the skin area of the photographed subject; no sensitivity identification is required for skin areas that do not belong to the photographed subject.
  • The electronic device may determine the skin area image as follows: extract the multiple contour curves contained in the original image through a contour recognition algorithm, and determine an image feature value for each image area enclosed by a contour curve. The image feature value includes, but is not limited to, at least one of the following: average pixel value, number of pixels, mean square error of the pixel values, and the like. If the image feature value satisfies a preset skin feature condition, the image area enclosed by the contour curve is identified as a skin area, and the skin area image is extracted from the original image.
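  • The feature-value check for one contour-enclosed region can be sketched as follows. Contour extraction itself is omitted (the region is given as a boolean mask), and the mean range, pixel-count minimum, and mean-square-error ceiling are assumed thresholds standing in for the unspecified "preset skin feature condition".

```python
import numpy as np

def is_skin_region(image, region_mask,
                   mean_range=(80.0, 220.0), min_count=6, max_mse=2500.0):
    """Check whether the image area enclosed by one contour looks like skin,
    using the feature values named in the text: average pixel value,
    number of pixels, and mean square error of the pixel values.
    All three thresholds are illustrative assumptions."""
    pixels = image[region_mask].astype(np.float32)   # (n_pixels, 3)
    mean = float(pixels.mean())
    count = int(region_mask.sum())
    mse = float(((pixels - mean) ** 2).mean())
    return (mean_range[0] <= mean <= mean_range[1]
            and count >= min_count
            and mse <= max_mse)

image = np.full((4, 4, 3), 150, np.uint8)   # flat, skin-toned patch
mask = np.zeros((4, 4), bool)
mask[1:4, 1:4] = True                        # area enclosed by one contour
print(is_skin_region(image, mask))
```

Each contour found by the recognition algorithm would be tested this way, and only the regions that pass are kept as the skin area image.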
  • The electronic device may also determine the skin area image as follows: slide an image frame of an initial size over the original image and, during the sliding frame selection, identify whether the area currently covered by the image frame contains a skin area; if it does, mark the area. After the sliding is completed, an area containing multiple marks is obtained. The image frame is then reduced based on a preset adjustment step, and the frame identification operation is repeated on the marked areas until the size of the image frame is smaller than a preset lower-limit threshold. At that point, the most recently marked area is the skin area image.
  • FIG. 9 shows a schematic diagram of extracting a skin area image provided by an embodiment of the present application.
  • The original image is specifically an image containing a hand area. The contour information corresponding to the skin area image contained in the original image is determined by a preset skin area recognition algorithm, that is, (b) in FIG. 9; and the skin area image is extracted from the original image based on the contour information, that is, (c) in FIG. 9.
  • the skin sensitivity corresponding to each pixel in the skin area image is determined.
  • the electronic device may be configured with a skin sensitivity conversion algorithm, import each pixel in the skin area image in the original image into the above conversion algorithm, and calculate the skin sensitivity corresponding to each pixel.
  • Clinically, sensitive areas of the skin exhibit redness on the skin surface.
  • Skin sensitivity is generally caused by internal and external factors such as drugs, topically applied agents, or irritants from environmental ingredients. Understanding the physiologic response of the skin to these stimuli becomes increasingly important in skin testing, medical research, and clinical drug research.
  • Skin tissue activity testing is about understanding how the microvascular network of skin tissue contributes to vasodilation and increased blood flow, and how its vascular occlusion response reduces blood flow, thereby understanding allergic hardening, the inflammatory process, and the skin's response to irritants.
  • skin sensitivity can be divided into the following three types:
  • Sensitive skin refers to a non-pathological skin state characterized by high sensitivity, weak resistance to external factors, and an obvious response to stimulation. For example, under the influence of the external environment, incorrect use of cosmetics, or destruction of the stratum corneum caused by excessive cleaning, the barrier function of the skin becomes insufficient, skin hydration is low, the skin is thin and fragile, and the subcutaneous capillaries dilate.
  • Skin irritation, also known as irritant contact dermatitis, is a phenomenon in which the skin rapidly develops redness, swelling, heat, pain, itching, and so on after being stimulated by a chemical irritant at or above a critical concentration. For example, exposure to too many stimuli in a short period of time can cause epidermal damage and barrier breakdown, trigger cellular stress responses and immune responses, and activate inflammation.
  • Skin allergy, also known as allergic contact dermatitis, is a phenomenon in which the skin develops redness, swelling, heat, pain, itching, and so on a short time after being affected by an allergen. Since the first contact of the antigen with the skin induces sensitized immunity, it takes a period of time for T cells to react. Skin allergies are caused by abnormal immune responses triggered by chemicals; low-molecular-weight chemicals (haptens) are the main cause of skin allergies and can also cause subcutaneous telangiectasia.
  • Any of the above skin sensitivity phenomena is accompanied by subcutaneous telangiectasia, that is, redness on the skin surface. Therefore, by judging the redness of the user's skin surface, the skin sensitivity of the subject can be determined.
  • The terminal device can identify the image format of the skin area image and determine whether it is an RGB image. If so, the operation of S502 is performed; otherwise, the skin area image is converted into RGB format through an image format conversion algorithm, and the operation of S502 is performed on the converted skin area image.
  • The electronic device may determine the skin sensitivity corresponding to each pixel point in the following way: the electronic device stores a mapping relationship between skin sensitivity and pixel value; it imports the pixel value of a pixel into the mapping relationship, so that the skin sensitivity corresponding to that pixel can be calculated.
  • the above-mentioned skin sensitivity can specifically be used to characterize the type of sensitivity, that is, skin allergy, skin irritation, or sensitive skin.
  • the electronic device configures corresponding sensitivity values for different skin sensitivity types. If a sensitivity type associated with any pixel is detected, the sensitivity value associated with the sensitivity type is used as the skin sensitivity of the pixel.
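  • The type-to-value configuration described above can be sketched as a simple lookup. The concrete sensitivity values and the `0` default for "no sensitivity detected" are assumptions; the patent only states that each sensitivity type is given an associated value.

```python
# Illustrative mapping from sensitivity type to sensitivity value
# (the concrete values are assumptions, not disclosed by the patent).
SENSITIVITY_VALUES = {
    "sensitive_skin": 1,   # high sensitivity, weak resistance
    "skin_irritation": 2,  # irritant contact dermatitis
    "skin_allergy": 3,     # allergic contact dermatitis
}

def skin_sensitivity_of_pixel(detected_type):
    """If a sensitivity type is associated with a pixel, use the value
    associated with that type as the pixel's skin sensitivity."""
    return SENSITIVITY_VALUES.get(detected_type, 0)  # 0: no sensitivity detected

print(skin_sensitivity_of_pixel("skin_allergy"))
```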
  • FIG. 10 shows a specific implementation flowchart of S502 provided by an embodiment of the present application.
  • S502 in this embodiment specifically includes S5021 to S5022 , which are described in detail as follows:
  • the determining the skin sensitivity corresponding to each pixel in the skin area image includes:
  • a first pixel value corresponding to the red channel and a second pixel value corresponding to the green channel of the pixel point in the original image are acquired.
  • The red blood cells in the capillaries of skin tissue strongly absorb green light and absorb little red light, while the subcutaneous dermis tissue absorbs comparatively little light at all wavelengths, making visible-light detection of heme possible. Sensitive skin tends to appear red, and the redness is caused by a high number of red blood cells in the capillaries.
  • The electronic device can capture an image of the subject's skin area under illumination containing green and red light (such as white light or red-green polarized light), and determine skin sensitivity based on the pixel values of each skin-area pixel in the green channel and the red channel.
  • The original image is specifically an RGB image, that is, each pixel in the original image corresponds to a pixel value in each channel: the first pixel value of the red channel, the second pixel value of the green channel, and the third pixel value of the blue channel. Since it is necessary to determine the degree to which the skin corresponding to each pixel in the skin area image absorbs green light and red light, the first pixel value of the red channel and the second pixel value of the green channel are obtained.
  • the first pixel value and the second pixel value are imported into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel point.
  • the electronic device may import the acquired first pixel value and the second pixel value into the sensitivity conversion algorithm to determine the skin sensitivity of the skin area corresponding to the pixel point.
  • The more red light the corresponding skin area reflects, the larger the first pixel value of the pixel; that is, skin sensitivity and the first pixel value are positively correlated. The higher the degree to which the corresponding skin area absorbs green light, the smaller the second pixel value of the pixel; that is, skin sensitivity and the second pixel value are negatively correlated.
  • The above sensitivity conversion algorithm may be a hash conversion function: the first pixel value and the second pixel value are imported into the hash conversion function to obtain the skin sensitivity corresponding to the pixel.
  • the above sensitivity conversion algorithm may also be obtained after training a neural network based on multiple training images.
  • The electronic device can use the red layers and green layers of multiple training images as the input of the neural network, use the sensitivity training distribution maps corresponding to the training images as the output of the neural network, and train the neural network. The trained neural network is then used as the above sensitivity conversion algorithm, and the first pixel value and the second pixel value corresponding to each pixel are imported into it to calculate the skin sensitivity corresponding to each pixel.
  • f is the skin sensitivity
  • r is the first pixel value
  • g is the second pixel value
  • alpha and beta are preset adjustment coefficients
  • exp(x) is an exponential function.
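  • The patent's formula itself is not reproduced in this text, only its variables (f, r, g, alpha, beta, exp) are listed. A hypothetical form consistent with those variables and with the stated correlations (f increases with r, decreases with g, via the exponential function) might look like this; the exact expression is an assumption, not the patent's disclosed formula.

```python
import math

def skin_sensitivity(r, g, alpha=1.0, beta=1.0):
    """Hypothetical sensitivity conversion: f grows with the red-channel
    value r and falls with the green-channel value g, using exp(x).
    Pixel values are normalised from [0, 255] to [0, 1] first.
    This functional form is an assumption for illustration only."""
    return math.exp(alpha * (r / 255.0) - beta * (g / 255.0))

reddish = skin_sensitivity(r=220, g=90)   # red-looking skin area
neutral = skin_sensitivity(r=150, g=150)  # balanced area
print(reddish > neutral)
```

Whatever its exact form, the conversion must preserve these two monotonic relationships, since they follow from how red blood cells absorb green light and reflect red light.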
  • The electronic device can determine, according to the difference between the first pixel value of the red channel and the second pixel value of the green channel, the deviation in absorption of red light and green light, so as to determine the concentration of red blood cells in the skin area corresponding to the pixel, and determine the skin sensitivity of that skin area based on the concentration of red blood cells, thereby improving the accuracy of the skin sensitivity.
  • The first pixel value of the red channel and the second pixel value of the green channel are selected according to the way red blood cells absorb light of different colors, so as to obtain the skin sensitivity corresponding to the pixel. This reduces the difficulty of obtaining skin sensitivity while still ensuring its accuracy.
  • The pixel value of each pixel of the skin area image in the original image is adjusted according to the skin sensitivity, the original image with the adjusted pixel values is used as the sensitivity distribution diagram, and the sensitivity distribution diagram is displayed.
  • In order to allow the user to intuitively determine the skin sensitivity of each part of the skin area, the electronic device can change the pixel value of each pixel of the skin area in the original image to show the distribution of skin sensitivity, instead of displaying raw sensitivity values, thereby improving the display effect. Based on this, the electronic device can adjust the pixel value of each pixel in the skin area according to the skin sensitivity, and use the original image with the adjusted pixel values as a sensitivity distribution diagram representing the skin sensitivity distribution of the skin area.
  • The electronic device may identify the image areas other than the skin area image as the background area image, and adjust the pixel value of each pixel in the background area image to a preset value; for example, all pixels in the background area image are set to black, so as to prevent the background from affecting the display of the sensitivity distribution of the skin area.
  • FIG. 11 shows a schematic diagram of the sensitivity distribution provided by an embodiment of the present application. As shown in FIG. 11, in the sensitivity distribution diagram, every pixel in the background area is set to a uniform pixel value to highlight the sensitivity distribution of the skin area, and a comparison table between sensitivity and pixel value is provided so that the user can determine the skin sensitivity corresponding to different pixel values.
  • the electronic device may crop the skin area image from the original image, adjust the pixel value of each pixel in the skin area image, and use the adjusted pixel value of the skin area image as the above-mentioned schematic diagram of the sensitivity distribution to display.
  • FIG. 12 shows a schematic diagram of the sensitivity distribution provided by another embodiment of the present application. As shown in FIG. 12 , the sensitivity distribution diagram only includes the skin area and does not include the background area, and a comparison table between the corresponding sensitivity and pixel value is configured, so that the user can determine the skin sensitivity corresponding to different pixel values.
  • FIG. 13 shows a specific implementation flowchart, provided by an embodiment of the present application, of adjusting in S503 the pixel value of each pixel of the skin area in the original image according to the skin sensitivity. Referring to FIG. 13, compared with the embodiment shown in FIG. 5, S503 in this embodiment specifically includes S5031 to S5032, which are described in detail as follows:
  • the adjusting the pixel value of each of the pixel points in the skin area in the original image according to the skin sensitivity includes:
  • the adjusted pixel value associated with the skin sensitivity corresponding to each pixel point is determined.
  • The electronic device may establish a pixel mapping relationship, which defines the pixel value corresponding to each skin sensitivity in the sensitivity distribution diagram.
  • The electronic device may query the pixel value associated with each pixel's skin sensitivity in the above pixel mapping relationship, and use the queried pixel value as the adjusted pixel value.
  • the above-mentioned pixel mapping relationship may specifically be a conversion algorithm, and the skin sensitivity of the pixel is imported into the above-mentioned conversion algorithm to determine the adjusted pixel value corresponding to the skin sensitivity.
  • The adjusted pixel value may include pixel values for multiple channels. If the original image is an RGB image, the adjusted pixel value includes the pixel values of the red, green, and blue channels; if the original image is a CMYK (printing color mode) image, the adjusted pixel value includes the pixel values of the cyan, magenta, yellow, and black channels.
  • the original pixel value of the pixel in the original image is replaced with the adjusted pixel value.
  • The electronic device may replace the pixel value of each pixel of the skin area image in the original image with the adjusted pixel value determined for that pixel, so as to adjust the pixel values in the original image.
  • The pixel value of each pixel in the original image is set based on the adjusted pixel value, so that the sensitivity distribution diagram is generated automatically, improving generation efficiency.
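  • S5031 and S5032 together can be sketched as follows. The level-to-color table is an illustrative stand-in for the pixel mapping relationship (the patent does not disclose concrete colors), and the background blackout follows the preset-value handling described earlier.

```python
import numpy as np

# Illustrative pixel mapping relationship: each skin sensitivity level is
# mapped to an adjusted RGB pixel value (values here are assumptions).
PIXEL_MAP = {
    0: (0, 128, 0),     # not sensitive  -> green
    1: (255, 255, 0),   # sensitive skin -> yellow
    2: (255, 128, 0),   # irritation     -> orange
    3: (255, 0, 0),     # allergy        -> red
}

def render_distribution_map(original, skin_mask, sensitivity):
    """Replace each skin pixel's original value with the adjusted value
    associated with its skin sensitivity; background pixels are set to a
    uniform preset value (black) so they do not distract from the skin area."""
    out = np.zeros_like(original)
    for level, colour in PIXEL_MAP.items():
        out[skin_mask & (sensitivity == level)] = colour
    return out

original = np.full((2, 2, 3), 120, np.uint8)
skin_mask = np.array([[True, True], [False, True]])
sensitivity = np.array([[0, 3], [0, 1]])
dist_map = render_distribution_map(original, skin_mask, sensitivity)
print(dist_map[0, 1], dist_map[1, 0])
```

Because only the pixel values change, the skin-area contour of the resulting map stays identical to the original image, which is the property the text relies on for local readability.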
  • Generating and displaying the sensitivity distribution diagram may specifically be: displaying the sensitivity distribution diagram associated with each original image in sequence, based on the frame number of each original image.
  • The electronic device sequentially determines, according to the frame number, the skin sensitivity of each pixel in the skin area of each original image, and adjusts the pixel value of each pixel in the original image based on the skin sensitivity to generate the sensitivity distribution diagram of that original image. Based on this, the electronic device can also display the sensitivity distribution diagrams corresponding to the original images in sequence based on their frame numbers, so as to display skin sensitivity dynamically and improve the display effect.
  • The method for displaying skin sensitivity can obtain an original image containing a photographed object, determine a skin area image from the original image, obtain the skin sensitivity associated with each pixel according to the pixel value of each pixel in the skin area image, and adjust the pixel value of the corresponding pixel in the original image according to the determined skin sensitivity, so as to generate a sensitivity distribution diagram. From the sensitivity distribution diagram, the user can understand the overall skin sensitivity and also determine the sensitivity of each local skin area. On the one hand, since the sensitivity distribution diagram is generated from the original image by adjusting pixel values, the contour of the skin area is consistent with the original image, so the sensitivity corresponding to each local area can be determined, which improves the display effect. On the other hand, the generation of the sensitivity distribution diagram can be completed by an electronic device with a camera module alone, without the user having to visit a specific medical institution, which greatly improves the convenience of skin sensitivity acquisition, reduces the difficulty of acquisition, and improves the efficiency of skin detection.
  • Embodiment 2:
  • FIG. 14 shows a structural block diagram of the skin sensitivity display device provided by an embodiment of the present application. For ease of description, only the parts relevant to this embodiment are shown.
  • the skin sensitivity display device includes:
  • An original image acquisition unit 141 configured to acquire an original image containing a photographed subject, and extract a skin area image of the photographed subject from the original image;
  • a skin sensitivity determination unit 142 configured to determine the skin sensitivity corresponding to each pixel in the skin area image
  • the sensitivity distribution schematic display unit 143 is configured to adjust the pixel value of each pixel in the skin area in the original image according to the skin sensitivity, and generate and display a sensitivity distribution schematic.
  • the skin sensitivity determination unit 142 includes:
  • a pixel value acquisition unit, configured to acquire the first pixel value corresponding to the red channel and the second pixel value corresponding to the green channel of the pixel point in the original image;
  • a skin sensitivity conversion unit configured to import the first pixel value and the second pixel value into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel point.
  • the sensitivity conversion algorithm is specifically:
  • f is the skin sensitivity
  • r is the first pixel value
  • g is the second pixel value
  • alpha and beta are preset adjustment coefficients
  • exp(x) is an exponential function.
  • the original image acquisition unit 141 includes:
  • a video data acquisition unit used for acquiring real-time collected video data
  • a video image frame extraction unit configured to extract each video image frame in the video data as the original image
  • the sensitivity distribution schematic display unit 143 is specifically used for:
  • the video image frame extraction unit includes:
  • a skin area identification unit configured to perform image analysis on the original image in turn based on the frame number, and determine whether the original image contains a skin area
  • a first operation unit configured to perform an operation of extracting a skin area image of the photographed subject from the original image if the original image includes a skin area
  • the second operation unit is configured to analyze the original image corresponding to the next frame number if the original image does not contain a skin area, and perform the operation of judging whether the original image contains a skin area.
  • the sensitivity distribution schematic display unit 143 includes:
  • an adjustment pixel value query unit configured to determine the adjusted pixel value associated with the skin sensitivity corresponding to each pixel point based on a preset pixel mapping relationship
  • An adjusted pixel value replacement unit configured to replace the original pixel value of the pixel point in the original image with the adjusted pixel value.
  • the original image acquisition unit 141 includes:
  • the polarized light photographing unit is configured to turn on a supplementary light module equipped with a polarizer if it is in a sensitivity detection mode, and acquire the original image of the photographed object under illumination by the supplementary light module.
  • The skin sensitivity display device can likewise obtain an original image containing the photographed object, determine the skin area image from the original image, obtain the skin sensitivity associated with each pixel according to the pixel value of each pixel in the skin area image, and adjust the pixel value of the corresponding pixel in the original image according to the determined skin sensitivity, so as to generate a sensitivity distribution diagram from which the user can understand the overall skin sensitivity.
  • On the one hand, since the sensitivity distribution diagram is generated from the original image by adjusting pixel values, the contour of the skin area is consistent with the original image, so the sensitivity corresponding to each local area can be determined, which improves the display effect. On the other hand, the generation of the sensitivity distribution diagram can be completed by an electronic device with a camera module alone, without the user having to visit a specific medical institution, which greatly improves the convenience of skin sensitivity acquisition, reduces the difficulty of acquisition, and improves the efficiency of skin detection.
  • FIG. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The electronic device 15 of this embodiment includes: at least one processor 150 (only one is shown in FIG. 15), a memory 151, and a computer program 152 stored in the memory 151 and executable on the at least one processor 150. When the processor 150 executes the computer program 152, the steps in any of the foregoing embodiments of the method for displaying skin sensitivity are implemented.
  • the electronic device 15 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the electronic device may include, but is not limited to, the processor 150 and the memory 151 .
  • FIG. 15 is only an example of the electronic device 15 and does not constitute a limitation on the electronic device 15. It may include more or fewer components than shown, combine some components, or use different components; for example, it may also include input and output devices, network access devices, and the like.
  • The processor 150 may be a central processing unit (CPU); the processor 150 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • The memory 151 may, in some embodiments, be an internal storage unit of the electronic device 15, such as a hard disk or memory of the electronic device 15. In other embodiments, the memory 151 may also be an external storage device of the electronic device 15, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card. Further, the memory 151 may include both an internal storage unit and an external storage device of the electronic device 15.
  • the memory 151 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, such as program codes of the computer program. The memory 151 may also be used to temporarily store data that has been output or will be output.
  • An embodiment of the present application further provides an electronic device, including at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
  • Embodiments of the present application further provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • Embodiments of the present application provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal implements the steps in the foregoing method embodiments when executing it.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • All or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware.
  • The computer program can be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the above method embodiments.
  • The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like.
  • The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/electronic device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
  • the disclosed apparatus/network device and method may be implemented in other manners.
  • the apparatus/network device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, or as an indirect coupling or communication connection between apparatuses or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Abstract

This application is applicable to the field of image processing technologies, and provides a skin sensitivity display method and apparatus, an electronic device, and a readable storage medium. The method includes: acquiring an original image containing a photographed subject, and extracting a skin region image of the subject from the original image; determining the skin sensitivity corresponding to each pixel in the skin region image; adjusting, according to the skin sensitivity, the pixel value of each pixel of the skin region image in the original image, taking the original image with adjusted pixel values as a sensitivity distribution map, and displaying the sensitivity distribution map. Because the technical solution provided in this application generates the map from the original image by adjusting pixel values, the contour of the skin region is consistent with the original image; by viewing the sensitivity distribution map, the sensitivity corresponding to each local region can be determined, which improves the display effect.

Description

Skin sensitivity display method and apparatus, electronic device, and readable storage medium
This application claims priority to Chinese Patent Application No. 202010944311.9, filed with the China National Intellectual Property Administration on September 9, 2020 and entitled "Skin sensitivity display method and apparatus, electronic device, and readable storage medium", which is incorporated herein by reference in its entirety.
Technical Field
This application belongs to the field of data acquisition technologies, and in particular, relates to a skin sensitivity display method and apparatus, an electronic device, and a readable storage medium.
Background
As the organ occupying the largest area of the human body, the skin's condition is particularly important. Skin condition can be determined through multiple indicators, such as skin sensitivity. Skin sensitivity mainly refers to how strongly the skin reacts to external stimuli; the higher the sensitivity, the stronger the reaction. How to let users learn their own skin sensitivity effectively and quickly has become an urgent problem to solve.
Existing skin detection technologies usually require users to go to a designated medical institution to be examined with professional instruments and obtain a corresponding skin detection report. Such a report only tells the user the overall sensitivity of the skin; the specific sensitivity of different regions cannot be learned in real time, so the display effect of the report is poor.
Summary
Embodiments of this application provide a skin sensitivity display method and apparatus, an electronic device, and a readable storage medium, which can improve measurement accuracy while reducing measurement cost.
According to a first aspect, an embodiment of this application provides a skin sensitivity display method, applied to an electronic device and including:
acquiring an original image containing a photographed subject, and extracting a skin region image of the subject from the original image;
determining the skin sensitivity corresponding to each pixel in the skin region image; and
adjusting, according to the skin sensitivity, the pixel value of each pixel of the skin region image in the original image, taking the original image with adjusted pixel values as a sensitivity distribution map, and displaying the sensitivity distribution map.
Implementing the embodiments of this application has the following beneficial effects: a ranging image containing a target object is acquired, where the target object is a user performing a contactless interaction; a ranging reference parameter for display is extracted from the ranging image, so that the distance between the electronic device and the target object can be determined from the reference parameter. The electronic device needs only one camera module: measuring the distance in this embodiment relies neither on a depth image nor on the shooting-angle difference of a binocular camera, so the electronic device does not need to be equipped with modules such as a light-pulse transceiver or a binocular camera, which greatly reduces its manufacturing cost. Meanwhile, because one or more ranging reference parameters are determined during measurement instead of acquiring the distance value directly, ranging accuracy can be improved.
In a possible implementation of the first aspect, determining the skin sensitivity corresponding to each pixel in the skin region image includes:
acquiring a first pixel value of the pixel in the red channel of the original image and a second pixel value in the green channel; and
importing the first pixel value and the second pixel value into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel.
In a possible implementation of the first aspect, the sensitivity conversion algorithm is specifically:
Figure PCTCN2021113689-appb-000001
where f is the skin sensitivity; r is the first pixel value; g is the second pixel value; alpha and beta are preset adjustment coefficients; and exp(x) is the exponential function.
In a possible implementation of the first aspect, acquiring the original image containing the photographed subject includes:
acquiring video data collected in real time; and
extracting each video image frame in the video data as the original image;
and displaying the sensitivity distribution map includes:
displaying, based on the frame number of each original image, the sensitivity distribution map associated with each original image in sequence.
In a possible implementation of the first aspect, extracting each video image frame in the video data as the original image includes:
performing image analysis on the original images in sequence based on the frame numbers, and determining whether an original image contains a skin region;
if the original image contains a skin region, performing the operation of extracting the skin region image of the photographed subject from the original image; and
if the original image does not contain a skin region, analyzing the original image corresponding to the next frame number, and performing the operation of determining whether the original image contains a skin region.
In a possible implementation of the first aspect, adjusting, according to the skin sensitivity, the pixel value of each pixel of the skin region in the original image includes:
determining, based on a preset pixel mapping relationship, an adjusted pixel value associated with the skin sensitivity of each pixel; and
replacing the original pixel value of the pixel in the original image with the adjusted pixel value.
In a possible implementation of the first aspect, acquiring the original image containing the photographed subject includes:
in a sensitivity detection mode, turning on a fill-light module fitted with a polarizer, and acquiring the original image of the photographed subject illuminated by the fill-light module.
According to a second aspect, an embodiment of this application provides a skin sensitivity display apparatus, including:
an original image acquisition unit, configured to acquire an original image containing a photographed subject and extract a skin region image of the subject from the original image;
a skin sensitivity determination unit, configured to determine the skin sensitivity corresponding to each pixel in the skin region image; and
a sensitivity distribution map display unit, configured to adjust, according to the skin sensitivity, the pixel value of each pixel of the skin region image in the original image, take the original image with adjusted pixel values as a sensitivity distribution map, and display the sensitivity distribution map.
According to a third aspect, an embodiment of this application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the skin sensitivity display method of any one of the implementations of the first aspect.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the skin sensitivity display method of any one of the implementations of the first aspect.
According to a fifth aspect, an embodiment of this application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the skin sensitivity display method of any one of the implementations of the first aspect.
According to a sixth aspect, an embodiment of this application provides a chip system including a processor coupled to a memory, where the processor executes a computer program stored in the memory to implement the skin sensitivity display method of any one of the implementations of the first aspect.
It can be understood that, for the beneficial effects of the second to sixth aspects, reference may be made to the related description in the first aspect; details are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 3 is a block diagram of the software structure of an electronic device according to an embodiment of this application;
FIG. 4 is a schematic diagram of a picture output by a smart makeup mirror according to an embodiment of this application;
FIG. 5 is a flowchart of an implementation of a skin sensitivity display method according to an embodiment of this application;
FIG. 6 is a schematic diagram of a scenario in which an electronic device acquires an original image according to an embodiment of this application;
FIG. 7 shows an interface for selecting an original image according to an embodiment of this application;
FIG. 8 is a flowchart of a specific implementation of S5012 according to an embodiment of this application;
FIG. 9 is a schematic diagram of extracting a skin region image according to an embodiment of this application;
FIG. 10 is a flowchart of a specific implementation of S502 according to an embodiment of this application;
FIG. 11 is a sensitivity distribution map according to an embodiment of this application;
FIG. 12 is a sensitivity distribution map according to another embodiment of this application;
FIG. 13 is a flowchart of a specific implementation, in S503, of adjusting the pixel value of each pixel of the skin region in the original image according to the skin sensitivity, according to an embodiment of this application;
FIG. 14 is a structural block diagram of a skin sensitivity display apparatus according to an embodiment of this application;
FIG. 15 is a schematic diagram of an electronic device according to an embodiment of this application.
Detailed Description
In the following description, specific details such as particular system structures and technologies are set forth for illustration rather than limitation, so as to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of this application.
It should be understood that, when used in the specification and the appended claims of this application, the term "include" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of this application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims of this application, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and the appended claims of this application, the terms "first", "second", "third", and the like are merely used to distinguish descriptions and shall not be understood as indicating or implying relative importance.
Reference in this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the statements "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
The skin sensitivity display method provided in the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) devices, notebook computers, ultra-mobile personal computers (ultra-mobile personal computer, UMPC), netbooks, personal digital assistants (personal digital assistant, PDA), and smart makeup mirrors; the embodiments of this application place no limitation on the specific type of the electronic device.
For example, the electronic device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (Session Initiation Protocol, SIP) phone, a wireless local loop (Wireless Local Loop, WLL) station, a personal digital assistant (Personal Digital Assistant, PDA) device, a handheld device with wireless communication capability, a computing device or another processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, and/or another device for communicating over a wireless system, as well as a next-generation communication system, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (Public Land Mobile Network, PLMN).
图1示出了电子设备100的一种结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。 在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接 口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波, 并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED), Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。显示屏194可包括触控面板以及其他输入设备。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,脸部识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C, 耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖 皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传 感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
FIG. 2 shows a schematic structural diagram of another electronic device according to an embodiment of this application. In this embodiment, the electronic device is specifically a smart mirror, which can, for example, process the captured picture in real time with a preset algorithm and display it on the mirror surface. The smart mirror includes at least a camera module 201, a display module 202, and a data processing device 203. The camera module 201 may be used to capture a picture containing the photographed subject and display the captured picture through the display module 202; if the captured picture needs to be processed, it may be adjusted by the data processing device 203 and then output through the display module 202.
Preferably, the smart mirror may further include a fill light 204. When detecting that the ambient light intensity of the current scene is low, the electronic device may turn on the fill light 204 to perform a fill-light operation, thereby increasing the overall brightness of the captured picture. Optionally, the fill light 204 is fitted with a polarizer; by controlling the angle of the polarizer, the fill light emits illumination of a predetermined color. For example, the fill light 204 may emit red and green light, so that the original image of the photographed subject under red and green illumination is acquired.
电子设备的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图3是本申请实施例的电子设备的一种软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图3所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN, 蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
Embodiment 1:
When a user needs to have their skin examined, this can be done in the following three ways:
Method 1: skin detection with a dedicated skin detection device. Such a device is typically a medical skin detection apparatus which, because of factors such as manufacturing cost and device size, is usually purchased by medical institutions. A user who wants to be examined with it generally has to go to the corresponding medical institution, and the output skin detection report mainly gives the values of various detection indicators, such as melanin content, acne density, roughness, tissue oxygen content, and skin type classification (e.g. oily or dry skin), from which the user learns their overall skin condition. However, this approach is difficult: because the detection device is expensive, users can hardly buy one for home use and must travel to a medical institution, which greatly increases the difficulty and cost of skin detection. Moreover, such a device cannot report the local skin condition, so the display effect of the report is poor. In addition, the device's data processing takes a long time and has low real-time performance: the user has to wait a long time for the report, so report acquisition is inefficient.
Method 2: skin detection with a smart makeup mirror. Existing smart makeup mirrors generally only offer skin-beautifying and whitening functions, i.e. beautifying the face in a portrait, and have no skin detection function. During face beautification, freckles, dark circles, and the like can be recognized by an intelligent recognition algorithm, and these blemish regions can be beautified, blurred, or whitened. Exemplarily, FIG. 4 shows a schematic diagram of a picture output by a smart makeup mirror according to an embodiment of this application. As shown in FIG. 4, the smart makeup mirror can recognize facial blemish regions and mark them so that the user learns the current skin condition. However, existing smart makeup mirrors cannot determine skin sensitivity; they support few detection items and cannot meet users' skin detection needs.
Method 3: skin detection with an application on a smartphone. Similar to skin detection with a smart makeup mirror, a smartphone installed with such an application can usually only beautify and whiten images containing the user's face and recognize facial blemishes; it likewise cannot determine skin sensitivity, supports few detection items, and cannot meet users' skin detection needs.
It can be seen that none of the three methods above can both obtain a skin sensitivity detection result efficiently and reveal the local distribution of skin sensitivity. Therefore, to overcome these deficiencies of existing skin detection technologies, this application provides a skin sensitivity display method, detailed as follows. As shown in FIG. 5, the skin sensitivity display method is executed by an electronic device, which may be a smartphone, a tablet computer, a computer, a smart game console, or any device configured with a camera module. Optionally, the electronic device may be a smart mirror: the camera module mounted on the smart mirror captures an image of the user using the mirror, the processor processes the user image, and the processed user image is then output on the "mirror surface" (i.e. the display module) of the smart mirror, so as to simulate the scenario of the user looking into a mirror; the picture presented on the "mirror surface" can be adjusted according to the user's needs. FIG. 5 shows a flowchart of an implementation of the skin sensitivity display method according to an embodiment of this application, detailed as follows:
In S501, an original image containing a photographed subject is acquired, and a skin region image of the subject is extracted from the original image.
In this embodiment, the electronic device may have a built-in camera module through which the original image containing the photographed subject is acquired. The subject can move the body part to be examined into the shooting area of the electronic device so that the captured original image contains the skin region of that part. Exemplarily, FIG. 6 shows a schematic diagram of a scenario in which the electronic device acquires the original image according to an embodiment of this application. As shown in FIG. 6, the electronic device is specifically a smart mirror that can be placed on a desktop; with the user seated, the smart mirror can photograph the user's face. In this scenario, the photographed subject is the user, and the skin region to be examined is that of the face.
In a possible implementation, the electronic device may receive the original image fed back by an external camera module. The electronic device can establish a communication connection with the external camera module through a wired interface or a wireless communication module and receive the original image containing the photographed subject collected by that module. The camera module may be a shooting module dedicated to skin detection, for example a camera configured with multiple different light-emitting devices that can emit violet, red, ultraviolet, infrared light, and so on. The camera module can acquire original images of the subject under different illumination and feed the captured original images back to the electronic device over the communication link between them. Some skin detection methods, such as skin sensitivity detection or acne density detection, may require the original image to be collected with the subject under specific illumination; accordingly, the camera module can be configured with multiple different light-emitting devices according to the needs of the detection items, so that when a corresponding detection item is performed, the associated light-emitting device is turned on and the subject is illuminated with the corresponding light to obtain the original image.
In a possible implementation, the electronic device may select an already captured image from the gallery as the original image. FIG. 7 shows an interface for selecting an original image according to an embodiment of this application. As shown in (a) of FIG. 7, when detecting that the user taps the gallery control 701, the electronic device may enter the gallery display interface, as shown in (b) of FIG. 7. The gallery display interface may show images already captured or obtained from other devices; when the electronic device detects a preset selection operation on any existing image, it takes that image as the target image, runs the skin sensitivity display procedure on it, and performs the operations of S501 to S503 with the target image as the original image. For example, the preset selection operation may be a long press: if the user long-presses any existing image, the control 702 shown in (c) of FIG. 7 pops up, which is an edit menu applicable to the image and contains a "skin sensitivity detection" item, namely control 703; if the electronic device detects that the user taps control 703, it executes the skin sensitivity display procedure. For another example, when the electronic device detects that the user taps any existing image, it may display a preview page of that image, as shown in (d) of FIG. 7; the preview page contains multiple edit controls, including a skin sensitivity detection control 704. When the electronic device detects that the user taps control 704, it may execute the skin sensitivity display procedure and output the sensitivity distribution map corresponding to the previewed image.
In a possible implementation, the electronic device may be configured with multiple display modes, including but not limited to a normal display mode and a sensitivity detection mode. In the normal display mode, the electronic device can directly display and output the original image without further processing; in the sensitivity detection mode, the user wants to view their skin sensitivity, and the operations of S501 to S503 are performed to display the sensitivity distribution image corresponding to the original image.
In a possible implementation, the electronic device may acquire real-time video data containing the photographed subject; the real-time video data is data captured by the camera module at a preset frame rate, with each frame corresponding to one image. The electronic device can extract the video image frames from the video data and run the skin sensitivity display procedure on each of them, i.e. perform the operations of S501 to S503 and output the sensitivity distribution map corresponding to each frame, so that the subject's skin sensitivity can be observed dynamically in real time, improving the display effect and real-time performance of skin sensitivity.
Further, when what the electronic device acquires is real-time video data, S501 may specifically include S5011 to S5012, detailed as follows:
In S5011, video data collected in real time is acquired.
In this embodiment, the electronic device is configured with a camera module that can acquire video data of the photographed subject in real time and transmit it to the processor of the electronic device for processing, for example the skin sensitivity display processing. It should be noted that, because the camera module collects the subject's video data in real time, after each video image frame is acquired, the camera module transmits the newly collected video data to the processor; that is, while the video data is being collected in real time, the acquired video data also undergoes the subsequent processing steps in real time, and the processed video data is shown on the display module, achieving real-time dynamic viewing of skin sensitivity.
In a possible implementation, the camera module generates a video data stream, encapsulates the video data collected in real time into data packets of a corresponding format, and transmits the data packets of the collected video data in real time through the stream, so that the video data can be processed in real time.
In S5012, each video image frame in the video data is extracted as the original image.
In this embodiment, after acquiring the video data, the electronic device can parse it, extract every video image frame contained in it, and use each frame as the original image above, so as to generate the sensitivity distribution map corresponding to each frame.
In this embodiment of the application, the electronic device can acquire the target subject's video data in real time and perform the skin sensitivity display processing on every video image frame separately, obtaining the sensitivity distribution map of each frame, so that the user can view the skin sensitivity status in real time; this improves the display effect and the immediacy of viewing, and enhances the user experience.
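The per-frame flow of S5011 to S5012 can be sketched as follows. This is a minimal illustration only: the `Frame` structure, the `number_frames` name, and the use of a list of pixel values as a stand-in for decoded image data are all assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Frame:
    number: int        # frame number assigned in order of shooting time
    pixels: List[int]  # stand-in for the decoded image data of this frame

def number_frames(raw_frames: Iterable[List[int]]) -> Iterator[Frame]:
    """Associate each decoded video image frame with a frame number in
    capture order, so later stages can process the frames and display
    their sensitivity distribution maps in sequence."""
    for i, pixels in enumerate(raw_frames):
        yield Frame(number=i, pixels=pixels)

# Each yielded Frame would then be treated as one "original image" and
# passed through S501-S503 before the next frame is consumed.
```

A generator is used so frames can be processed as they arrive rather than after the whole video is buffered, matching the real-time requirement above.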
Further, as another embodiment of this application, FIG. 8 is a flowchart of a specific implementation of S5012 according to an embodiment of this application. As shown in FIG. 8, compared with the previous embodiment, S5012 in this embodiment specifically includes S801 to S803, described as follows:
Further, extracting each video image frame in the video data as the original image includes:
In S801, image analysis is performed on the original images in sequence based on the frame numbers, and whether an original image contains a skin region is determined.
In this embodiment, when acquiring the subject's video data in real time, the electronic device associates each video image frame with a frame number according to the order of shooting time and processes the frames in sequence based on the frame numbers. Because an original image for which a sensitivity distribution map is displayed must contain a skin region image, if the original image contains no skin region, it does not need to be processed; conversely, if it does, the region image corresponding to the skin region needs to be extracted from the original image, and the operations of S502 and S503 are performed.
In a possible implementation, the electronic device may be configured with a skin recognition algorithm; importing the original image into this algorithm outputs a recognition result for the image, which is either a first result that a skin region is contained or a second result that none is. Based on the result, the electronic device decides whether to perform S802 or S803.
In a possible implementation, the skin recognition algorithm is specifically a convolutional neural network. Based on the frame numbers, the electronic device imports the original images one by one into the convolutional neural network, which performs multiple convolution operations on the original image through multiple built-in cascaded convolution kernels to obtain N layers of convolution vectors, where N is the number of convolution layers contained in the network and each convolution layer corresponds to one kernel; the convolution vector output by the N-th layer is imported into the corresponding fully connected layer to recognize whether the original image contains a skin region, yielding the recognition result. Through the multiple convolution layers, the electronic device extracts features from the original image to determine whether it contains feature information associated with skin, thereby recognizing whether a skin region is present.
In a possible implementation, skin color falls within a specific interval, and a skin region has characteristics such as a large area and a flat appearance. Based on these two points, the electronic device can judge, from the pixel value of each pixel in the original image, whether the pixel value lies within the preset interval; if so, it counts the number of pixels within the interval and checks whether these in-range pixels are adjacent. If they are adjacent and their number is greater than a preset count threshold, the original image is recognized as containing a skin region.
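The interval-plus-adjacency test just described can be sketched as below. The sketch simplifies the image to a single-channel grid of intensities and uses 4-connectivity for "adjacent"; the interval bounds and count threshold are illustrative values, not ones given in the patent.

```python
def contains_skin_region(image, lo, hi, min_pixels):
    """Mark pixels whose value falls in [lo, hi], then report whether any
    4-connected group of marked pixels has at least min_pixels members,
    approximating the 'in-range, adjacent, and numerous enough' test."""
    h, w = len(image), len(image[0])
    mask = [[lo <= image[y][x] <= hi for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component and measure its size.
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                best = max(best, size)
    return best >= min_pixels
```

A production implementation would work on color values and calibrated skin intervals, but the shape of the check — threshold, group, count — is the same.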
In a possible implementation, the electronic device can obtain the ambient light intensity at the time the original image was captured, perform light intensity compensation on the pixel values in the original image based on it, and then perform the interval-range recognition on the adjusted original image. This prevents an overly bright or dark environment from causing a color cast in the original image that would shift the skin region's pixel values out of the interval, thereby improving the accuracy of skin region recognition.
In S802, if the original image contains a skin region, the operation of extracting the skin region image of the photographed subject from the original image is performed.
In this embodiment, when detecting that the original image corresponding to the current frame number contains a skin region, the electronic device can extract the skin region image associated with that region from the original image.
In S803, if the original image does not contain a skin region, the original image corresponding to the next frame number is analyzed, and the operation of determining whether the original image contains a skin region is performed.
In this embodiment, when detecting that the original image of the current frame number contains no skin region, the electronic device does not need to generate a sensitivity distribution map for it; it acquires the original image of the next frame number and performs the skin region recognition operation.
In a possible implementation, if it is detected that the original image of the current frame number contains no skin region, the original image is displayed directly.
In this embodiment of the application, before the skin sensitivity of an original image is determined, whether the image contains a skin region is first judged: original images without a skin region are not converted into sensitivity distribution maps, and only those containing a skin region are processed, which reduces unnecessary processing and improves conversion efficiency.
In this embodiment, the electronic device can identify the region of the original image covered by the subject's skin, i.e. the skin region image above, and extract it from the original image. When determining skin sensitivity, the main recognition target is the subject's skin region; other regions that are not the subject's skin need no sensitivity recognition.
In a possible implementation, the electronic device may determine the skin region image as follows: extract, through a contour recognition algorithm, the multiple contour curves contained in the original image, and determine, for the image region enclosed by each contour curve, the region's image feature values, including but not limited to at least one of: average pixel value, number of pixels, mean square deviation of pixel values, and so on. If the feature values satisfy preset skin feature conditions, the image region enclosed by the contour curve is recognized as a skin region, thereby extracting the skin region image from the original image.
In a possible implementation, the electronic device may also determine the skin region image as follows: slide an image box of an initial size over the original image; during the sliding, recognize whether the region currently covered by the box contains a skin region and, if so, mark that region. After the sliding is complete, regions with multiple marks are obtained; the image box is then shrunk by a preset adjustment step, and the box-and-recognize operation is performed on the marked regions, until the size of the image box is smaller than a preset lower threshold, at which point the last marked regions constitute the skin region image.
Exemplarily, FIG. 9 shows a schematic diagram of extracting a skin region image according to an embodiment of this application. As shown in (a) of FIG. 9, the original image specifically contains a hand region; the contour information corresponding to the skin region image contained in the original image is determined through a preset skin region recognition algorithm, as shown in (b) of FIG. 9, and the skin region image is extracted from the original image based on the contour information, as shown in (c) of FIG. 9.
In S502, the skin sensitivity corresponding to each pixel in the skin region image is determined.
In this embodiment, the electronic device may be configured with a skin sensitivity conversion algorithm; the pixels of the skin region image in the original image are imported into the conversion algorithm, and the skin sensitivity corresponding to each pixel is calculated.
In this embodiment, the reason the skin region's sensitivity can be determined from an original image captured by any camera module is as follows: clinically, skin-sensitive regions present redness on the external skin surface. Skin sensitivity is generally caused by internal and external factors such as drugs, topically applied agents, or environmental irritants. In skin testing, medical research, and clinical drug research, understanding the skin's physiological response to these irritants has become increasingly important; skin tissue activity testing aims to understand the ability of the microvascular network of skin tissue to respond with vasodilation that increases blood flow and vascular obstruction that decreases it, and thus to understand the skin's allergic hardening, inflammatory process, and irritation in response to irritants. Skin sensitivity can specifically be divided into the following three types:
1. Sensitive skin: sensitivity here denotes a normal skin state, referring to skin with high perception sensitivity, weak resistance to the outside world, and obvious reactions when stimulated. For example, environmental influences, incorrect use of cosmetics, or over-cleansing that damages the stratum corneum can all impair the skin's barrier function and lower its hydration, making the skin thin and fragile, with dilated subcutaneous capillaries.
2. Skin irritation: also called irritant contact dermatitis, this denotes a phenomenon in which the skin rapidly develops redness, swelling, heat, pain, itching, and so on after being stimulated by a chemical irritant at or above a critical concentration; for example, contact with too many irritants within a short time damages the epidermis and breaks the barrier, triggering cellular stress and immune responses and activating inflammation.
3. Skin allergy: also called allergic contact dermatitis, this denotes a phenomenon in which the skin develops redness, swelling, heat, pain, itching, and so on within a short time after the action of an allergen. Because the first contact of an antigen with the skin induces sensitization immunity, T cells react only after a period of time; after sensitization, renewed contact with the antigen produces allergic contact dermatitis within a short time. Skin allergy is caused by an abnormal immune response triggered by chemicals, with low-molecular-weight chemicals (haptens) being the main cause; this reaction also leads to dilation of subcutaneous capillaries.
Any of the above skin sensitivity phenomena is accompanied by dilation of subcutaneous capillaries, i.e. redness on the skin surface; therefore, the subject's skin sensitivity can be determined by judging the redness of the user's skin surface.
In a possible implementation, calculating skin sensitivity requires the red, green, and blue channel values of each pixel, i.e. the skin region image is specifically an RGB image. Based on this, the terminal device can identify the image format of the skin region image and judge whether it is in RGB format; if so, S502 is performed; otherwise, the skin region image can be converted into RGB format through an image format conversion algorithm, and S502 is performed on the converted image.
In a possible implementation, the electronic device may determine the skin sensitivity of each pixel as follows: the electronic device stores a mapping relationship between skin sensitivity and pixel value, and importing a pixel's value into this mapping yields the pixel's corresponding skin sensitivity.
In a possible implementation, according to the three classifications of skin sensitivity above, the skin sensitivity may specifically characterize the sensitivity type, i.e. skin allergy, skin irritation, or sensitive skin. The electronic device configures a corresponding sensitivity value for each sensitivity type; when the sensitivity type corresponding to any pixel is detected, the sensitivity value associated with that type is taken as the pixel's skin sensitivity.
As another embodiment of this application, FIG. 10 shows a flowchart of a specific implementation of S502 according to an embodiment of this application. As shown in FIG. 10, compared with the embodiment shown in FIG. 5, S502 in this embodiment specifically includes S5021 to S5022, detailed as follows:
Further, determining the skin sensitivity corresponding to each pixel in the skin region image includes:
In S5021, the first pixel value of the pixel in the red channel of the original image and the second pixel value in the green channel are acquired.
In this embodiment, red blood cells in the capillaries of skin tissue strongly absorb green light and absorb very little red light, whereas the subcutaneous dermal tissue absorbs relatively little light of all wavelengths, which makes detection of hemoglobin with visible light possible. Sensitive skin tends to appear reddish, and this redness is precisely caused by the larger number of red blood cells in the capillaries. Based on this, the electronic device can photograph the subject's skin region under illumination containing green and red light (e.g. white light or red-green polarized light) and determine the skin sensitivity from the pixel values of the skin region in the green and red channels.
In this embodiment, the original image is specifically an RGB image, i.e. each pixel in the original image has one value per channel: a first pixel value in the red channel, a second pixel value in the green channel, and a third pixel value in the blue channel. Because the degree to which the skin corresponding to each pixel of the skin region image absorbs green and red light needs to be determined, the first pixel value of the red channel and the second pixel value of the green channel are acquired.
In S5022, the first pixel value and the second pixel value are imported into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel.
In this embodiment, the electronic device can import the acquired first and second pixel values into the sensitivity conversion algorithm to determine the skin sensitivity of the skin region corresponding to the pixel. The less the skin region corresponding to a pixel absorbs red light, the larger the pixel's first pixel value, i.e. skin sensitivity is positively correlated with the first pixel value; the more the skin region absorbs green light, the smaller the pixel's second pixel value, i.e. skin sensitivity is negatively correlated with the second pixel value.
In a possible implementation, the sensitivity conversion algorithm may be a hash conversion function; importing the first and second pixel values into it yields the pixel's skin sensitivity.
In a possible implementation, the sensitivity conversion algorithm may also be obtained by training a neural network with multiple training images. The electronic device can take the red and green layers of the training images as the network's input and the sensitivity training distribution maps corresponding to the training images as its output, train the network, use the trained network as the sensitivity conversion algorithm above, and import the first and second pixel values of each pixel into it to calculate each pixel's skin sensitivity.
Further, the sensitivity conversion algorithm is specifically:
Figure PCTCN2021113689-appb-000002
where f is the skin sensitivity; r is the first pixel value; g is the second pixel value; alpha and beta are preset adjustment coefficients; and exp(x) is the exponential function.
In this embodiment, from the difference between the red channel's first pixel value and the green channel's second pixel value, the electronic device can determine the deviation in absorption of red and green light, thereby determining the concentration of red blood cells in the skin region corresponding to the pixel, and determining the skin sensitivity of that region based on the red blood cell concentration, which improves the accuracy of the skin sensitivity.
In this embodiment of the application, based on the characteristic that red blood cells absorb light of different colors differently, the first pixel value of the red channel and the second pixel value of the green channel are determined to obtain the pixel's skin sensitivity, which reduces the difficulty of obtaining skin sensitivity while ensuring its accuracy.
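The published formula survives here only as an image placeholder (Figure PCTCN2021113689-appb-000002), so its exact expression is not recoverable. The sketch below therefore assumes a logistic curve over alpha * (r - g) + beta, which merely matches the stated variables (f, r, g, alpha, beta, exp) and the required monotonicity — increasing in the red value, decreasing in the green value; the coefficient values are illustrative, and the actual patented expression may differ.

```python
import math

def skin_sensitivity(r: float, g: float, alpha: float = 0.05, beta: float = -2.0) -> float:
    """Hypothetical per-pixel sensitivity score in (0, 1): monotonically
    increasing in the red channel value r and decreasing in the green
    channel value g, as the surrounding text requires. A logistic curve
    over alpha * (r - g) + beta stands in for the unpublished formula."""
    return 1.0 / (1.0 + math.exp(-(alpha * (r - g) + beta)))
```

With alpha > 0, redder (high r, low g) skin pixels map toward 1 and pale or greenish pixels toward 0, which is the qualitative behavior the redness argument above calls for.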
In S503, the pixel value of each pixel of the skin region image in the original image is adjusted according to the skin sensitivity, the original image with adjusted pixel values is taken as a sensitivity distribution map, and the sensitivity distribution map is displayed.
In this embodiment, to let the user intuitively determine the skin sensitivity of each local part of the skin region, the electronic device can show the distribution of skin sensitivity by changing the pixel values of the pixels in the skin region of the original image, rather than displaying numeric sensitivity values, so as to improve the display effect. Based on this, the electronic device adjusts the pixel values of the pixels in the skin region according to the skin sensitivity, and uses the original image with adjusted pixel values as the sensitivity distribution map representing the distribution of skin sensitivity over the skin region.
In a possible implementation, the electronic device can recognize the region images other than the skin region image as background region images and adjust the pixel values of the pixels in the background region image to a preset value, for example setting all pixels of the background region image to black, so that the background of the picture does not affect the display of the sensitivity distribution over the skin region. Exemplarily, FIG. 11 shows a sensitivity distribution map according to an embodiment of this application. As shown in FIG. 11, the pixels of the background region in this map are set to a uniform pixel value so as to highlight the sensitivity distribution of the skin region, and a lookup table between sensitivity and pixel value is provided so that the user can determine the skin sensitivity corresponding to different pixel values.
In a possible implementation, the electronic device can crop the skin region image from the original image, adjust the pixel values of the pixels within it, and display the adjusted skin region image as the sensitivity distribution map. Exemplarily, FIG. 12 shows a sensitivity distribution map according to another embodiment of this application. As shown in FIG. 12, this map contains only the skin region and no background region, and is provided with a lookup table between sensitivity and pixel value so that the user can determine the skin sensitivity corresponding to different pixel values.
Further, as another embodiment of this application, FIG. 13 shows a flowchart of a specific implementation, in S503, of adjusting the pixel value of each pixel of the skin region in the original image according to the skin sensitivity. As shown in FIG. 13, compared with the embodiment shown in FIG. 5, S503 in this embodiment specifically includes S5031 to S5032, detailed as follows:
Further, adjusting, according to the skin sensitivity, the pixel value of each pixel of the skin region in the original image includes:
In S5031, an adjusted pixel value associated with the skin sensitivity of each pixel is determined based on a preset pixel mapping relationship.
In this embodiment, the electronic device can establish a pixel mapping relationship that defines the pixel value corresponding to each skin sensitivity in the sensitivity distribution map. The electronic device can look up the pixel value associated with each pixel in the mapping relationship and take the queried pixel value as the adjusted pixel value.
In a possible implementation, the pixel mapping relationship may specifically be a conversion algorithm; importing a pixel's skin sensitivity into it determines the adjusted pixel value corresponding to that sensitivity.
In a possible implementation, the adjusted pixel value may specifically contain values for multiple channels. If the original image is an RGB image, the adjusted pixel value contains values for the red, green, and blue channels; if the original image is a CMYK print-color image, it contains values for the cyan, magenta, yellow, and black channels.
In S5032, the original pixel value of the pixel in the original image is replaced with the adjusted pixel value.
In this embodiment, the electronic device can use the adjusted pixel values determined for the pixels of each skin region image to replace their values in the original image, thereby adjusting the pixel values of the original image.
In this embodiment of the application, by setting a pixel mapping relationship, looking up the adjusted pixel value of each pixel, and setting the pixel values in the original image based on the adjusted values, the sensitivity distribution map is generated automatically, improving generation efficiency.
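S5031 to S5032 can be sketched as below. The patent does not specify the mapping table itself, so `sensitivity_to_rgb` (low sensitivity renders green, high renders red) is a hypothetical choice, and the function names and data layout are assumptions for illustration only.

```python
def sensitivity_to_rgb(f: float) -> tuple:
    """Illustrative pixel mapping relationship: a sensitivity score in
    [0, 1] becomes an RGB pseudo-colour, green for low and red for high."""
    f = min(max(f, 0.0), 1.0)  # clamp before mapping
    return (int(255 * f), int(255 * (1 - f)), 0)

def render_distribution(image, skin_mask, sensitivity):
    """Replace the pixel value of every skin pixel with the adjusted
    value its sensitivity maps to; pixels outside the skin region keep
    their original values, so the contour matches the original image."""
    h, w = len(image), len(image[0])
    return [
        [sensitivity_to_rgb(sensitivity[y][x]) if skin_mask[y][x] else image[y][x]
         for x in range(w)]
        for y in range(h)
    ]
```

Keeping non-skin pixels untouched (or, per the alternative above, flattening them to black) is what makes the output readable as a distribution map rather than a table of numbers.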
Further, as another embodiment of this application, if the acquired original images are extracted in sequence by frame number from video data captured in real time, then in S503 generating and displaying the sensitivity distribution map may specifically be: displaying, based on the frame number of each original image, the sensitivity distribution map associated with each original image in sequence.
In this embodiment, the electronic device determines, in order of frame number, the skin sensitivity of each pixel in the skin region of each original image and adjusts each pixel's value in the original image based on its sensitivity to generate that image's sensitivity distribution map. Based on this, the electronic device can also display the sensitivity distribution maps of the original images in sequence according to their frame numbers, thereby displaying skin sensitivity dynamically and improving the display effect.
As can be seen from the above, the skin sensitivity display method provided in the embodiments of this application acquires an original image containing a photographed subject, determines the skin region image from it, obtains the skin sensitivity associated with each pixel from the pixel values in the skin region image, and adjusts the values of the corresponding pixels in the original image according to the determined sensitivities to generate a sensitivity distribution map, so that the user can learn the overall skin sensitivity from the map while also determining the sensitivity of each local patch of skin. Compared with existing skin detection technologies, the generated sensitivity distribution map is produced from the original image by adjusting pixel values, so the contour of the skin region is consistent with the original image, and viewing the map reveals the sensitivity of each local region, improving the display effect. Moreover, because the map can be generated with any electronic device containing a camera module, without the user having to visit a particular medical institution, the convenience of obtaining skin sensitivity is greatly improved, the difficulty of obtaining it is reduced, and the efficiency of skin detection is increased.
It should be understood that the ordinal numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and shall not constitute any limitation on the implementation process of the embodiments of this application.
Embodiment 2:
Corresponding to the skin sensitivity display method described in the above embodiments, FIG. 14 shows a structural block diagram of the skin sensitivity display apparatus provided by an embodiment of this application; for ease of description, only the parts related to the embodiments of this application are shown.
Referring to FIG. 14, the skin sensitivity display apparatus includes:
an original image acquisition unit 141, configured to acquire an original image containing a photographed subject and extract a skin region image of the subject from the original image;
a skin sensitivity determination unit 142, configured to determine the skin sensitivity corresponding to each pixel in the skin region image; and
a sensitivity distribution map display unit 143, configured to adjust, according to the skin sensitivity, the pixel value of each pixel of the skin region in the original image, and to generate and display a sensitivity distribution map.
Optionally, the skin sensitivity determination unit 142 includes:
a pixel value acquisition unit, configured to acquire the first pixel value of the pixel in the red channel of the original image and the second pixel value in the green channel; and
a skin sensitivity conversion unit, configured to import the first pixel value and the second pixel value into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel.
Optionally, the sensitivity conversion algorithm is specifically:
Figure PCTCN2021113689-appb-000003
where f is the skin sensitivity; r is the first pixel value; g is the second pixel value; alpha and beta are preset adjustment coefficients; and exp(x) is the exponential function.
Optionally, the original image acquisition unit 141 includes:
a video data acquisition unit, configured to acquire video data collected in real time; and
a video image frame extraction unit, configured to extract each video image frame in the video data as the original image;
and the sensitivity distribution map display unit 143 is specifically configured to:
display, based on the frame number of each original image, the sensitivity distribution map associated with each original image in sequence.
Optionally, the video image frame extraction unit includes:
a skin region recognition unit, configured to perform image analysis on the original images in sequence based on the frame numbers and determine whether an original image contains a skin region;
a first operation unit, configured to perform, if the original image contains a skin region, the operation of extracting the skin region image of the photographed subject from the original image; and
a second operation unit, configured to analyze, if the original image does not contain a skin region, the original image corresponding to the next frame number, and perform the operation of determining whether the original image contains a skin region.
Optionally, the sensitivity distribution map display unit 143 includes:
an adjusted pixel value query unit, configured to determine, based on a preset pixel mapping relationship, an adjusted pixel value associated with the skin sensitivity of each pixel; and
an adjusted pixel value replacement unit, configured to replace the original pixel value of the pixel in the original image with the adjusted pixel value.
Optionally, the original image acquisition unit 141 includes:
a polarized-light shooting unit, configured to, in a sensitivity detection mode, turn on a fill-light module fitted with a polarizer and acquire the original image of the photographed subject illuminated by the fill-light module.
Therefore, the skin sensitivity display apparatus provided by this embodiment of the application can likewise acquire an original image containing a photographed subject, determine the skin region image from it, obtain the skin sensitivity associated with each pixel from the pixel values in the skin region image, and adjust the values of the corresponding pixels in the original image according to the determined sensitivities to generate a sensitivity distribution map, so that the user can learn the overall skin sensitivity from the map while also determining the sensitivity of each local patch of skin. Compared with existing skin detection technologies, the generated sensitivity distribution map is produced from the original image by adjusting pixel values, so the contour of the skin region is consistent with the original image, and viewing the map reveals the sensitivity of each local region, improving the display effect; moreover, because the map can be generated with any electronic device containing a camera module, without the user visiting a particular medical institution, the convenience of obtaining skin sensitivity is greatly improved, the difficulty of obtaining it is reduced, and the efficiency of skin detection is increased.
FIG. 15 is a schematic structural diagram of an electronic device provided by an embodiment of this application. As shown in FIG. 15, the electronic device 15 of this embodiment includes: at least one processor 150 (only one is shown in FIG. 15), a memory 151, and a computer program 152 stored in the memory 151 and executable on the at least one processor 150. When the processor 150 executes the computer program 152, the steps in any of the foregoing skin sensitivity display method embodiments are implemented.

The electronic device 15 may be a computing device such as a desktop computer, a laptop, a palmtop computer, or a cloud server. The electronic device may include, but is not limited to, the processor 150 and the memory 151. Those skilled in the art will understand that FIG. 15 is merely an example of the electronic device 15 and does not constitute a limitation on it; the device may include more or fewer components than shown, may combine certain components, or may use different components, and may, for example, also include input/output devices and network access devices.

The processor 150 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

In some embodiments, the memory 151 may be an internal storage unit of the electronic device 15, such as a hard disk or main memory of the electronic device 15. In other embodiments, the memory 151 may be an external storage device of the electronic device 15, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card fitted to the electronic device 15. Further, the memory 151 may include both an internal storage unit of the electronic device 15 and an external storage device. The memory 151 is used to store the operating system, application programs, the boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same conception as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.

Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is merely illustrative. In practical applications, the functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may exist physically separately, or two or more of them may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of this application further provides an electronic device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.

An embodiment of this application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in each of the foregoing method embodiments are implemented.

An embodiment of this application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in each of the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of each of the foregoing method embodiments. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, or certain intermediate forms. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.

In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network-device embodiments described above are merely illustrative; the division into modules or units is only one kind of logical functional division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and should all be included within the protection scope of this application.

Claims (10)

  1. A method for displaying skin sensitivity, comprising:
    acquiring an original image containing a photographed subject, and extracting a skin-region image of the subject from the original image;
    determining a skin sensitivity corresponding to each pixel in the skin-region image;
    adjusting a pixel value of each of the pixels within the skin-region image of the original image according to the skin sensitivity, taking the original image with the adjusted pixel values as a sensitivity distribution map, and displaying the sensitivity distribution map.
  2. The display method according to claim 1, wherein the determining a skin sensitivity corresponding to each pixel in the skin-region image comprises:
    obtaining a first pixel value of the pixel in a red channel of the original image and a second pixel value of the pixel in a green channel;
    feeding the first pixel value and the second pixel value into a preset sensitivity conversion algorithm to obtain the skin sensitivity corresponding to the pixel.
  3. The display method according to claim 2, wherein the sensitivity conversion algorithm is:
    [Formula published as an image: Figure PCTCN2021113689-appb-100001]
    where f is the skin sensitivity; r is the first pixel value; g is the second pixel value; alpha and beta are preset adjustment coefficients; and exp(x) is the exponential function.
  4. The display method according to claim 1, wherein the acquiring an original image containing a photographed subject comprises:
    acquiring video data captured in real time;
    extracting each video frame in the video data as the original image;
    the displaying the sensitivity distribution map comprises:
    displaying, in sequence, the sensitivity distribution map associated with each original image, based on the frame number of each original image.
  5. The display method according to claim 4, wherein the extracting each video frame in the video data as the original image comprises:
    performing image analysis on the original images in order of frame number, and determining whether each original image contains a skin region;
    if the original image contains a skin region, performing the operation of extracting the skin-region image of the subject from the original image;
    if the original image does not contain a skin region, analyzing the original image corresponding to the next frame number, and repeating the operation of determining whether that original image contains a skin region.
  6. The display method according to any one of claims 1 to 5, wherein the adjusting a pixel value of each of the pixels within the skin region of the original image according to the skin sensitivity comprises:
    determining, based on a preset pixel mapping relationship, an adjusted pixel value associated with the skin sensitivity corresponding to each pixel;
    replacing the original pixel value of the pixel in the original image with the adjusted pixel value.
  7. The display method according to any one of claims 1 to 5, wherein the acquiring an original image containing a photographed subject comprises:
    if in a sensitivity detection mode, switching on a fill-light module fitted with a polarizer, and capturing the original image of the subject under illumination from the fill-light module.
  8. An apparatus for displaying skin sensitivity, comprising:
    an original image acquisition unit, configured to acquire an original image containing a photographed subject, and to extract a skin-region image of the subject from the original image;
    a skin sensitivity determination unit, configured to determine a skin sensitivity corresponding to each pixel in the skin-region image;
    a sensitivity distribution map display unit, configured to adjust a pixel value of each of the pixels within the skin-region image of the original image according to the skin sensitivity, to take the original image with the adjusted pixel values as a sensitivity distribution map, and to display the sensitivity distribution map.
  9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method according to any one of claims 1 to 7 is implemented.
  10. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.
PCT/CN2021/113689 2020-09-09 2021-08-20 Skin sensitivity display method and apparatus, electronic device, and readable storage medium WO2022052786A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010944311.9A CN114241347A (zh) 2020-09-09 2020-09-09 Skin sensitivity display method and apparatus, electronic device, and readable storage medium
CN202010944311.9 2020-09-09

Publications (1)

Publication Number Publication Date
WO2022052786A1 true WO2022052786A1 (zh) 2022-03-17

Family

ID=80630254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113689 WO2022052786A1 (zh) 2020-09-09 2021-08-20 皮肤敏感度的显示方法、装置、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN114241347A (zh)
WO (1) WO2022052786A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272138B (zh) * 2022-09-28 2023-02-21 荣耀终端有限公司 Image processing method and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1795827A * 2004-12-24 2006-07-05 重庆融海超声医学工程研究中心有限公司 Image monitoring apparatus and method for skin and subcutaneous tissue injury
US20090226093A1 * 2008-03-03 2009-09-10 Canon Kabushiki Kaisha Apparatus and method for detecting specific object pattern from image
KR20120066286A * 2010-12-14 2012-06-22 한국전자통신연구원 Phototherapy apparatus and method
CN103152476A * 2013-01-31 2013-06-12 广东欧珀移动通信有限公司 Mobile phone for detecting skin state and method of using same
CN106611415A * 2016-12-29 2017-05-03 北京奇艺世纪科技有限公司 Skin region detection method and apparatus
CN108921128A * 2018-07-19 2018-11-30 厦门美图之家科技有限公司 Cheek sensitive-skin identification method and apparatus
CN110363088A * 2019-06-12 2019-10-22 南京理工大学 Adaptive skin inflammation region detection method based on multi-feature fusion

Also Published As

Publication number Publication date
CN114241347A (zh) 2022-03-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865842

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21865842

Country of ref document: EP

Kind code of ref document: A1