WO2020173152A1 - Facial appearance prediction method and electronic device - Google Patents

Facial appearance prediction method and electronic device

Info

Publication number
WO2020173152A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
facial image
electronic device
image
health data
Prior art date
Application number
PCT/CN2019/120085
Other languages
French (fr)
Chinese (zh)
Inventor
董继阳
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020173152A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices; for individual health risk assessment

Definitions

  • This application relates to the field of image processing technology, and in particular to a facial appearance prediction method and an electronic device.
  • Some Meitu (photo-beautification) applications can remove or soften the wrinkles, spots, and acne that appear in a user's facial image, making the beautified facial image look younger.
  • Some applications provide a function for predicting the user's facial image after aging.
  • This type of application can obtain a photo containing an image of the user's face and then use an aging algorithm to age that facial image according to the passage of time, so as to obtain an image of the user's aged face after a certain period of time (for example, 10 years or 20 years).
  • To this end, the present application provides a facial appearance prediction method and an electronic device that can realistically simulate changes in the user's facial appearance based on the user's living habits, so that the user can intuitively perceive those changes and be reminded to adjust bad living habits in time.
  • A first aspect provides a facial appearance prediction method, including: an electronic device acquires a first image, where the first image includes a first facial image of a user (the first image may also be a bust or full-length image containing the first facial image); and the electronic device obtains the user's health data within a preset time.
  • The health data may include at least one of the user's exercise data, sleep data, nutritional intake data, or data on the duration of use of the electronic device.
  • Based on the health data and the first facial image, the electronic device can predict the user's appearance to obtain a predicted second image, which can include the user's second facial image; further, the electronic device can display a first interface, and the first interface includes the predicted second facial image.
  • The first interface may also include the user's first facial image from before the prediction.
  • In this way, the appearance change predicted by the electronic device varies with the user's health data; that is, the predicted change is closely related to the user's actual living habits.
  • Therefore, when predicting the user's appearance, the electronic device considers not only the passage of time but also the user's health data, which makes the prediction result more accurate, so that the user can intuitively perceive the change in appearance and is reminded to adjust bad living habits or to maintain good ones.
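  • As a rough, non-authoritative sketch of this first-aspect flow (all function names, thresholds, and year offsets below are illustrative assumptions, not taken from the application), the prediction can be organized along these lines:

```python
# Minimal sketch of the first-aspect flow; names and rules are illustrative only.

def get_health_data():
    """Stand-in for reading the user's recent exercise, sleep and screen-time data."""
    return {"avg_sleep_hours": 6.0, "avg_exercise_minutes": 20, "avg_screen_hours": 7.5}

def predict_second_image(first_facial_image, health_data, years=5):
    """Return a placeholder 'second image' plus the effective aging span used."""
    healthy = (health_data["avg_sleep_hours"] >= 7
               and health_data["avg_exercise_minutes"] >= 30)
    age_influence = -2 if healthy else 3          # cf. the age influence value K below
    effective_years = years + age_influence
    # A real device would run an aging model here; this sketch only tags the image.
    return {"image": first_facial_image, "aged_by_years": effective_years}

second_image = predict_second_image("first_facial_image.png", get_health_data())
print(second_image)  # the first interface would then display this predicted image
```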
  • Before the electronic device acquires the above-mentioned first image, the method further includes: the electronic device displays a second interface of the prediction application, and the second interface includes a button for the facial appearance prediction function.
  • Acquiring the first image then specifically includes: in response to the user clicking the button, the electronic device uses a camera to capture the first image; or, in response to the user clicking the button, the electronic device obtains a photo from the album application as the first image.
  • The method further includes: the electronic device determines that the health data meets a preset condition; for example, the preset condition includes that the health data is greater than a preset value, or that the health data is less than a preset value.
  • In this case, the electronic device can automatically obtain the user's first facial image and, combined with the user's health data, predict the user's appearance, thereby reminding the user in time to adjust bad living habits.
  • Alternatively, the electronic device can automatically obtain the user's first facial image and, combined with the user's health data, predict the user's appearance, thereby reminding the user to maintain good living habits.
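  • A minimal sketch of such a preset-condition check (the specific thresholds and dictionary keys are assumptions for illustration; the application only states that the health data is compared against preset values):

```python
def should_trigger_prediction(health_data, min_sleep_hours=5.0, max_screen_hours=8.0):
    """Return True when recent health data crosses a preset condition, so the
    device can automatically start an appearance prediction."""
    return (health_data["avg_sleep_hours"] < min_sleep_hours
            or health_data["avg_screen_hours"] > max_screen_hours)

print(should_trigger_prediction({"avg_sleep_hours": 4.5, "avg_screen_hours": 9.0}))  # True
```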
  • The method further includes: the electronic device displays a notification message indicating that the appearance prediction is completed.
  • The electronic device displaying the first interface then includes: in response to the user's operation of opening the notification message, the electronic device opens the prediction application and displays the first interface of the prediction application.
  • The electronic device predicting the user's appearance based on the aforementioned health data and the first facial image to obtain the second image includes: the electronic device determines a corresponding age influence value K according to the health data.
  • The age influence value is a deviation value, positive or negative, relative to the user's current age; generally, when the user's health data reflects healthier living habits, the age influence value K is negative, and when the health data reflects more unhealthy living habits, K is positive. Further, the electronic device can predict the user's second facial image in M+K years based on the first facial image, and obtain a second image containing the second facial image.
  • M is the default value or the value set by the user.
  • In this way, the predicted second facial image not only reflects the change of the user's face with age but also reflects the influence of the user's current living habits on the user's appearance, thereby improving the accuracy of the appearance prediction and the user's experience.
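  • As a small worked example of this design (the concrete numbers are illustrative assumptions): with a default span M and an age influence value K derived from the health data, the device renders the face at M+K years into the future.

```python
def effective_prediction_years(M, K):
    """M: default or user-set span in years; K: age influence value from health data
    (negative when habits are healthier, positive when they are unhealthier)."""
    return M + K

print(effective_prediction_years(10, -2))  # healthy habits: render only 8 years of aging
print(effective_prediction_years(10, 3))   # unhealthy habits: render 13 years of aging
```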
  • Alternatively, the electronic device predicting the user's appearance based on the aforementioned health data and the first facial image to obtain the second image includes: the electronic device predicts the user's third facial image in M years based on the first facial image, where M is a default value or a value set by the user; further, the electronic device can add a corresponding facial effect to the third facial image according to the health data to obtain the user's second facial image after M years. For example, when the health data reflects bad living habits, a corresponding aging effect can be added; when the health data reflects healthy living habits, a corresponding youthful effect can be added.
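  • A toy sketch of this second approach (the effect names and the string-based stand-ins for images are purely illustrative): the face is first aged by the fixed span M, and an effect chosen from the health data is then overlaid.

```python
def predict_with_effects(first_facial_image, habits_are_healthy, M=5):
    """Age the face by M years first, then add an effect chosen from the health data."""
    third_facial_image = f"{first_facial_image}+aged_{M}y"     # time-only aging step
    effect = "youthful_effect" if habits_are_healthy else "aging_effect"
    return f"{third_facial_image}+{effect}"                    # the second facial image

print(predict_with_effects("face.png", habits_are_healthy=False))
# face.png+aged_5y+aging_effect
```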
  • The above-mentioned first interface includes the second facial image and a first switch button; after the electronic device displays the first interface, the method further includes: in response to the user clicking the first switch button, the electronic device switches the displayed second facial image to the first facial image.
  • The above-mentioned first interface includes the second facial image and a second switch button; after the electronic device displays the first interface, the method further includes: in response to the user clicking the second switch button, the electronic device switches the displayed second image to a standard facial image, where the standard facial image corresponds to preset standard health data.
  • In this way, by comparing the two images, the user can intuitively see the impact of the current lifestyle and of a healthy lifestyle on his or her future appearance, thereby reminding and urging the user to establish a healthier lifestyle.
  • displaying the first interface by the electronic device specifically includes: the electronic device marks the appearance change corresponding to the above-mentioned health data in the second facial image on the first interface.
  • the above-mentioned first interface may also include methods or suggestions for adjusting the user's living habits, so as to help and guide the user to adjust bad living habits as soon as possible.
  • The above-mentioned first interface may also include an aging progress bar and a slider, and the second facial image includes a first predicted image and a second predicted image of the user's face. The electronic device displaying the second facial image on the first interface includes: if it is detected that the slider is dragged to a first position on the aging progress bar, the electronic device displays the first predicted image corresponding to the first position, which is the predicted facial image of the user after a first time period; if it is detected that the slider is dragged to a second position on the aging progress bar, the electronic device displays the second predicted image corresponding to the second position, which is the predicted facial image of the user after a second time period.
  • In this way, the electronic device can display, in chronological order, facial images predicted for the user at different future times based on the user's current living habits, so that the user can dynamically feel how the facial appearance would change over time if the current living habits were maintained.
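  • A possible (assumed) mapping from the slider position on the aging progress bar to the predicted images; the application only requires that each position corresponds to a facial image predicted for a later point in time:

```python
# Illustrative: three precomputed predictions, selected by slider position (0-100).
PREDICTED_IMAGES = {1: "face_after_1_year.png",
                    5: "face_after_5_years.png",
                    10: "face_after_10_years.png"}

def image_for_slider(position, max_position=100):
    """Map a slider position to the nearest of the precomputed prediction spans."""
    spans = sorted(PREDICTED_IMAGES)                       # [1, 5, 10]
    index = min(int(position / max_position * len(spans)), len(spans) - 1)
    return PREDICTED_IMAGES[spans[index]]

print(image_for_slider(15))   # face_after_1_year.png
print(image_for_slider(95))   # face_after_10_years.png
```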
  • The above-mentioned second image may also include a body shape template predicted for the user after a period of time; the body shape template corresponds to the user's health data and may include a fattening template or a thinning template.
  • the user can intuitively and vividly understand the influence of his own life habits on his body shape in the future, thereby reminding the user to adjust bad life habits in time.
  • the electronic device acquiring the user's health data within a preset time includes: the electronic device acquires the user's health data within the preset time from the wearable device.
  • a second aspect provides a facial appearance prediction method, including: an electronic device acquires health data of a user within a preset time.
  • The health data may include at least one of the user's exercise data, sleep data, nutritional intake data, or data on the duration of use of the electronic device; and the electronic device may obtain a first image, where the first image includes a first facial image of the user.
  • The first image may be captured by the electronic device using a camera, or it may be a photo obtained by the electronic device from the album application. If the health data does not meet the preset condition, indicating that the user has bad living habits, the electronic device can display, in one interface, the first facial image and a second facial image predicted for the user after a period of time.
  • The second facial image is a predicted image obtained after the first facial image is aged. Correspondingly, if the health data meets the preset condition, it indicates that the user's living habits are relatively healthy; the electronic device may then display, in one interface, the first facial image and a third facial image predicted for the user after a period of time, where the third facial image is a predicted image obtained after the first facial image is de-aged.
  • If the health data does not meet the preset condition, the aforementioned interface may further include a first body shape template carrying the first facial image and a second body shape template carrying the second facial image.
  • The second body shape template is the result of the first body shape template becoming fatter. If the health data meets the preset condition, the interface may instead include the first body shape template carrying the first facial image and a third body shape template carrying the third facial image.
  • The third body shape template is the result of the first body shape template becoming thinner.
  • The foregoing period of time is M years, where M is a default value or a value set by the user. When the health data does not meet the preset condition, the method further includes: the electronic device performs aging processing on the first facial image based on the health data and M years to obtain the user's second facial image after M years. Correspondingly, when the health data meets the preset condition, the method further includes: the electronic device performs de-aging processing on the first facial image based on the health data and M years to obtain the user's third facial image after M years.
  • the electronic device may also display a notification message that the appearance prediction is completed. If it is detected that the user has opened the notification message, the electronic device may open the prediction application and display the aforementioned second facial image.
  • When the electronic device displays the above-mentioned second facial image, it may also mark, in the second facial image, the changes in appearance corresponding to the health data. In this way, the user can intuitively understand how the current bad living habits will specifically affect his or her appearance, reminding the user to adjust those habits in time.
  • When the electronic device displays the above-mentioned second facial image, it may also display methods or suggestions for adjusting the user's living habits, so as to help and guide the user to correct bad living habits as soon as possible.
  • A third aspect provides an electronic device, including: a touch screen, one or more processors, one or more memories, and one or more computer programs; the processor is coupled to the touch screen and the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs any one of the aforementioned facial appearance prediction methods.
  • A fourth aspect provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to perform any one of the aforementioned facial appearance prediction methods.
  • A fifth aspect provides a computer program product that, when run on an electronic device, causes the electronic device to perform any one of the aforementioned facial appearance prediction methods.
  • a sixth aspect provides a graphical user interface (GUI), which is stored in the above-mentioned electronic device.
  • The electronic device includes a touch screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory.
  • The graphical user interface may include: a first GUI displayed on the touch screen, where the first GUI includes a button for the facial appearance prediction function; in response to a touch event directed at the button, a second GUI is displayed on the touch screen, where the second GUI includes the user's first facial image and second facial image, or the second GUI includes the user's second facial image; the first facial image is the user's real facial image,
  • and the second facial image is a facial image that the electronic device predicts for the user after a period of time based on the user's health data and the first facial image.
  • a seventh aspect provides a GUI, the GUI is stored in the above electronic device, the electronic device includes a touch screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory.
  • The graphical user interface may include: a first GUI displayed on the touch screen, where the first GUI includes a notification message indicating that the appearance prediction is completed; in response to a touch event directed at the notification message, a second GUI is displayed on the touch screen, where the second GUI includes the user's first facial image and second facial image, or the second GUI includes the user's second facial image; the first facial image is the user's real facial image, and the second facial image is a facial image that the electronic device predicts for the user after a period of time based on the user's health data and the first facial image.
  • The electronic device described in the third aspect, the computer storage medium described in the fourth aspect, the computer program product described in the fifth aspect, and the GUIs described in the sixth and seventh aspects are all used to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a first structural diagram of an electronic device according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of the photographing principle of a camera according to an embodiment of this application;
  • FIG. 3 is a first schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 4 is a first schematic flowchart of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 5 is a second schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 6 is a third schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 7 is a fourth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 8 is a fifth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 9 is a sixth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 10 is a seventh schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 11 is an eighth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 12 is a ninth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 13A is a tenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 13B is an eleventh schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 14 is a twelfth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 15 is a thirteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 16 is a fourteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 17 is a fifteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 18 is a sixteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 19 is a seventeenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 20 is an eighteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 21 is a second schematic flowchart of a facial appearance prediction method according to an embodiment of this application;
  • FIG. 22 is a second structural diagram of an electronic device according to an embodiment of this application.
  • The appearance prediction method provided by the embodiments of the present application can be applied to mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, virtual reality devices, and other electronic devices; the embodiments of the present application do not impose any restriction on this.
  • FIG. 1 shows a schematic structural diagram of the mobile phone 100.
  • The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • The I2C interface is a two-way synchronous serial bus, which includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the mobile phone 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the communication module 160.
  • the processor 110 communicates with the Bluetooth module in the communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 may transmit audio signals to the communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the mobile phone 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the mobile phone 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the mobile phone 100.
  • the mobile phone 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the mobile phone 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the radio frequency module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the mobile phone 100.
  • the radio frequency module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the radio frequency module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the radio frequency module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the radio frequency module 150 may be provided in the processor 110.
  • at least part of the functional modules of the radio frequency module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the radio frequency module 150 or other functional modules.
  • The communication module 160 can provide wireless communication solutions applied to the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the communication module 160 may be one or more devices integrating at least one communication processing module.
  • the communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the mobile phone 100 is coupled with the radio frequency module 150, and the antenna 2 is coupled with the communication module 160, so that the mobile phone 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the mobile phone 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the mobile phone 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the mobile phone 100 may include 1 or N cameras, and N is a positive integer greater than 1.
  • the camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a sensor.
  • The photosensitive element may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal-oxide-semiconductor) device.
  • the reflected light of the object being photographed can generate an optical image after passing through the lens.
  • the optical image is projected onto the photosensitive element, and the photosensitive element converts the received light signal into an electrical signal, and further,
  • the camera 193 sends the obtained electrical signal to a DSP (Digital Signal Processing) module for digital signal processing, and finally a digital image is obtained.
  • the digital image can be output on the mobile phone 100 through the display screen 194, or the digital image can be stored in the internal memory 121 (or the external memory 120).
  • Video codecs are used to compress or decompress digital video.
  • the mobile phone 100 may support one or more video codecs. In this way, the mobile phone 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • The NPU is a neural-network (NN) computing processor. Through the NPU, applications such as intelligent cognition of the mobile phone 100 can be implemented, for example image recognition, facial recognition, speech recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A also called a “speaker” is used to convert audio electrical signals into sound signals.
  • the mobile phone 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the mobile phone 100 answers a call or a voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mike", is used to convert sound signals into electrical signals.
  • The user can speak with his or her mouth close to the microphone 170C to input a sound signal into the microphone 170C.
  • the mobile phone 100 may be provided with at least one microphone 170C. In other embodiments, the mobile phone 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the mobile phone 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the mobile phone 100 can receive key input, and generate key signal input related to user settings and function control of the mobile phone 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • For touch operations acting on different areas of the display screen 194, the motor 191 can also provide different vibration feedback effects.
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
  • the mobile phone 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the mobile phone 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the mobile phone 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the mobile phone 100 and cannot be separated from the mobile phone 100.
  • the mobile phone 100 can detect the user's health data through one or more sensors in the sensor module 180, and the health data can reflect the user's life habit characteristics.
  • The health data may include one or more items such as the user's daily exercise data (such as exercise time and amount of exercise), sleep data (such as time of falling asleep and sleep duration), nutritional intake data (such as calorie intake and meal times), and data on the time and duration of mobile phone usage.
  • the mobile phone 100 may store the detected health data of the most recent period of time (for example, the most recent six months, the most recent month, the most recent week, the most recent day, or the most recent several hours) in the external memory 120 or the internal memory 121 of the mobile phone 100.
  • the mobile phone 100 may also interact with the wearable device 200 through the communication module 160.
  • the wearable device 200 may be a smart watch, a smart bracelet, smart glasses, a smart helmet, or a smart headset, and the embodiment of the present application does not impose any limitation on this.
  • the wearable device 200 can send the collected health data of the user to the mobile phone 100.
  • the user can also manually input his own health data into the mobile phone 100, and the embodiment of the present application does not impose any restriction on this.
  • An application with a facial appearance prediction function may be installed in the mobile phone 100. If it is detected that the user has turned on the facial appearance prediction function in the prediction APP, the mobile phone 100 can obtain an image containing the user's facial image, and the mobile phone 100 can obtain the user's health data within a preset recent time period. Then, in combination with the user's recent health data, the mobile phone 100 can predict the change in the user's appearance after a period of time (for example, 1 year, 3 years, 5 years, or 10 years) based on the user's facial image, generate an image containing the result of the appearance change, and display that image to the user.
  • The appearance change predicted by the mobile phone 100 for the user varies with the user's health data; that is, the predicted appearance change is closely related to the user's actual living habits. For example, if the user's actual living habits are relatively healthy, the mobile phone 100 predicts that the user's appearance will age relatively slowly; if the user's actual living habits are unhealthy, the appearance predicted by the mobile phone 100 will age relatively fast. It can be seen that when predicting the user's appearance, the mobile phone 100 considers not only the passage of time but also the user's health data, so that the prediction result is more accurate.
  • In addition, when the mobile phone 100 displays the predicted image of the user after aging, it can also remind the user of current unhealthy living habits, or of how those unhealthy habits affect the appearance. In this way, the user can learn more intuitively and vividly the impact of the current living habits on the appearance, and appearance prediction thereby reminds and urges the user to establish healthier living habits.
  • the method may include steps S401-S404.
  • the mobile phone acquires a first image, where the first image contains a facial image of the user (also referred to as a first facial image).
  • an APP with a facial appearance prediction function may be installed in the mobile phone.
  • the prediction APP may be a photo APP, a Meitu APP, a sports APP, or a health APP.
  • This application does not impose any limitation on this.
  • The facial appearance prediction function can also be set as a function option on the minus-one screen (the leftmost home screen) of the mobile phone or in the drop-down menu of the mobile phone.
  • the mobile phone can display the interactive interface 501 shown in FIG. 5.
  • The following description uses an example in which the user clicks the prediction APP, which does not constitute a limitation.
  • the mobile phone can start the prediction APP and display the interactive interface 501 of the prediction APP.
  • the interactive interface 501 can be provided with a button for predicting facial features, such as the measuring button 502 shown in FIG. 5, and the button 502 can be used to enable the facial predicting function. If it is detected that the user clicks the button 502, the mobile phone can call the camera APP to open the camera to capture the current shooting picture.
  • the mobile phone can display the captured image 601 in the preview interface 602.
  • the mobile phone can prompt the user to move the mobile phone to input the face into the shooting screen 601.
  • the mobile phone may prompt the user to use the front camera to take a facial image through text in the preview interface 601.
  • the mobile phone may prompt the user to look up at the rear camera in the form of voice, and adjust the distance between the mobile phone and the user, so that the mobile phone can capture the user's facial image in the shooting screen 601, that is, the first facial image.
  • The mobile phone can use a preset face detection algorithm while capturing the shooting screen 601 to identify whether the shooting screen 601 contains a face of a preset size. If it detects that the shooting screen 601 contains a face of a preset size, the mobile phone can automatically perform a photographing operation to obtain the image in the currently photographed screen 601 (that is, the first image), and the first image contains the user's facial image (that is, the first facial image).
  • the user can also manually click the camera button 603 in the preview interface 602. In response to the user's operation of clicking the camera button 603, the mobile phone can save the captured image 601 acquired at this time as the first image in the memory.
  • the mobile phone may also prompt the user to select a photo containing the user's face from the album after detecting that the user has turned on the aforementioned facial prediction function. Furthermore, the mobile phone can extract the user's facial image (that is, the first facial image) from the photo selected by the user through a face detection algorithm.
  • Alternatively, the above-mentioned first facial image may be obtained by the mobile phone from a server or another electronic device; the embodiment of the present application does not impose any limitation on this.
  • In addition, the mobile phone can also prompt the user to enter his or her current age, so that the mobile phone can subsequently predict the user's appearance based on that age. For another example, as shown in FIG. 7, the mobile phone may also prompt the user to select how far into the future the appearance change should be predicted; for example, the user may choose to predict his or her appearance in 1 year, 5 years, or 10 years. Of course, the mobile phone can also automatically recognize the user's age from the acquired first image, or predict the user's appearance after a certain default period of time (for example, 5 years) based on the acquired first image; the embodiment of the present application does not restrict this.
  • the mobile phone obtains the user's health data within a preset time.
  • the mobile phone After the mobile phone detects that the user has turned on the above facial prediction function, it can also obtain the user's health data in the most recent period of time (for example, one week, one month, three months, six months, or one year).
  • the health data can reflect the characteristics of the user's living habits.
  • a mobile phone can record health data such as the user's daily work and rest time, sleep quality, meal time, calorie intake, exercise time, and exercise volume.
  • the mobile phone may also obtain one or more items of health data of the user detected by the wearable device from the wearable device of the user.
  • the mobile phone can also record various health data manually entered by the user. Then, after detecting that the user has turned on the aforementioned facial prediction function, the mobile phone can obtain various health data recorded by the mobile phone within a preset time period. For example, the mobile phone can generate an n (n>0)-dimensional matrix according to the acquired n items of health data, and each dimensional vector in the matrix corresponds to one item of health data.
  • matrix A contains the user's daily health data acquired by the mobile phone in the last month (30 days).
  • the health data includes the user's daily sleep time, walking steps, calorie intake, and mobile phone usage time.
  • Each row vector in matrix A corresponds to a piece of health data.
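  • A sketch of such a matrix (the values and the choice of four rows are illustrative assumptions, and only a few days are shown instead of the full 30): each row vector holds one item of health data recorded day by day.

```python
import numpy as np

# Each row is one item of health data over consecutive days (truncated example).
A = np.array([
    [6.2, 5.8, 7.1, 6.5],         # daily sleep duration, hours
    [6500, 8200, 5400, 7100],     # daily walking steps
    [2300, 1900, 2450, 2100],     # daily calorie intake, kcal
    [6.5, 5.0, 7.2, 6.8],         # daily mobile phone usage, hours
])
print(A.shape)  # (4, 4): 4 health-data items x 4 days; a full month would be (4, 30)
```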
  • the mobile phone can obtain the user's health data in the last month after obtaining the user's first image.
  • The mobile phone can also monitor the user's health data over the last month in real time. If unhealthy behavior is detected in the user's health data, for example, if it is detected that the user has slept less than 5 hours per night for a continuous week, the mobile phone can automatically turn on the above-mentioned facial appearance prediction function, obtain the most recently taken photo containing the user's facial image from the album, and then perform the following steps S403-S404 to remind the user of the impact of the unhealthy behavior on his or her appearance.
  • the mobile phone predicts the appearance of the facial image in the first image based on the aforementioned health data and a preset number of years (for example, 5 years).
  • when the mobile phone predicts the user's appearance change in 5 years, in addition to considering the influence of time on the user's appearance, it also takes into account the user's health data obtained in step S402 to predict an image of the user's face in 5 years (also called the second facial image).
  • in other words, based on the user's current living habits, the mobile phone can predict for the user the impact on the user's appearance if this lifestyle continues, thereby reminding and urging the user to establish a healthier lifestyle.
  • the mobile phone can extract the user's life habit characteristics from n items of health data of the user in the most recent period of time (for example, one month).
  • the mobile phone can calculate the characteristic value of each item of health data in n items of health data.
  • for example, for the health data of sleep time, the mobile phone can calculate the user's daily average sleep time b1 in the last month; for the health data of walking steps, the mobile phone can calculate the user's daily average number of walking steps b2 in the last month; for the health data of calorie intake, the mobile phone can calculate the user's daily average calorie intake b3 in the last month; and for the health data of mobile phone usage time, the mobile phone can calculate the user's daily average mobile phone usage time b4 in the last month.
  • mobile phones can also use other algorithms to extract corresponding feature values from each item of health data.
  • the mobile phone can obtain an n-dimensional feature vector B, and each value in the n-dimensional feature vector B represents a feature value of a piece of health data.
  • the entire n-dimensional feature vector can reflect the characteristics of the user's living habits. For example, if the n-dimensional feature vector indicates that the user's average bedtime is later than 12 o'clock, the user has the living habit characteristic of going to bed late.
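A hedged sketch of this feature-extraction step, assuming the daily averages mentioned above serve as the feature values b1...bn (other statistics could be used instead):

```python
# Sketch: reduce each row of the health-data matrix A to one feature value,
# giving the n-dimensional feature vector B = (b1, b2, ..., bn).
import numpy as np

# A: n x 30 health-data matrix as in the earlier sketch (rows = health items).
A = np.random.rand(4, 30)

def extract_feature_vector(A: np.ndarray) -> np.ndarray:
    """Return the n-dimensional feature vector B, one feature value per item."""
    return A.mean(axis=1)  # b_i = daily average of the i-th item over the month

B = extract_feature_vector(A)  # e.g. [avg_sleep, avg_steps, avg_calories, avg_usage]
```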
  • the mobile phone can determine the age influence value corresponding to the user's life habit characteristics based on the user's life habit characteristics.
  • the age influence value refers to a deviation value, positively or negatively correlated, relative to the facial features typical of a certain fixed age. For example, for a 28-year-old user, if the user's living habit characteristics are relatively healthy, the user's facial image may present the facial features of a 26-year-old, that is, the age influence value in this case is -2 years. Correspondingly, for a 28-year-old user, if the user's living habit characteristics are unhealthy, the user's facial image may present the facial features of a 31-year-old, that is, the age influence value in this case is +3 years. Then, in mode one, the mobile phone can determine the age influence value corresponding to the user's living habit characteristics according to the determined living habit characteristics of the user.
  • a mobile phone or a server can collect a large number of facial images of users of different ages and different life habits for machine learning and training, thereby establishing an input and output model of different life habits characteristics-different age influence values.
  • after the user's living habit characteristics are input, the input-output model can output the corresponding age influence value.
  • the healthier the user's living habits, the smaller the corresponding age influence value; the less healthy the user's living habits, the larger the corresponding age influence value.
  • the above-mentioned input and output model can be set in the mobile phone or in the server.
  • the mobile phone can send the determined characteristics of the user's life habit to the server, and the server uses the input and output model to determine the corresponding age influence value.
  • the mobile phone may also send the acquired health data to the server, and the server extracts the living habits characteristics in the health data, and determines the corresponding age influence value.
  • the user's health data can also be stored in the server, and the mobile phone can send a prediction instruction to the server, so that the server obtains the user's health data, and determines the age influence value corresponding to the user's lifestyle characteristics according to the above method.
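The patent does not specify the model family for this input-output model; as one hedged illustration, the "living habit characteristics to age influence value" mapping could be realized with an ordinary regression model, trained on hypothetical labelled data:

```python
# Sketch: a placeholder regression model mapping a habit feature vector to an
# age influence value (negative = looks younger, positive = looks older).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: columns = [avg_sleep_h, avg_steps, avg_calories_kcal,
# avg_phone_use_h]; y = observed age offset in years.
X_train = np.array([
    [7.8, 9000, 2100, 2.0],
    [5.0, 2500, 3000, 8.5],
    [6.5, 6000, 2400, 4.0],
])
y_train = np.array([-2.0, 3.0, 0.5])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def age_influence_value(feature_vector_B: np.ndarray) -> float:
    """Predict the age influence value K for one user's feature vector B."""
    return float(model.predict(feature_vector_B.reshape(1, -1))[0])

K = age_influence_value(np.array([5.2, 3000, 2900, 7.0]))  # e.g. a positive offset
```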
  • for example, if the mobile phone determines that the corresponding age influence value is -2 years, it means that when the user is 32 years old, the face actually presents the facial features of a 30-year-old. Then, the mobile phone can predict the facial image 802 of the user at the age of 30 based on the facial image in the first image, and display the predicted result to the user as the facial image of the user at the age of 32.
  • correspondingly, if the mobile phone determines that the corresponding age influence value is +3 years, it means that when the user is 32 years old, the face actually presents the facial features of a 35-year-old. Then, the mobile phone can predict the user's facial image at the age of 35 based on the facial image in the first image, and display the predicted result to the user as the user's facial image at the age of 32.
  • the mobile phone may input the facial image in the first image into the preset prediction model, and input the current age of the user and the target age that needs to be predicted into the prediction model.
  • the target age is the result of the superposition of the age that the user actually needs to predict and the above-mentioned age influence value.
  • for example, suppose the actual age that the user needs to predict is 30 years old. If the mobile phone determines, according to the user's health data, that the corresponding age influence value is 2 years, the target age that the mobile phone needs to predict this time is 32 years old; if the mobile phone determines, according to the user's health data, that the corresponding age influence value is -2 years, the target age that the mobile phone needs to predict this time is 28 years old.
  • the prediction model may use an aging processing algorithm to predict the user's facial image at the target age based on the user's facial image in the first image and the current age, and obtain a second image containing the user's face after aging processing.
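A minimal sketch of mode one, in which the hypothetical callable `aging_prediction_model` stands in for the trained prediction model described above (it is not an API defined by the patent):

```python
# Sketch: superpose the age influence value on the age the user wants to
# predict, then hand the first facial image, the current age, and the resulting
# target age to the aging prediction model.
def predict_second_image(first_face, current_age: int, years_ahead: int,
                         age_influence_value: float, aging_prediction_model):
    requested_age = current_age + years_ahead                 # age the user asked about
    target_age = round(requested_age + age_influence_value)   # superposed with K
    # The model ages the face from current_age to target_age; the result is
    # shown to the user as the appearance at requested_age.
    return aging_prediction_model(first_face, current_age, target_age)
```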
  • a mobile phone or server can create a sample pool, which contains a large number of facial images of users of different ages. Furthermore, the mobile phone or the server can perform deep learning based on the facial images of users of different ages, so as to establish the aforementioned prediction model. For example, the server may train and learn on facial images of users of different ages based on generative adversarial networks (GAN) to establish the above prediction model.
  • a GAN includes a generative model and a discriminative model.
  • during training, the user's current real facial image (such as the first image) and a target age label can be input into the generative model, and the generative model can generate a predicted facial image corresponding to the target age according to the user's current real facial image and the target age label.
  • the discrimination model can determine whether the facial prediction image output by the generation model is a real image or a generated image based on the current user's real facial image, target age label, and facial prediction image output by the generation model.
  • through continuous adversarial training, the generative model can eventually generate images realistic enough that the discriminative model cannot judge the authenticity of the facial image output by the generative model.
  • at this point, the facial image output by the generative model is the user's facial image aged to the target age.
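As a hedged, heavily simplified illustration of this GAN idea (not the patent's actual network), a conditional generator and discriminator could be trained roughly as follows; the architectures, image size, and training data are placeholder assumptions, and the discriminator here conditions only on the target age label:

```python
# Sketch: conditional GAN for face aging. Faces are (B, 3, 64, 64) tensors,
# target_age is a (B, 1) float tensor.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64*64*3 + 1, 512), nn.ReLU(),
                                 nn.Linear(512, 64*64*3), nn.Tanh())
    def forward(self, face, target_age):
        x = torch.cat([face.flatten(1), target_age], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64*64*3 + 1, 512), nn.ReLU(),
                                 nn.Linear(512, 1), nn.Sigmoid())
    def forward(self, face, target_age):
        return self.net(torch.cat([face.flatten(1), target_age], dim=1))

G, D = Generator(), Discriminator()
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_face, target_age, real_target_face):
    batch = real_face.size(0)
    # 1. Discriminator: real target-age faces -> 1, generated faces -> 0.
    fake_face = G(real_face, target_age).detach()
    d_loss = bce(D(real_target_face, target_age), torch.ones(batch, 1)) + \
             bce(D(fake_face, target_age), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2. Generator: try to make the discriminator output 1 for generated faces.
    fake_face = G(real_face, target_age)
    g_loss = bce(D(fake_face, target_age), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```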
  • in summary, when the mobile phone predicts the user's facial image in M years, it can combine the user's health data to determine the age influence value X of the user's living habits. Furthermore, the mobile phone can actually predict the facial image for the user in M+X years, so that the predicted facial image reflects not only the change of the user's face with age but also the impact of the user's current living habits on the user's appearance, thereby improving the accuracy of facial prediction and the user experience.
  • the mobile phone can also input the facial image in the first image into the prediction model, and input the current age of the user and the target age to be predicted into the prediction model.
  • here, the target age refers to the user's age after the M years selected by the user, or after a certain period of time set by default on the mobile phone.
  • the mobile phone can use the above prediction model to predict, based on the user's facial image obtained in step S401 and the user's current age (for example, 27 years old), the facial image 901 after 5 years (that is, when the user is 32 years old).
  • the mobile phone can add a corresponding facial effect to the predicted facial image 901 based on the acquired health data of the user.
  • the appearance effect may include one or more of changes in skin luster, changes in skin color, changes in wrinkles, changes in pigmentation, or changes in face shape.
  • a mobile phone or a server can create a sample pool, which contains a large number of facial images of users with different lifestyle characteristics. Furthermore, the mobile phone or the server can perform deep learning based on the facial images of users with different life habit characteristics, so as to establish the correspondence between different life habit characteristics and different facial effects. For example, when the user has the habit of going to bed late, the corresponding appearance effect is dark circles; when the user has the habit of overeating, the corresponding appearance effect is facial obesity.
  • after the mobile phone extracts the corresponding life habit characteristics from the user's health data, it can query, locally or on the server, the appearance effects corresponding to the life habit characteristics extracted this time. Furthermore, as shown in FIG. 11, the mobile phone can add these facial effects to the predicted facial image 901 of the user after M years, so that the mobile phone finally obtains the facial image 902 predicted for the user after M years based on the user's current living habits.
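A minimal sketch of this "life habit characteristic to appearance effect" correspondence; the feature names, thresholds, and effect routines are illustrative assumptions rather than values from the text:

```python
# Sketch: rule-style lookup from habit features to appearance effects, and
# application of the matched effects to a predicted face image.
EFFECT_RULES = [
    # (habit predicate on the feature dict, appearance effect name)
    (lambda f: f["avg_bedtime_hour"] >= 24.0, "dark_circles"),
    (lambda f: f["avg_calories_kcal"] > 2800, "facial_obesity"),
    (lambda f: f["avg_sleep_hours"] < 6.0, "more_wrinkles"),
]

def effects_for_habits(features: dict) -> list:
    """Return the appearance effects matching the user's habit features."""
    return [name for rule, name in EFFECT_RULES if rule(features)]

def apply_effects(face_image, effect_names, effect_library):
    # effect_library maps an effect name to an image-to-image routine.
    for name in effect_names:
        face_image = effect_library[name](face_image)
    return face_image
```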
  • two models can be set in the mobile phone: one is the facial image prediction model 1 based on age (that is, the above prediction model), and the other is the facial image prediction model 2 based on living habits (that is, the above-mentioned correspondence).
  • after the mobile phone extracts the user's facial image from the first image, the mobile phone can input the facial image in the first image, the user's current age, and the target age to be predicted into the prediction model (ie, prediction model 1). In this way, the mobile phone uses prediction model 1 to predict the facial image 901 of the user at the target age over time.
  • the mobile phone can input the n-dimensional feature vector extracted by the mobile phone from n items of health data into the prediction model 2, and the n-dimensional feature vector can reflect the user's living habits.
  • the facial image 901 output by the prediction model 1 can also be input to the prediction model 2. In this way, on the basis of the facial image 901, the mobile phone can use the prediction model 2 to predict the user's facial image 902 in M years based on the user's living habits.
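A hedged sketch of how the two models could be chained, with `model1` and `model2` as placeholders for the trained prediction models 1 and 2:

```python
# Sketch: prediction model 1 ages the face over time; prediction model 2 then
# adjusts the result according to the n-dimensional habit feature vector.
def predict_with_two_models(first_face, current_age, target_age,
                            habit_feature_vector, model1, model2):
    face_901 = model1(first_face, current_age, target_age)      # age-based prediction
    face_902 = model2(face_901, habit_feature_vector)            # habit-based adjustment
    return face_902
```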
  • in summary, when the mobile phone predicts the user's facial image in M years, it can combine the user's health data to add the corresponding appearance effects to the facial image predicted for the user in M years, so that the predicted facial image reflects not only the changes of the user's face with age but also the impact of the user's living habits on the user's appearance, thereby improving the accuracy of facial prediction and the user experience.
  • the mobile phone or the server can create a prediction model that uses the two parameters of age and life habit characteristics as variables to predict facial aging images corresponding to different ages and different life habit characteristics.
  • the server may establish a face photo library of different age groups and a face photo library of different living habits.
  • the server can mine and learn the image features in the two face photo libraries through a deep learning algorithm, so as to establish a prediction model of the interaction between age, living habits, and facial images.
  • the mobile phone can extract the user's facial image from the first image obtained in step S401, and the mobile phone can extract the user's life habit characteristics from the health data obtained in step S402. Furthermore, the mobile phone can input the user's facial image, life habit characteristics, and the target age that needs to be predicted into the prediction model to obtain the user's facial image predicted for the user in M years based on the user's current life habits.
  • the above-mentioned mode one to mode three merely illustrate, by way of example, how to predict the facial image of the user in M years based on the user's living habits. It is understandable that a person skilled in the art can set a specific algorithm, model, or implementation method for predicting a user's facial image in M years based on the user's living habits according to actual application scenarios or actual experience, and the embodiment of the present application does not impose any limitation on this.
  • the mobile phone displays a second image, and the second image contains the facial image of the user obtained after facial appearance prediction.
  • after the mobile phone performs the facial appearance prediction in step S403, a second image can be obtained.
  • the second image includes the facial image predicted for the user after a period of time (ie, the second facial image).
  • the facial image in the second image is associated with the user's health data.
  • in step S404, as shown in (a) of FIG. 13A, the mobile phone can display the second image 1002 in the interface 1001 of the prediction APP, so that the user can intuitively and vividly see what impact maintaining the current living habits will have on the user's appearance after M years.
  • a button 1003 corresponding to the current time and a button 1004 corresponding to the M years (for example, 5 years) in which the prediction is needed may also be set in the interface 1001 of the prediction APP. If it is detected that the user clicks on the button 1003, as shown in (b) in FIG. 13A, the mobile phone can display the first image 1006 containing the user's facial image obtained when predicting the user's appearance this time. If it is detected that the user clicks the button 1004, as shown in (a) of FIG. 13A, the mobile phone can display the second image 1002 of the user's face predicted this time in combination with the user's health data in 5 years.
  • the mobile phone can also use text 1005 in the interface 1001 of the prediction APP to prompt the user of the specific impact of the current living habits on the user's appearance, thereby reminding the user to adjust bad living habits in time.
  • the mobile phone can also display, in the interface 1001 of the prediction APP, both the current user's facial image (ie, the first image 1006) and the user's facial image predicted by the mobile phone for 5 years later (ie, the second image 1002).
  • the user can intuitively compare the changes between the current appearance and the appearance after 5 years, thereby understanding the specific impact of the current bad living habits on the appearance.
  • the specific problems caused by the user's bad living habits may be marked at the corresponding locations on the user's face. For example, if the user's health data indicates that the user has the living habit of going to bed late, and this habit will increase facial wrinkles, then the mobile phone can add a mark 1101 to the wrinkle area of the user's face when displaying the second image 1002, to remind the user that going to bed late can exacerbate facial wrinkles. In this way, the user can intuitively understand how the current bad living habits will specifically affect the appearance, so as to be reminded to adjust the bad living habits in time.
  • the mobile phone can prompt the user, through text, voice, or the like, with specific methods or suggestions for adjusting the related bad living habits, so as to help and guide the user to adjust the bad habits as soon as possible.
  • the mobile phone can also show the user how the user's facial appearance changes after different times.
  • the mobile phone can set an aging progress bar 1201 and a slider 1202 in the interface 1001 of the prediction APP.
  • the user can drag the slider 1202 in the interface 1001 to slide on the aging progress bar 1201.
  • as shown in (a) of FIG. 15, if it is detected that the user drags the slider 1202 to the middle point A of the aging progress bar 1201, the mobile phone can display, in the interface 1001, the user's facial image predicted for 5 years later.
  • correspondingly, if it is detected that the user drags the slider 1202 farther along the aging progress bar 1201, the mobile phone can display, in the interface 1001, the facial image predicted for the user for 10 years later.
  • the specific method for the mobile phone to predict the user's facial image after M years based on the user's health data can be referred to the related description of step S403, so it will not be repeated here.
  • after the mobile phone predicts the user's facial image A after M years, if the mobile phone needs to predict the user's facial image B after M+T years, the mobile phone can multiply the pixel value of each pixel unit in facial image A by a corresponding scale factor w to obtain the user's facial image B after M+T years.
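A minimal sketch of this scale-factor extrapolation, assuming a single global factor w (the text does not specify how w is chosen or whether it varies per region):

```python
# Sketch: derive the face image for M+T years from the already-predicted
# image for M years by scaling each pixel unit with a factor w.
import numpy as np

def extrapolate_face(face_m: np.ndarray, w: float = 0.95) -> np.ndarray:
    """face_m: H x W x 3 uint8 image predicted for M years ahead."""
    face_mt = face_m.astype(np.float32) * w            # apply scale factor w
    return np.clip(face_mt, 0, 255).astype(np.uint8)   # keep a valid image range
```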
  • the mobile phone can display, in chronological order, facial images predicted for the user at different times based on the current user's living habits, so that the user can dynamically feel the changes in facial appearance over time while maintaining current lifestyle habits.
  • the mobile phone can also display the facial image predicted by the mobile phone for the user M years later after the user adjusts the bad lifestyle habits to healthy lifestyle habits.
  • the mobile phone may also set a healthy life button 1301 in the interface 1001 of the prediction APP.
  • when the mobile phone displays the second image 1002 of the user 5 years later based on the user's current living habits, if it detects that the user clicks the button 1301, the mobile phone can predict the user's facial image 1302 after 5 years based on the first image and preset standard health data.
  • the preset standard health data may be health data of users with relatively healthy living habits collected by mobile phones or servers.
  • in response to the user's operation of clicking the button 1301, as shown in (b) of FIG. 16, the mobile phone can display, in the interface 1001 of the prediction APP, the facial image 1302 predicted for the user for 5 years later based on the standard health data. It can be seen that, by comparing the two images, the user can intuitively know the impact of the current lifestyle and a healthy lifestyle on the future appearance, thereby reminding and urging the user to establish a healthier lifestyle.
  • the mobile phone can also show the user the effect of maintaining a healthy lifestyle for different lengths of time on the facial appearance of the user.
  • the mobile phone is provided with a first progress bar 1401 and a second progress bar 1402 in the interface 1001 of the prediction APP.
  • the first progress bar 1401 is used to indicate the aging progress when the user maintains the current life habit
  • the second progress bar 1402 is used to indicate the aging progress when the user maintains the standard healthy life habit.
  • the user can drag the slider 1403 to slide on the first progress bar 1401 and the second progress bar 1402.
  • when the user drags the slider 1403 to slide on the first progress bar 1401, the mobile phone can display, in chronological order, the facial images predicted for the user at different times based on the current user's living habits, so that the user can dynamically feel the changes of facial appearance over time while maintaining the current living habits.
  • when the user drags the slider 1403 to slide on the second progress bar 1402, the mobile phone can display, in chronological order, the facial images predicted for the user at different times based on standard healthy living habits, allowing the user to dynamically feel the changes in facial appearance over time when improving the current bad habits.
  • the mobile phone may also save the facial images predicted for the user one or more times recently, together with the corresponding living habit characteristics. For example, on October 1, 2018, the mobile phone predicted the facial image A for the user 3 months later based on the user's living habits at that time. The mobile phone can save the facial image A and the corresponding life habit feature 1. If it is detected around January 1, 2019 that the user has turned on the facial prediction function again, the mobile phone can obtain the current user's facial image B and the user's current life habit feature 2. Furthermore, by comparing the facial image A with the facial image B, and comparing the life habit feature 1 with the life habit feature 2, the mobile phone can analyze the specific impact of the change in the user's living habits on the user's facial appearance.
  • when the mobile phone displays the facial image B acquired this time in the interface 1501 of the prediction APP, it can also prompt the user of the changes in living habits in the recent period and the resulting changes to the user's appearance.
  • the mobile phone can display the facial images of the same period that were previously predicted for the user based on the user's historical living habits (for example, the aforementioned facial image A), allowing the user to intuitively see the impact of changes in their own habits on their appearance.
  • in addition to predicting changes in the user's facial appearance based on the user's health data, the mobile phone can also predict changes in the user's body shape, such as changes in weight, obesity, hunchback, and O-shaped legs.
  • a mobile phone or a server can use GAN to train the correspondence between different lifestyle characteristics and different body shape data. Then, when the mobile phone displays the user's facial image before and after the prediction, it can also load the user's facial image on the corresponding figure template and display it to the user.
  • for example, a real image containing the user's real body shape and a target weight label can be input into the GAN's generative model. The generative model can generate a body shape prediction image corresponding to the target weight based on the user's current real body shape and the target weight label, and the discriminative model can determine whether the body shape prediction image output by the generative model is a real image or a generated image based on the user's current real body shape, the target weight label, and the body shape prediction image output by the generative model.
  • through continuous adversarial training, the generative model can eventually generate images realistic enough that the discriminative model cannot judge the authenticity of the image output by the generative model.
  • at this point, the image output by the generative model is the body shape template image corresponding to the user's weight being the target weight.
  • when the mobile phone displays the facial image 1601 acquired this time in the interface 1501 of the prediction APP, the facial image 1601 can be loaded on the preset body shape template 1602 for display. If it is detected that the user clicks the 5-years-later button 1600, the mobile phone can predict the user's facial image 1603 after 5 years based on the user's health data, and the mobile phone can also predict the user's body shape data in 5 years based on the user's health data. Furthermore, as shown in (b) of FIG. 20, the mobile phone may load the predicted facial image 1603 on the body shape template 1604 corresponding to the predicted body shape data for display. In this way, the user can intuitively and vividly understand the influence of his or her own living habits on the future body shape, thereby being reminded to adjust bad living habits in time.
  • the mobile phone can also actively push the predicted facial image to the user based on the acquired health data. For example, as shown in Figure 21, after the mobile phone obtains the user's health data within the most recent preset time (for example, the most recent month), the mobile phone can determine whether the health data meets the preset conditions.
  • the preset condition may be that the health data is greater than a certain preset value or less than a certain preset value. For example, if it is detected in the user's health data that the user's sleep time for a continuous week is less than 5 hours, the mobile phone can determine that the health data does not meet the preset conditions.
  • the mobile phone can automatically turn on the aforementioned facial prediction function to obtain the user's real facial image (ie, the first facial image). Furthermore, by performing the above steps S403-S404, the mobile phone can predict the user's facial image in M years (ie, the second facial image). Since the user's health data does not meet the preset conditions, indicating that the user's current living habits are unhealthy, the second facial image predicted by the mobile phone for the user based on the health data is the result of aging the user's face.
  • correspondingly, if the mobile phone determines that the user's health data satisfies the above preset conditions, the mobile phone can also automatically turn on the above facial prediction function to obtain the user's real facial image (ie, the first facial image), and predict the user's facial image M years later (that is, the third facial image) by performing the above steps S403-S404.
  • in this case, the third facial image predicted by the mobile phone for the user based on the health data is the result of de-aging the user's face.
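A hedged sketch of this push logic, assuming the example condition above (sleep below 5 hours on 7 consecutive days) and placeholder prediction callables; the threshold values are illustrative, not values fixed by the text:

```python
# Sketch: if the recent health data fails the preset condition, produce an aged
# prediction (second facial image); otherwise produce a de-aged one (third).
def sleep_condition_met(daily_sleep_hours, min_hours=5.0, max_bad_days=7):
    consecutive = 0
    for h in daily_sleep_hours:                 # one value per day, oldest first
        consecutive = consecutive + 1 if h < min_hours else 0
        if consecutive >= max_bad_days:
            return False                        # condition not met: unhealthy
    return True

def push_prediction(daily_sleep_hours, first_face, predict_aged, predict_deaged):
    if sleep_condition_met(daily_sleep_hours):
        return predict_deaged(first_face)       # third facial image (de-aged)
    return predict_aged(first_face)             # second facial image (aged)
```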
  • subsequently, the mobile phone can actively push a notification message indicating that the facial prediction is completed to the user. Furthermore, if it is detected that the user has opened the notification message, the mobile phone can display, in the interface of the prediction APP, the second facial image or the third facial image predicted for the user in M years, thereby reminding the user of the specific impact of the current living habits on the appearance.
  • the mobile phone may also display the above-mentioned first facial image to show the user the comparison effect before and after the facial prediction.
  • the mobile phone when displaying the predicted second facial image or the third facial image, can also display the body shape template corresponding to the second facial image or the third facial image, thereby prompting the user of the specific influence of the current living habits on the body shape
  • the embodiment of this application does not impose any restriction on this.
  • the mobile phone can also predict the user's health risks, potential diseased parts, and other health problems based on the user's health data, reminding the user to pay attention to current bad habits and improve them promptly; the embodiment of this application does not impose any restriction on this.
  • an embodiment of the present application discloses an electronic device, including: a touch screen 2201, where the touch screen 2201 includes a touch-sensitive surface 2206 and a display screen 2207; one or more processors 2202; a memory 2203; one or more application programs (not shown); and one or more computer programs 2204.
  • the above-mentioned devices can be connected through one or more communication buses 2205.
  • the one or more computer programs 2204 are stored in the aforementioned memory 2203 and configured to be executed by the one or more processors 2202; the one or more computer programs 2204 include instructions, and the instructions can be used to perform each step in the foregoing embodiments.
  • the foregoing processor 2202 may specifically be the processor 110 shown in FIG. 1
  • the foregoing memory 2203 may specifically be the internal memory 121 and/or the external memory 120 shown in FIG. 1
  • the foregoing display screen 2207 may specifically be the display screen shown in FIG. 1
  • the aforementioned sensor 2208 may specifically be one or more sensors in the sensor module 180 shown in FIG. 1
  • the aforementioned touch-sensitive surface 2206 may specifically be the touch sensor 180K in the sensor module 180 shown in FIG. 1
  • the embodiment of this application does not impose any restriction on this.
  • this application also provides a graphical user interface (GUI), which can be stored in an electronic device.
  • the electronic device may be the electronic device shown in FIG. 1 or FIG. 22.
  • the above-mentioned graphical user interface includes: a first GUI displayed on the touch screen, and the first GUI includes buttons for a facial prediction function; for example, the first GUI may be the interface 501 of the prediction application shown in FIG. 5, The interface 501 includes a button 502 for the facial prediction function. If it is detected that the user has performed a touch event on the button, the electronic device may obtain the user's health data and the current user's real first facial image, such as the image 601 shown in FIG. 6. Furthermore, the electronic device may predict the second facial image of the user in M years based on the first facial image and health data. Then, the foregoing graphical user interface may also include a second GUI displayed on the touch screen.
  • the second GUI may be the interface 1001 shown in FIG. 13B.
  • the second GUI includes the user's first facial image 1006 and second facial image 1002;
  • or, the second GUI may be the interface 1001 shown in (a) of FIG. 13A, and the second GUI includes the user's second facial image 1002.
  • the above-mentioned graphical user interface includes: a first GUI displayed on the touch screen, where the first GUI includes a notification message indicating that the appearance prediction is completed. For example, after the electronic device obtains the user's health data, if it detects that the health data does not meet the preset condition, the electronic device can predict the user's aged second facial image in M years based on the user's first facial image and health data; or, if it detects that the health data meets the preset condition, the electronic device can predict the user's de-aged second facial image in M years based on the user's first facial image and health data.
  • the electronic device may display the aforementioned notification message; if it is detected that the user performs a touch event on the notification message, the electronic device may display a second GUI on the touch screen, the second GUI including a second facial image predicted for the user.
  • the second GUI may also include the user's first facial image, etc., which is not limited in the embodiment of the present application.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • a computer readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Abstract

Provided are a facial appearance prediction method and an electronic device, related to the technical field of image processing, allowing a vivid simulation of changes in the facial appearance of a user on the basis of lifestyle habits of the user, enabling the user to intuitively perceive the changes in the facial appearance, thus reminding the user to adjust poor lifestyle habits in a timely manner. The method comprises: an electronic device acquires a first image, the first image comprising a first facial image of a user; the electronic device acquires health data of the user in a preset time, the health data comprising at least one of exercise data, sleep data, nutritional intake data, or data on the length of time that the user spends on using an electronic device; the electronic device makes a prediction with respect to the facial appearance of the user on the basis of the health data and of the first facial image to produce a second image, the second image comprising a second facial image of the user; and the electronic device displays a first interface, the first interface comprising the second facial image, or, the first interface comprising the first facial image and the second facial image.

Description

Facial appearance prediction method and electronic device
This application claims priority to the Chinese patent application No. 201910142345.3, entitled "A facial appearance prediction method and electronic device", filed with the State Intellectual Property Office of China on February 26, 2018, the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of image processing technology, and in particular to a facial appearance prediction method and an electronic device.
Background
With the rapid development of image processing technology, some service providers can now provide users with the function of aging or de-aging the user's facial image.
For example, some photo beautification applications (APPs) can beautify the wrinkles, spots, and acne that appear in the user's facial image, making the beautified facial image look younger. For another example, some applications provide the function of predicting the user's facial image after aging. Such an application can obtain a photo containing an image of the user's face and then use an aging algorithm to age the facial image according to the passage of time, so as to obtain an image of the user's aged face after a certain period of time (for example, 10 years or 20 years). However, many factors affect the aging of a user's appearance; predicting the user's appearance purely on the basis of the passage of time is not accurate, and the user cannot learn in time what causes the appearance to age.
Summary of the invention
The present application provides a facial appearance prediction method and an electronic device, which can realistically simulate changes in the user's facial appearance based on the user's living habits, so that the user can intuitively perceive the changes in appearance and is reminded to adjust bad living habits in time.
To achieve the above objectives, this application adopts the following technical solutions:
A first aspect provides a facial appearance prediction method, including: an electronic device acquires a first image, where the first image includes a first facial image of a user (of course, the first image may also be a bust or full-length image containing the first facial image); the electronic device acquires the user's health data within a preset time, where, for example, the health data may include at least one of the user's exercise data, sleep data, nutritional intake data, or data on the duration of using the electronic device; based on the health data and the first facial image, the electronic device may predict the user's appearance to obtain a predicted second image, where the second image may include a second facial image of the user; and the electronic device may display a first interface, where the first interface includes the predicted second facial image and, of course, may also include the user's first facial image before the prediction.
It can be seen that the appearance change predicted by the electronic device for the user changes with the user's health data; that is, the predicted appearance change is closely related to the user's actual living habits. In this way, when predicting the user's appearance, the electronic device considers not only the passage of time but also the user's health data, which makes the prediction result more accurate, allows the user to intuitively perceive the changes in appearance, and thereby reminds the user to adjust bad living habits in time or maintain good living habits.
In a possible implementation, before the electronic device acquires the first image, the method further includes: the electronic device displays a second interface of a prediction application, where the second interface includes a button for the facial appearance prediction function; in this case, acquiring the first image specifically includes: in response to the user's operation of clicking the button, the electronic device uses a camera to capture the first image; or, in response to the user's operation of clicking the button, the electronic device obtains a photo from an album application as the first image.
Alternatively, before the electronic device acquires the first image, the method further includes: the electronic device determines that the health data satisfies a preset condition, where, for example, the preset condition includes that the health data is greater than a preset value or that the health data is less than a preset value. When the health data is less than the preset value, the user has some bad living habits; in this case, the electronic device can automatically acquire the user's first facial image and predict the user's appearance in combination with the user's health data, thereby reminding the user to adjust bad living habits in time. When the health data is greater than the preset value, the user has good living habits; in this case, the electronic device can automatically acquire the user's first facial image and predict the user's appearance in combination with the user's health data, thereby reminding the user to maintain good living habits.
In a possible implementation, after the electronic device predicts the user's appearance based on the health data and the first facial image to obtain the second image, the method further includes: the electronic device displays a notification message indicating that the appearance prediction is completed; and displaying the first interface includes: in response to the user's operation of opening the notification message, the electronic device opens the prediction application and displays the first interface of the prediction application.
In a possible implementation, predicting the user's appearance based on the health data and the first facial image to obtain the second image includes: the electronic device determines a corresponding age influence value K according to the health data, where the age influence value is a deviation value positively or negatively correlated with the user's current age; generally, when the living habits reflected by the user's health data are healthier, the age influence value K is a negative value, and when the living habits reflected by the user's health data are less healthy, the age influence value K is a positive value; the electronic device then predicts, based on the first facial image, the user's second facial image after M+K years to obtain the second image containing the second facial image, where M is a default value or a value set by the user. In this way, the predicted second facial image reflects both the change of the user's face with age and the influence of the user's current living habits on the user's appearance, thereby improving the accuracy of appearance prediction and the user experience.
In a possible implementation, predicting the user's appearance based on the health data and the first facial image to obtain the second image includes: the electronic device predicts the user's third facial image after M years based on the first facial image, where M is a default value or a value set by the user; the electronic device then adds a corresponding appearance effect to the third facial image according to the health data to obtain the user's second facial image after M years. For example, when the health data reflects the user's bad living habits, a corresponding aging effect can be added; when the health data reflects the user's healthy living habits, a corresponding youthful effect can be added.
In a possible implementation, the first interface includes the second facial image and a first switch button; after the electronic device displays the first interface, the method further includes: in response to the user's operation of clicking the first switch button, the electronic device switches the displayed second facial image to the first facial image.
In a possible implementation, the first interface includes the second facial image and a second switch button; after the electronic device displays the first interface, the method further includes: in response to the user's operation of clicking the second switch button, the electronic device switches the displayed second image to a standard facial image, where the standard facial image corresponds to preset standard health data. In this way, by comparing the two images, the user can intuitively learn the impact of the current living habits and of healthy living habits on the future appearance, thereby being reminded and urged to establish healthier living habits.
In a possible implementation, displaying the first interface specifically includes: the electronic device marks, in the second facial image on the first interface, the appearance changes corresponding to the health data. In this way, the user can intuitively understand how the current bad living habits will specifically affect the appearance, so as to be reminded to adjust bad living habits in time.
In a possible implementation, the first interface may further include methods or suggestions for adjusting the user's living habits, so as to help and guide the user to adjust bad living habits as soon as possible.
In a possible implementation, the first interface may further include an aging progress bar and a slider, and the second facial image includes a first predicted image and a second predicted image of the user's face; displaying the second facial image in the first interface includes: if it is detected that the slider is dragged to a first position on the aging progress bar, the electronic device displays the first predicted image corresponding to the first position, where the first predicted image is the predicted facial image of the user after a first time period; if it is detected that the slider is dragged to a second position on the aging progress bar, the electronic device displays the second predicted image corresponding to the second position, where the second predicted image is the predicted facial image of the user after a second time period. In other words, the electronic device can display, in chronological order, facial images predicted for the user at different times based on the user's current living habits, so that the user can dynamically feel how the facial appearance changes over time if the current living habits are maintained.
In a possible implementation, the second image may further include a body shape template predicted for the user after a period of time, where the body shape template corresponds to the user's health data and may include a fattening template or a thinning template. In this way, the user can intuitively and vividly understand the influence of his or her own living habits on the future body shape, and is thereby reminded to adjust bad living habits in time.
In a possible implementation, acquiring the user's health data within the preset time includes: the electronic device acquires the user's health data within the preset time from a wearable device.
A second aspect provides a facial appearance prediction method, including: an electronic device acquires a user's health data within a preset time, where the health data may include at least one of the user's exercise data, sleep data, nutritional intake data, or data on the duration of using the electronic device; the electronic device acquires a first image including the user's first facial image, where, for example, the first image may be captured by the electronic device using a camera or may be a photo obtained by the electronic device from an album application; if the health data does not satisfy a preset condition, indicating that the user's health data reflects bad living habits, the electronic device displays, in one interface, the first facial image and a second facial image predicted for the user after a period of time, where the second facial image is a predicted image obtained by aging the first facial image; correspondingly, if the health data satisfies the preset condition, indicating that the user's living habits are relatively healthy, the electronic device displays, in one interface, the first facial image and a third facial image predicted for the user after a period of time, where the third facial image is a predicted image obtained by de-aging the first facial image.
In a possible implementation, if the health data does not satisfy the preset condition, the interface may further include a first body shape template carrying the first facial image and a second body shape template carrying the second facial image, where the second body shape template is the result of the first body shape template becoming fatter; if the health data satisfies the preset condition, the interface may further include the first body shape template carrying the first facial image and a third body shape template carrying the third facial image, where the third body shape template is the result of the first body shape template becoming thinner. In this way, the user can intuitively and vividly understand the influence of his or her own living habits on the future body shape, and is thereby reminded to adjust bad living habits in time.
In a possible implementation, the period of time is M years, where M is a default value or a value set by the user; when the health data does not satisfy the preset condition, the method further includes: the electronic device performs aging processing on the first facial image based on the health data and M years to obtain the user's second facial image after M years; correspondingly, when the health data satisfies the preset condition, the method further includes: the electronic device performs de-aging processing on the first facial image based on the health data and M years to obtain the user's third facial image after M years.
In a possible implementation, if the health data does not satisfy the preset condition, the electronic device may further display a notification message indicating that the appearance prediction is completed. If it is detected that the user opens the notification message, the electronic device may open the prediction application and display the second facial image.
In a possible implementation, when displaying the second facial image, the electronic device may further mark, in the second facial image, the appearance changes corresponding to the health data. In this way, the user can intuitively understand how the current bad living habits will specifically affect the appearance, so as to be reminded to adjust bad living habits in time.
In a possible implementation, when displaying the second facial image, the electronic device may further display methods or suggestions for adjusting the user's living habits, so as to help and guide the user to adjust bad living habits as soon as possible.
A third aspect provides an electronic device, including: a touch screen, one or more processors, one or more memories, and one or more computer programs; the processor is coupled to both the touch screen and the memory, and the one or more computer programs are stored in the memory; when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the facial appearance prediction method according to any one of the above.
A fourth aspect provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the facial appearance prediction method according to any one of the above.
A fifth aspect provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the facial appearance prediction method according to any one of the above.
A sixth aspect provides a graphical user interface (GUI) stored in the above electronic device, where the electronic device includes a touch screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory; the graphical user interface may include: a first GUI displayed on the touch screen, where the first GUI includes a button for the facial appearance prediction function; and, in response to a touch event on the button, a second GUI displayed on the touch screen, where the second GUI includes the user's first facial image and second facial image, or the second GUI includes the user's second facial image; the first facial image is the user's real facial image, and the second facial image is a facial image after a period of time predicted for the user by the electronic device based on the user's health data and the first facial image.
A seventh aspect provides a GUI stored in the above electronic device, where the electronic device includes a touch screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory; the graphical user interface may include: a first GUI displayed on the touch screen, where the first GUI includes a notification message indicating that the appearance prediction is completed; and, in response to a touch event on the notification message, a second GUI displayed on the touch screen, where the second GUI includes the user's first facial image and second facial image, or the second GUI includes the user's second facial image; the first facial image is the user's real facial image, and the second facial image is a facial image after a period of time predicted for the user by the electronic device based on the user's health data and the first facial image.
It can be understood that the electronic device of the third aspect, the computer storage medium of the fourth aspect, the computer program product of the fifth aspect, and the GUIs of the sixth and seventh aspects provided above are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, and details are not repeated here.
Description of the drawings
FIG. 1 is a first schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic diagram of the photographing principle of a camera according to an embodiment of this application;
FIG. 3 is a first schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 4 is a first schematic flowchart of a facial appearance prediction method according to an embodiment of this application;
FIG. 5 is a second schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 6 is a third schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 7 is a fourth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 8 is a fifth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 9 is a sixth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 10 is a seventh schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 11 is an eighth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 12 is a ninth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 13A is a tenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 13B is an eleventh schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 14 is a twelfth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 15 is a thirteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 16 is a fourteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 17 is a fifteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 18 is a sixteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 19 is a seventeenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 20 is an eighteenth schematic diagram of an application scenario of a facial appearance prediction method according to an embodiment of this application;
FIG. 21 is a second schematic flowchart of a facial appearance prediction method according to an embodiment of this application;
FIG. 22 is a second schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed description
The implementation of the embodiments of this application will be described in detail below with reference to the accompanying drawings.
Exemplarily, the facial appearance prediction method provided by the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, and virtual reality devices; the embodiments of this application do not impose any restriction on this.
Taking the mobile phone 100 as an example of the above electronic device, FIG. 1 shows a schematic structural diagram of the mobile phone 100.
The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the mobile phone 100. In other embodiments of this application, the mobile phone 100 may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
The controller may be the nerve center and command center of the mobile phone 100. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the mobile phone 100.
The I2S interface can be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the communication module 160 through the I2S interface, to implement the function of answering a call through a Bluetooth headset.
The PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal. In some embodiments, the audio module 170 and the communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit an audio signal to the communication module 160 through the PCM interface, to implement the function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is generally used to connect the processor 110 and the communication module 160. For example, the processor 110 communicates with the Bluetooth module in the communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the communication module 160 through the UART interface, to implement the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the mobile phone 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the mobile phone 100.
The GPIO interface can be configured through software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the communication module 160, the audio module 170, the sensor module 180, and so on. The GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, and so on. The USB interface 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices. It can also be used to connect earphones and play audio through the earphones. The interface can further be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of this application are merely illustrative, and do not constitute a structural limitation on the mobile phone 100. In other embodiments of this application, the mobile phone 100 may also adopt interface connection modes different from those in the foregoing embodiments, or a combination of multiple interface connection modes.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of the wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the mobile phone 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the communication module 160, and the like. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
The wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the mobile phone 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The radio frequency module 150 may provide wireless communication solutions applied to the mobile phone 100, including 2G/3G/4G/5G. The radio frequency module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The radio frequency module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation. The radio frequency module 150 may also amplify a signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some functional modules of the radio frequency module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the radio frequency module 150 and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the radio frequency module 150 or other functional modules.
The communication module 160 may provide wireless communication solutions applied to the mobile phone 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like. The communication module 160 may be one or more devices integrating at least one communication processing module. The communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, the antenna 1 of the mobile phone 100 is coupled to the radio frequency module 150, and the antenna 2 is coupled to the communication module 160, so that the mobile phone 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The mobile phone 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The mobile phone 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin tone of the image. The ISP can further optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. In some embodiments, the mobile phone 100 may include 1 or N cameras, where N is a positive integer greater than 1. The camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a photosensitive element (sensor), and the photosensitive element may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal oxide semiconductor) device.
Still as shown in FIG. 2, during shooting, the light reflected from the photographed object passes through the lens to generate an optical image, which is projected onto the photosensitive element. The photosensitive element converts the received light signal into an electrical signal, and the camera 193 then sends the obtained electrical signal to a DSP (digital signal processing) module for digital signal processing, finally obtaining a digital image. The digital image can be output on the mobile phone 100 through the display screen 194, or stored in the internal memory 121 (or the external memory 120).
Video codecs are used to compress or decompress digital video. The mobile phone 100 may support one or more video codecs. In this way, the mobile phone 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it can quickly process input information and can also continuously learn by itself. Applications such as intelligent cognition of the mobile phone 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, to save files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the mobile phone 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function). The data storage area can store data (such as audio data and a phone book) created during the use of the mobile phone 100. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The mobile phone 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The mobile phone 100 can listen to music or a hands-free call through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the mobile phone 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
The microphone 170C, also called a "mouthpiece" or "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The mobile phone 100 may be provided with at least one microphone 170C. In other embodiments, the mobile phone 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the mobile phone 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The button 190 includes a power button, a volume button, and so on. The button 190 may be a mechanical button or a touch button. The mobile phone 100 can receive button input and generate button signal input related to user settings and function control of the mobile phone 100.
The motor 191 can generate vibration prompts. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as photographing and audio playback) can correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects. The touch vibration feedback effect can also be customized.
The indicator 192 may be an indicator light, which can be used to indicate the charging status and power changes, and can also be used to indicate messages, missed calls, notifications, and so on.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the mobile phone 100. The mobile phone 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 can also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The mobile phone 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the mobile phone 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the mobile phone 100 and cannot be separated from the mobile phone 100.
In the embodiments of this application, the mobile phone 100 can detect the user's health data through one or more sensors in the sensor module 180, and the health data can reflect the characteristics of the user's living habits. For example, the health data may include one or more items of data such as the user's daily exercise data (for example, exercise time and amount of exercise), sleep data (for example, time of falling asleep and sleep duration), nutritional intake data (for example, calories taken in and meal times), and the time and duration of mobile phone use. The mobile phone 100 may store the detected health data of the most recent period of time (for example, the last six months, the last month, the last week, the last day, or the last several hours) in the external memory 120 or the internal memory 121 of the mobile phone 100.
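For illustration only, one day of the health data described above could be organized as a simple record before being written to storage; this is a minimal sketch, and the field names, units, and file format are assumptions rather than part of this application.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DailyHealthRecord:
    """One day of health data (illustrative fields only)."""
    date: str                 # e.g. "2019-11-20"
    sleep_hours: float        # sleep duration
    steps: int                # walking steps
    calories_kcal: float      # calorie intake
    screen_time_hours: float  # mobile phone usage time

def save_record(record: DailyHealthRecord, path: str) -> None:
    # Append the record as one JSON line, standing in for the internal/external memory.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

save_record(DailyHealthRecord("2019-11-20", 6.5, 8200, 2100.0, 4.2), "health_log.jsonl")
```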
Alternatively, as shown in FIG. 3, the mobile phone 100 may also interact with the wearable device 200 through the communication module 160. The wearable device 200 may be a device such as a smart watch, a smart band, smart glasses, a smart helmet, or smart earphones, which is not limited in the embodiments of this application. After the mobile phone 100 establishes a connection with the wearable device 200, the wearable device 200 can send the collected health data of the user to the mobile phone 100. Of course, the user can also manually enter his or her own health data into the mobile phone 100, which is not limited in the embodiments of this application either.
Exemplarily, an application with a facial appearance prediction function (which may be called a prediction APP) may be installed in the mobile phone 100. If it is detected that the user turns on the facial appearance prediction function in the prediction APP, the mobile phone 100 can obtain data containing the user's facial image, and the mobile phone 100 can obtain the user's health data within a preset time period (the most recent period of time). Furthermore, in combination with the user's recent health data, the mobile phone 100 can predict, based on the user's facial image, the changes in the user's appearance after a period of time (for example, 1 year, 3 years, 5 years, or 10 years), thereby generating an image containing the result of the appearance change and displaying the image to the user.
In this way, the appearance change predicted by the mobile phone 100 for the user can change with the user's health data; that is, the appearance change predicted for the user is closely related to the user's actual living habits. For example, if the user's actual living habits are relatively healthy, the rate of facial aging predicted by the mobile phone 100 will be relatively slow; if the user's actual living habits are not very healthy, the rate of facial aging predicted by the mobile phone 100 will be relatively fast. It can be seen that when predicting the user's appearance, the mobile phone 100 not only considers the factor of the passage of time, but also incorporates the user's health data, so that the prediction result of the user's appearance is more accurate.
In addition, when displaying the predicted image of the user after aging, the mobile phone 100 can also remind the user of which unhealthy living habits the user currently has, or remind the user of how these unhealthy living habits will affect the appearance. In this way, the user can learn more intuitively and vividly the impact of current living habits on the appearance, so that the user is reminded and urged, by means of appearance prediction, to establish healthier living habits.
A facial appearance prediction method provided by an embodiment of this application will be described in detail below with reference to the accompanying drawings. As shown in FIG. 4, taking a mobile phone as an example of the electronic device, the method may include steps S401-S404.
S401. The mobile phone acquires a first image, where the first image contains a facial image of the user (which may also be called a first facial image).
Exemplarily, an APP with a facial appearance prediction function (referred to as a prediction APP in subsequent embodiments) may be installed in the mobile phone. The prediction APP may be a camera APP, a photo beautification APP, a sports APP, a health APP, or the like, which is not limited in the embodiments of this application. As a possible implementation, the facial appearance prediction function may also be set as a function option on the minus-one screen of the mobile phone or in the pull-down menu of the mobile phone. When the user taps the facial appearance prediction function on the minus-one screen or in the pull-down menu of the mobile phone, the mobile phone can display the interactive interface 501 shown in FIG. 5. Here, the description takes the user tapping the prediction APP as an example, but this does not constitute a limitation.
If an operation of the user starting the prediction APP is detected, as shown in FIG. 5, the mobile phone can start the prediction APP and display the interactive interface 501 of the prediction APP. A button for the appearance prediction function may be provided in the interactive interface 501, for example, the "Test" button 502 shown in FIG. 5, and the button 502 can be used to enable the appearance prediction function. If it is detected that the user taps the button 502, the mobile phone can call the camera APP to turn on the camera and capture the current shooting picture.
As shown in FIG. 6, the mobile phone can display the captured shooting picture 601 in the preview interface 602. In addition, the mobile phone can prompt the user to move the mobile phone so that the face enters the shooting picture 601. For example, the mobile phone may prompt the user in text in the preview interface 602 to use the front camera to take a facial image. For another example, the mobile phone may prompt the user by voice to look straight at the rear camera and adjust the distance between the mobile phone and the user, so that the mobile phone can capture the user's facial image, that is, the first facial image, in the shooting picture 601.
When capturing the shooting picture 601, the mobile phone can use a preset face detection algorithm to identify whether the shooting picture 601 contains a face that meets a preset size. If it is detected that the shooting picture 601 contains a face of the preset size, the mobile phone can automatically perform a photographing operation to obtain the image in the current shooting picture 601 (that is, the first image), and the first image contains the user's facial image (that is, the first facial image). Of course, the user can also manually tap the photographing button 603 in the preview interface 602; in response to the user's operation of tapping the photographing button 603, the mobile phone can save the shooting picture 601 obtained at this time as the first image in the memory.
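As a non-authoritative sketch of the auto-capture check described above, the snippet below uses OpenCV's Haar cascade as a stand-in for the "preset face detection algorithm"; the 0.2 area ratio standing in for "a face of the preset size" is an assumed value.

```python
import cv2

# Haar cascade face detector bundled with OpenCV, used here only as an example detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def should_auto_capture(frame, min_area_ratio=0.2):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    frame_area = frame.shape[0] * frame.shape[1]
    # Trigger the photographing operation once any detected face is large enough.
    return any(w * h >= min_area_ratio * frame_area for (x, y, w, h) in faces)
```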
In other embodiments, after detecting that the user has turned on the above appearance prediction function, the mobile phone may also prompt the user to select a photo containing the user's face from the photo album. Furthermore, the mobile phone can extract the user's facial image (that is, the first facial image) from the photo selected by the user through the face detection algorithm. Of course, the above first facial image may also be obtained by the mobile phone from a server or another electronic device, which is not limited in the embodiments of this application.
In addition to acquiring the first image containing the user's facial image, as shown in FIG. 7, the mobile phone can also prompt the user to enter his or her current age. In this way, the mobile phone can subsequently predict the user's appearance based on the user's current age. For another example, still as shown in FIG. 7, the mobile phone may also prompt the user to select how far into the future the appearance change should be predicted; for example, the user may choose to predict his or her appearance after 1 year, 5 years, or 10 years. Of course, the mobile phone may also automatically identify the user's age based on the acquired first image, or the mobile phone may by default predict the user's appearance after a certain period of time (for example, 5 years) based on the acquired first image; the embodiments of this application do not impose any restriction on this.
S402. The mobile phone acquires the user's health data within a preset time.
After detecting that the user has turned on the above appearance prediction function, the mobile phone can also acquire the user's health data for the most recent period of time (for example, one week, one month, three months, half a year, or one year). The health data can reflect the characteristics of the user's living habits.
For example, the mobile phone can record health data such as the user's daily work and rest times, sleep quality, meal times, calorie intake, exercise time, and amount of exercise. Alternatively, the mobile phone can obtain, from the user's wearable device, one or more items of the user's health data detected by the wearable device. Alternatively, the mobile phone can record various health data manually entered by the user. Then, after detecting that the user has turned on the above appearance prediction function, the mobile phone can acquire the items of health data recorded by the mobile phone within the preset time period. For example, the mobile phone can generate an n-dimensional (n > 0) matrix based on the n items of acquired health data, where each dimension (vector) of the matrix corresponds to one item of health data. As shown in (a) of FIG. 8, matrix A contains the user's daily health data acquired by the mobile phone over the last month (30 days). The health data includes four items: the user's daily sleep time, number of walking steps, calorie intake, and mobile phone usage time. Each row vector in matrix A corresponds to one item of health data.
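A minimal sketch of assembling a matrix like A from 30 daily records follows (n = 4 items of health data, one row per item, one column per day); the field names follow the illustrative DailyHealthRecord above and are assumptions, not part of this application.

```python
import numpy as np

def build_health_matrix(records):
    """records: an iterable of 30 daily records with the illustrative fields."""
    rows = [
        [r.sleep_hours for r in records],        # sleep time
        [r.steps for r in records],              # walking steps
        [r.calories_kcal for r in records],      # calorie intake
        [r.screen_time_hours for r in records],  # mobile phone usage time
    ]
    return np.array(rows, dtype=float)           # matrix A with shape (4, 30)
```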
It should be noted that the embodiments of this application do not limit the execution order of the above steps S401 and S402. For example, the mobile phone may acquire the user's health data for the last month after acquiring the user's first image. For another example, the mobile phone may also monitor the user's health data for the last month in real time. If unhealthy behavior habits are detected in the user's health data, for example, if it is detected that the user has slept less than 5 hours per night for a whole week, the mobile phone can automatically turn on the above appearance prediction function, obtain the most recently taken photo containing the user's facial image from the photo album, and then remind the user of the impact of the unhealthy behavior habits on the appearance by performing the following steps S403-S404.
S403. The mobile phone performs appearance prediction on the facial image in the first image based on the above health data and a preset number of years (for example, 5 years).
Take as an example the mobile phone predicting the user's appearance in 5 years based on the facial image acquired in step S401. When predicting the change in the user's appearance after 5 years, in addition to considering the influence of time on the user's appearance, the mobile phone also predicts the image of the user's face after 5 years (which may also be called a second facial image) based on the user's health data acquired in step S402. That is, based on the user's current living habits, the mobile phone can predict for the user the impact on the user's appearance if these living habits continue, thereby reminding and urging the user to establish healthier living habits.
In the following, several ways of performing appearance prediction on the user's facial image in combination with health data, as provided by the examples of this application, will be described in detail.
Method 1
The mobile phone can extract the user's living habit features from the user's n items of health data for the most recent period of time (for example, one month). Exemplarily, the mobile phone can calculate a feature value for each of the n items of health data. For example, as shown in (b) of FIG. 8, for the health data of sleep time, the mobile phone can calculate the user's average daily sleep time b1 over the last month; for the health data of walking steps, the mobile phone can calculate the user's average daily number of walking steps b2 over the last month; for the health data of calorie intake, the mobile phone can calculate the user's average daily calorie intake b3 over the last month; and for the health data of mobile phone usage time, the mobile phone can calculate the user's average daily mobile phone usage time b4 over the last month. Of course, in addition to calculating the average value of health data over a period of time, the mobile phone can also use other algorithms to extract the corresponding feature value from each item of health data. In this way, still as shown in (b) of FIG. 8, the mobile phone can obtain an n-dimensional feature vector B, where each value in the n-dimensional feature vector B represents the feature value of one item of health data. The whole n-dimensional feature vector can reflect the characteristics of the user's living habits. For example, if the n-dimensional feature vector indicates that the user falls asleep later than 12 o'clock, the user has the living habit characteristic of going to bed late.
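A minimal sketch of extracting the feature vector B from matrix A by taking the per-item daily average, as in the example above, might look as follows; other feature extractors could of course be substituted.

```python
import numpy as np

def extract_habit_features(A: np.ndarray) -> np.ndarray:
    # A has shape (n, days); B has shape (n,), one feature value per item of health data.
    return A.mean(axis=1)

# B = [b1 (avg sleep hours), b2 (avg steps), b3 (avg kcal), b4 (avg screen hours)]
```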
Furthermore, the mobile phone can determine, based on the user's living habit features, an age influence value corresponding to those living habit features. The age influence value refers to a deviation value that is positively or negatively correlated with the facial features of a certain fixed age. For example, for a 28-year-old user, if the user's living habit features are relatively healthy, the user's facial image may present the facial features of a 26-year-old; that is, the age influence value in this case is -2 years. Correspondingly, for a 28-year-old user, if the user's living habit features are not very healthy, the user's facial image may present the facial features of a 31-year-old; that is, the age influence value in this case is 3 years. Then, in Method 1, the mobile phone can determine, based on the determined living habit features of the user, the age influence value corresponding to those living habit features.
Exemplarily, an algorithm such as a decision tree or a convolutional neural network (CNN) can be used in advance to train the correspondence between different living habit features and the corresponding age influence values. For example, a mobile phone or a server can collect a large number of facial images of users of different ages and with different living habit features for machine learning and training, thereby establishing an input-output model that maps different living habit features to different age influence values. In this way, after the mobile phone inputs the determined living habit features of the user (for example, the above n-dimensional feature vector) into the input-output model, the input-output model can output the corresponding age influence value. Generally, the healthier the user's living habits, the smaller the corresponding age influence value; the less healthy the user's living habits, the larger the corresponding age influence value.
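As a hedged sketch of such an input-output model, the snippet below fits a decision-tree regressor (one of the algorithm families named above) that maps a habit feature vector to an age influence value; the tiny training arrays are placeholders for the labeled data that a phone or server would actually collect, not real data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# X: one n-dimensional habit feature vector B per training user.
# y: the observed age influence value (apparent age minus true age) for that user.
X_train = np.array([[7.5, 9000, 1900, 2.0],
                    [5.0, 2000, 2800, 7.5],
                    [6.0, 6000, 2200, 4.0]])
y_train = np.array([-2.0, 3.0, 0.5])

model = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)

def age_influence(habit_features: np.ndarray) -> float:
    # Returns the predicted age influence value for one user's feature vector B.
    return float(model.predict(habit_features.reshape(1, -1))[0])
```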
It should be noted that the above input-output model may be set inside the mobile phone or on a server. When the input-output model is set on the server, the mobile phone can send the determined living habit features of the user to the server, and the server uses the input-output model to determine the corresponding age influence value. Alternatively, the mobile phone can send the acquired health data to the server, and the server extracts the living habit features from the health data and determines the corresponding age influence value. Alternatively, the user's health data may be stored on the server, and the mobile phone can send a prediction instruction to the server, so that the server obtains the user's health data and determines, according to the above method, the age influence value corresponding to the user's living habit features.
Still taking the mobile phone predicting the user's appearance in 5 years as an example, as shown in FIG. 9, if the user's current age is 27, the mobile phone needs to predict the user's facial image 801 when the user is 32 years old. Due to the influence of the user's current living habit features, if the mobile phone determines that the corresponding age influence value is -2 years, it means that when the user is 32 years old, the face will actually present the facial features of a 30-year-old. Therefore, the mobile phone can predict, based on the facial image in the first image, the user's facial image 802 at the age of 30, and display the predicted result to the user as the user's facial image at the age of 32.
Correspondingly, if the mobile phone determines that the corresponding age influence value is 3 years, it means that when the user is 32 years old, the face will actually present the facial features of a 35-year-old. Then, the mobile phone can predict, based on the facial image in the first image, the user's facial image at the age of 35, and display the predicted result to the user as the user's facial image at the age of 32.
Exemplarily, the mobile phone can input the facial image in the first image into a preset prediction model, and input the user's current age and the target age to be predicted into the prediction model. In Method 1, the target age is the result of superimposing the age that actually needs to be predicted for the user and the above age influence value. For example, if the age that actually needs to be predicted for the user is 30, and the mobile phone determines the corresponding age influence value to be 2 years based on the user's health data, then the target age that the mobile phone needs to predict this time is 32. Correspondingly, if the mobile phone determines the corresponding age influence value to be -2 years based on the user's health data, then the target age that the mobile phone needs to predict this time is 28. Furthermore, based on the user's facial image in the first image and the current age, the prediction model can use an aging processing algorithm to predict the user's facial image at the target age, obtaining a second image containing the user's face after aging processing.
示例性的,手机或服务器可创建一个样本池,样本池中包含大量不同年龄的用户的面部图像。进而,手机或服务器可基于不同年龄的用户的面部图像进行深度学习,从而后建立上述预测模型。例如,服务器可基于生成对抗网络(Generative Adversarial Nets,GAN)对不同年龄的用户的面部图像进行训练和学习,建立上述预测模型。Exemplarily, a mobile phone or server can create a sample pool, which contains a large number of facial images of users of different ages. Furthermore, the mobile phone or the server can perform in-depth learning based on facial images of users of different ages, so as to establish the aforementioned prediction model. For example, the server may train and learn facial images of users of different ages based on Generative Adversarial Nets (GAN) to establish the above prediction model.
一般,GAN中包括生成模型(generative model)和判别模型(discriminative model)。如图10所示,可向生成模型中输入用户当前真实的面部图像(例如上述第一图像)和目标年龄标签,生成模型可根据用户当前真实的面部图像和目标年龄标签生成对应目标年龄的面部预测图像,而判别模型可根据当前用户真实的面部图像,目标年龄标签以及生成模型输出的面部预测图像,判别生成模型输出的面部预测图像是真实的图像还是生成的图像。那么,整个GAN通过不断地迭代和训练,当生成模型和判别模型达到平衡时,生成模型可生成足以“以假乱真”的图片,使判别模型无法判断生成模型输出的面部图像的真假。此时,生成模型输出的面部图像即为用户在目标年龄时老化后的面部图像。Generally, GAN includes a generative model (generative model) and a discriminative model (discriminative model). As shown in Figure 10, the user's current real facial image (such as the first image) and target age label can be input into the generation model, and the generation model can generate a face corresponding to the target age according to the user's current real facial image and target age label. Predict the image, and the discrimination model can determine whether the facial prediction image output by the generation model is a real image or a generated image based on the current user's real facial image, target age label, and facial prediction image output by the generation model. Then, through continuous iteration and training of the entire GAN, when the generative model and the discriminant model reach a balance, the generative model can generate images that are "false and true", so that the discriminant model cannot judge the authenticity of the facial image output by the generative model. At this time, the facial image output by the generation model is the facial image that the user has aged at the target age.
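The following is a minimal, hypothetical sketch of the conditional-GAN training loop described above, using small fully connected networks and random tensors in place of a real face dataset; the image size, network shapes, and hyperparameters are assumptions and not the patent's actual model.

    # Hypothetical sketch of the conditional GAN idea: the generator receives a face plus a target-age
    # label and outputs an "aged" face; the discriminator judges (face, age label) pairs as real or fake.
    import torch
    import torch.nn as nn

    IMG = 64 * 64            # flattened grayscale face, assumed for brevity
    AGE_DIM = 1              # target age, normalized to [0, 1]

    gen = nn.Sequential(nn.Linear(IMG + AGE_DIM, 256), nn.ReLU(),
                        nn.Linear(256, IMG), nn.Tanh())
    disc = nn.Sequential(nn.Linear(IMG + AGE_DIM, 256), nn.LeakyReLU(0.2),
                         nn.Linear(256, 1))

    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(100):                       # toy loop; real training needs a labeled face dataset
        young = torch.rand(16, IMG) * 2 - 1       # stand-in for current face images
        old_real = torch.rand(16, IMG) * 2 - 1    # stand-in for real faces at the target age
        age = torch.rand(16, AGE_DIM)             # normalized target-age labels

        # Discriminator step: real (old_real, age) pairs vs. generated (fake, age) pairs.
        fake = gen(torch.cat([young, age], dim=1)).detach()
        d_real = disc(torch.cat([old_real, age], dim=1))
        d_fake = disc(torch.cat([fake, age], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to fool the discriminator.
        fake = gen(torch.cat([young, age], dim=1))
        d_fake = disc(torch.cat([fake, age], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()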
It can be seen that, in Method 1, when predicting the user's facial image in M years, the mobile phone may use the user's health data to determine the age influence value X of the user's lifestyle. The mobile phone then actually predicts the user's facial image in M+X years, so that the predicted facial image reflects both the changes of the user's face with age and the influence of the user's current lifestyle on the user's appearance, thereby improving the accuracy of the appearance prediction and the user experience.
Method 2
In Method 2, the mobile phone may likewise input the facial image in the first image, the user's current age, and the target age to be predicted into the above prediction model. The difference is that, in Method 2, the target age is the user's actual age after the M years selected by the user, or the user's age after a default period of time set by the mobile phone.
For example, if the user chooses, in the interface shown in FIG. 7, to predict the facial image in 5 years, then, as shown in FIG. 11, the mobile phone may use the above prediction model to predict the facial image 901 in 5 years (that is, when the user is 32) based on the user's facial image obtained in step S401 and the user's current age (for example, 27).
Furthermore, based on the acquired health data of the user, the mobile phone may add corresponding appearance effects to the predicted facial image 901. For example, the appearance effects may include one or more of a change in skin luster, a change in skin tone, a change in wrinkles, a change in pigmentation, or a change in how fat or thin the face is.
For example, the mobile phone or the server may create a sample pool containing a large number of facial images of users with different lifestyle features, and then perform deep learning on these facial images to establish the correspondence between different lifestyle features and different appearance effects. For example, a user with the lifestyle feature of going to bed late corresponds to the appearance effect of dark circles under the eyes, and a user with the lifestyle feature of overeating corresponds to the appearance effect of facial obesity.
Then, after the mobile phone extracts the corresponding lifestyle features from the user's health data, it may query, locally or on the server, the appearance effects corresponding to the extracted lifestyle features. As still shown in FIG. 11, the mobile phone may add these appearance effects to the facial image 901 predicted for M years later, so that the mobile phone finally obtains the facial image 902 predicted for the user in M years based on the user's current lifestyle.
That is to say, as shown in FIG. 12, in Method 2 two models may be set in the mobile phone: one is an age-based facial image prediction model 1 (that is, the above prediction model), and the other is a lifestyle-feature-based facial image prediction model 2 (that is, the above correspondence). After the mobile phone extracts the user's facial image from the first image, the mobile phone may input the facial image in the first image, the user's current age, and the target age to be predicted into prediction model 1. Using prediction model 1, the mobile phone can predict the user's facial image 901 at the target age as time passes. The mobile phone may then input into prediction model 2 the n-dimensional feature vector extracted from the n items of health data, which reflects the user's lifestyle, together with the facial image 901 output by prediction model 1. In this way, on the basis of the facial image 901, the mobile phone can use prediction model 2 to predict the user's facial image 902 in M years based on the user's lifestyle.
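The following is a minimal sketch of the two-stage pipeline in Method 2: model 1 ages the face to the target age, and model 2 overlays lifestyle-driven appearance effects on that result. The function names, shapes, and placeholder transforms are assumptions for illustration only; real implementations would call trained models such as those sketched earlier.

    # Hypothetical two-stage pipeline for Method 2 (names, shapes, and transforms are assumptions).
    import numpy as np

    def predict_aged_face(face: np.ndarray, current_age: int, target_age: int) -> np.ndarray:
        """Stage 1 stand-in for the age-based prediction model 1 (e.g. the GAN sketched earlier)."""
        # Placeholder transform; a real model would synthesize age-related texture changes.
        return np.clip(face * (1.0 - 0.002 * (target_age - current_age)), 0.0, 1.0)

    def apply_lifestyle_effects(face: np.ndarray, lifestyle_vec: np.ndarray) -> np.ndarray:
        """Stage 2 stand-in for prediction model 2: add effects (dark circles, puffiness, ...)."""
        # Placeholder: darken the image slightly in proportion to an "unhealthiness" score.
        unhealthiness = float(np.clip(lifestyle_vec.mean(), 0.0, 1.0))
        return np.clip(face - 0.1 * unhealthiness, 0.0, 1.0)

    face_now = np.random.rand(64, 64)          # stand-in for the first facial image
    lifestyle = np.array([0.8, 0.6, 0.9])      # stand-in n-dimensional lifestyle feature vector

    face_901 = predict_aged_face(face_now, current_age=27, target_age=32)   # image 901
    face_902 = apply_lifestyle_effects(face_901, lifestyle)                 # image 902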
It can be seen that, in Method 2, when predicting the user's facial image in M years, the mobile phone may combine the user's health data to add corresponding appearance effects to the facial image it has predicted for M years later, so that the predicted facial image reflects both the changes of the user's face with age and the influence of the user's lifestyle on the user's appearance, thereby improving the accuracy of the appearance prediction and the user experience.
Method 3
In Method 3, the mobile phone or the server may create a single prediction model that takes both age and lifestyle features as variables and predicts aged facial images corresponding to different ages and different lifestyle features. For example, the server may build a face photo library for different age groups and a face photo library for different lifestyle features. The server may then mine and learn the image features in these two libraries through a deep learning algorithm, thereby building a prediction model of how age, lifestyle features, and facial images affect one another.
Then, after the user turns on the appearance prediction function, the mobile phone may extract the user's facial image from the first image obtained in step S401, and extract the user's lifestyle features from the health data obtained in step S402. The mobile phone may then input the user's facial image, lifestyle features, and the target age to be predicted into the above prediction model, obtaining the facial image predicted for the user in M years based on the user's current lifestyle.
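The following is a minimal, hypothetical sketch of what a single joint model conditioned on both the target age and the lifestyle feature vector could look like; the network architecture, dimensions, and normalization are assumptions, not the patent's model.

    # Hypothetical joint model for Method 3: one network conditioned on target age and lifestyle features.
    import torch
    import torch.nn as nn

    class JointAppearancePredictor(nn.Module):
        def __init__(self, img_dim: int = 64 * 64, lifestyle_dim: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_dim + 1 + lifestyle_dim, 512), nn.ReLU(),
                nn.Linear(512, img_dim), nn.Sigmoid())

        def forward(self, face, target_age, lifestyle):
            # face: (B, img_dim); target_age: (B, 1), normalized; lifestyle: (B, lifestyle_dim)
            return self.net(torch.cat([face, target_age, lifestyle], dim=1))

    model = JointAppearancePredictor()
    face = torch.rand(1, 64 * 64)                                   # stand-in facial image
    pred = model(face, torch.tensor([[32 / 100.0]]),                # target age 32, normalized
                 torch.tensor([[0.8, 0.6, 0.9]]))                   # lifestyle feature vector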
It should be noted that the above Method 1 to Method 3 merely illustrate, by way of example, how to predict the user's facial image in M years based on the user's lifestyle. It can be understood that a person skilled in the art may, according to actual application scenarios or practical experience, choose the specific algorithm, model, or implementation for predicting the user's facial image in M years based on the user's lifestyle, and the embodiments of this application impose no limitation on this.
S404. The mobile phone displays the second image, where the second image contains the user's facial image obtained after the appearance prediction.
After the mobile phone performs appearance prediction on the user's facial image in the first image in step S403, the second image can be obtained. The second image contains the facial image predicted for the user after a period of time (that is, the second facial image), and this facial image is associated with the user's health data. Then, in step S404, as shown in (a) of FIG. 13A, the mobile phone may display the second image 1002 in the interface 1001 of the prediction APP, so that the user can see intuitively and vividly what effect keeping the current lifestyle would have on the user's appearance in M years.
For example, a button 1003 corresponding to the current time and a button 1004 corresponding to M years later (for example, 5 years later) may also be provided in the interface 1001 of the prediction APP. If it is detected that the user taps button 1003, as shown in (b) of FIG. 13A, the mobile phone may display the first image 1006 containing the user's facial image that was obtained for this prediction. If it is detected that the user taps button 1004, as shown in (a) of FIG. 13A, the mobile phone may display the second image 1002 of the user's face in 5 years, predicted this time in combination with the user's health data.
For example, still as shown in (a) of FIG. 13A, the mobile phone may also use text 1005 in the interface 1001 of the prediction APP to inform the user of the specific impact of the current lifestyle on the user's appearance, thereby reminding the user to correct bad habits in time.
Alternatively, as shown in FIG. 13B, the mobile phone may display the user's current facial image (that is, the first image 1006) and the facial image predicted for 5 years later (that is, the second image 1002) simultaneously in the interface 1001 of the prediction APP. In this way, the user can intuitively compare the current appearance with the appearance in 5 years, and thus understand the specific impact that the current bad habits will have on the appearance.
In some embodiments, as shown in (a) of FIG. 14, when displaying the second image 1002, the mobile phone may mark the specific problems caused by the user's bad habits at the corresponding locations on the user's face. For example, if the user's health data indicates that the user habitually goes to bed late, and this habit increases facial wrinkles, the mobile phone may add a mark 1101 to the wrinkle area of the user's face when displaying the second image 1002, to indicate that going to bed late will aggravate facial wrinkles. In this way, the user can intuitively understand exactly how the current bad habits will affect the appearance, which reminds the user to correct them in time.
Further, if it is detected that the user taps the mark 1101, as shown in (b) of FIG. 14, the mobile phone may, by text, voice, or other means, present specific methods or suggestions for correcting the related bad habit, thereby helping and guiding the user to adjust bad habits as soon as possible.
In some embodiments, the mobile phone may also show the user how the user's facial appearance changes over different lengths of time. For example, as shown in (a) or (b) of FIG. 15, the mobile phone may provide an aging progress bar 1201 and a slider 1202 in the interface 1001 of the prediction APP. The user may drag the slider 1202 along the aging progress bar 1201 in the interface 1001. For example, as shown in (a) of FIG. 15, if it is detected that the user drags the slider 1202 to the middle point A of the aging progress bar 1201, the mobile phone may display in the interface 1001 the facial image predicted for the user in 5 years. As shown in (b) of FIG. 15, if it is detected that the user drags the slider 1202 to the end point B of the aging progress bar 1201, the mobile phone may display in the interface 1001 the facial image predicted for the user in 10 years.
For the specific method by which the mobile phone predicts the user's facial image in M years based on the user's health data, reference may be made to the description of step S403, which is not repeated here. Alternatively, after the mobile phone has predicted the user's facial image A in M years, if the mobile phone needs to predict the user's facial image B in M+T years, the mobile phone may multiply the pixel value of each pixel unit in facial image A by a corresponding scaling coefficient w, thereby obtaining the user's facial image B in M+T years.
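The following is a minimal sketch of the per-pixel scaling step described above, assuming grayscale images normalized to [0, 1] and a uniform coefficient map; the coefficient value and image format are assumptions for illustration.

    # Hypothetical per-pixel scaling: image B for M+T years = image A (M years) multiplied by coefficients w.
    import numpy as np

    def extrapolate_face(face_a: np.ndarray, w: np.ndarray) -> np.ndarray:
        """face_a: predicted face at M years, values in [0, 1]; w: per-pixel coefficients, same shape."""
        return np.clip(face_a * w, 0.0, 1.0)

    face_a = np.random.rand(64, 64)          # stand-in for facial image A (M years)
    w = np.full_like(face_a, 0.95)           # assumed coefficient, e.g. slightly duller skin tone
    face_b = extrapolate_face(face_a, w)     # facial image B (M + T years)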
That is, as the user drags the slider 1202 along the aging progress bar 1201, the mobile phone can display, in chronological order, the facial images predicted for the user at different times based on the user's current lifestyle, so that the user can dynamically perceive how the facial appearance changes over time if the current lifestyle is maintained.
In some other embodiments, the mobile phone may also display the facial image predicted for the user in M years assuming the user changes bad habits into healthy ones. For example, as shown in (a) of FIG. 16, the mobile phone may also provide a healthy-life button 1301 in the interface 1001 of the prediction APP. When the mobile phone is displaying the second image 1002 predicted for 5 years later based on the user's current lifestyle, if it detects that the user taps button 1301, the mobile phone may predict the user's facial image 1302 in 5 years based on the first image and preset standard health data. The preset standard health data may be health data, compiled by the mobile phone or the server, of users with relatively healthy lifestyles.
In this way, in response to the user tapping button 1301, as shown in (b) of FIG. 16, the mobile phone may display, in the interface 1001 of the prediction APP, the facial image 1302 predicted for the user in 5 years based on the standard health data. By comparing the two images, the user can intuitively see how the current lifestyle and a healthy lifestyle would each affect the future appearance, which reminds and urges the user to build healthier habits.
In some other embodiments, the mobile phone may also show the user the effect on the user's facial appearance of maintaining a healthy lifestyle for different lengths of time. For example, as shown in (a)-(b) of FIG. 17, the mobile phone provides a first progress bar 1401 and a second progress bar 1402 in the interface 1001 of the prediction APP. The first progress bar 1401 indicates the aging progress when the user keeps the current lifestyle, and the second progress bar 1402 indicates the aging progress when the user keeps a standard healthy lifestyle. The user can drag the slider 1403 along the first progress bar 1401 and the second progress bar 1402.
For example, as shown in (a) of FIG. 17, when the user drags the slider 1403 along the first progress bar 1401, the mobile phone can display, in chronological order and based on the user's current lifestyle, the facial images predicted for the user at different times, so that the user can dynamically perceive how the facial appearance changes over time if the current lifestyle is maintained. As shown in (b) of FIG. 17, when the user drags the slider 1403 along the second progress bar 1402, the mobile phone can display, in chronological order and based on a standard healthy lifestyle, the facial images predicted for the user at different times, so that the user can dynamically perceive how the facial appearance would change over time if the current bad habits were corrected.
In some other embodiments, the mobile phone may also save the facial images predicted for the user in the most recent one or more predictions, together with the corresponding lifestyle features. For example, on October 1, 2018, the mobile phone predicted facial image A for the user 3 months later based on the user's lifestyle at that time, and saved facial image A and the corresponding lifestyle feature 1. If, around January 1, 2019, it is detected that the user turns on the appearance prediction function again, the mobile phone may obtain the user's current facial image B and the user's current lifestyle feature 2. By comparing facial image A with facial image B, and comparing lifestyle feature 1 with lifestyle feature 2, the mobile phone can analyze the specific impact that the change in the user's lifestyle has had on the user's facial appearance.
Then, as shown in (a) of FIG. 18, when displaying the facial image B acquired this time in the interface 1501 of the prediction APP, the mobile phone may also inform the user how the lifestyle has changed over the recent period and how that change has affected the appearance. In addition, if it is detected that the user taps the historical prediction record button 1502 in the interface 1501, then, as shown in (b) of FIG. 18, the mobile phone may display the facial image previously predicted for the same period based on the user's historical lifestyle (for example, the aforementioned facial image A), so that the user can intuitively see the effect of the change in lifestyle on the appearance.
In some other embodiments, in addition to predicting changes in the user's facial appearance based on the user's health data, the mobile phone may also predict changes in the user's body shape, for example changes in weight, the body parts gaining fat, whether the user develops a hunched back, or whether the user develops bow legs. For example, the mobile phone or the server may use a GAN to train the correspondence between different lifestyle features and different body shape data. Then, when displaying the user's facial images before and after the prediction, the mobile phone may also load the user's facial image onto the corresponding body shape template and present it to the user.
Taking the use of a GAN to predict changes in the user's body shape based on the user's weight as an example, as shown in FIG. 19, a real image containing the user's real body shape and a target weight label may be input into the GAN's generative model, which generates a body shape prediction image corresponding to the target weight. The discriminative model then judges, based on the user's current real body shape, the target weight label, and the body shape prediction image output by the generative model, whether that prediction image is a real image or a generated one. Through continuous iteration and training of the whole GAN, when the generative model and the discriminative model reach equilibrium, the generative model can generate images realistic enough to pass for real, so that the discriminative model can no longer tell whether the image output by the generative model is real or generated. At that point, the image output by the generative model is the body shape template image corresponding to the user's weight being the target weight.
For example, as shown in (a) of FIG. 20, when displaying the facial image 1601 acquired this time in the interface 1501 of the prediction APP, the mobile phone may load the facial image 1601 onto a preset body shape template 1602 for display. If it is detected that the user taps the 5-years-later button 1600, the mobile phone may predict the user's facial image 1603 in 5 years based on the user's health data, and may also predict the user's body shape data in 5 years based on the user's health data. Then, as shown in (b) of FIG. 20, the mobile phone may load the predicted facial image 1603 onto the body shape template 1604 corresponding to the predicted body shape data for display. In this way, the user can intuitively and vividly understand the impact of the current lifestyle on the future body shape, which reminds the user to correct bad habits in time.
In some other embodiments of this application, the mobile phone may also actively push the predicted facial image to the user based on the acquired health data. For example, as shown in FIG. 21, after the mobile phone obtains the user's health data within a recent preset period (for example, the most recent month), the mobile phone may determine whether the health data satisfies a preset condition. The preset condition may be that the health data is greater than a certain preset value or less than a certain preset value. For example, if the user's health data shows that the user has slept less than 5 hours per night for a whole week, the mobile phone may determine that the health data does not satisfy the preset condition. In that case, the mobile phone may automatically turn on the appearance prediction function to obtain the user's real facial image (that is, the first facial image), and then, by performing steps S403-S404, predict the user's facial image in M years (that is, the second facial image). Because the user's health data does not satisfy the preset condition, the user's current lifestyle is unhealthy, and the second facial image predicted for the user based on this health data is the result of aging the user's face.
Correspondingly, if the mobile phone determines that the user's health data satisfies the preset condition, the mobile phone may likewise automatically turn on the appearance prediction function to obtain the user's real facial image (that is, the first facial image), and predict the user's facial image in M years (that is, the third facial image) by performing steps S403-S404. The difference is that, because the user's health data satisfies the preset condition, the user's current lifestyle is relatively healthy; therefore, the third facial image predicted for the user based on this health data is the result of de-aging the user's face.
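The following is a minimal sketch of the push-trigger branch logic described in the two preceding paragraphs; the health fields, threshold values, and function names are assumptions introduced for illustration, not values from the disclosed embodiments.

    # Hypothetical push-trigger logic: if recent health data breaches a preset condition, run the
    # prediction automatically, choosing the aging branch (unhealthy) or the de-aging branch (healthy).
    from dataclasses import dataclass

    @dataclass
    class HealthData:
        avg_sleep_hours: float            # average over the recent preset period
        exercise_minutes_per_day: float

    MIN_SLEEP_HOURS = 5.0                 # assumed preset value
    MIN_EXERCISE_MINUTES = 20.0           # assumed preset value

    def meets_preset_condition(h: HealthData) -> bool:
        return (h.avg_sleep_hours >= MIN_SLEEP_HOURS and
                h.exercise_minutes_per_day >= MIN_EXERCISE_MINUTES)

    def predict_face(image, years_m, direction):
        """Stand-in for steps S403-S404; a real implementation would call the prediction model."""
        return {"image": image, "years": years_m, "direction": direction}

    def auto_predict(h: HealthData, first_facial_image, years_m: int):
        if meets_preset_condition(h):
            # Healthy habits: predict the de-aged third facial image.
            return predict_face(first_facial_image, years_m, direction="de-age")
        # Unhealthy habits: predict the aged second facial image.
        return predict_face(first_facial_image, years_m, direction="age")

    result = auto_predict(HealthData(avg_sleep_hours=4.5, exercise_minutes_per_day=10.0),
                          first_facial_image="face.png", years_m=5)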
Still as shown in FIG. 21, after the mobile phone predicts the user's second facial image or third facial image in M years based on the user's health data, the mobile phone may actively push a notification message to the user indicating that the appearance prediction is complete. If it is then detected that the user opens the notification message, the mobile phone may display, in the interface of the prediction APP, the second facial image or the third facial image predicted for M years later, thereby reminding the user of the specific impact of the current lifestyle on the appearance. Of course, when displaying the predicted second facial image or third facial image, the mobile phone may also display the first facial image to show the user the before-and-after comparison. For another example, when displaying the predicted second facial image or third facial image, the mobile phone may also display the body shape template corresponding to that image, thereby indicating the specific impact of the user's current lifestyle on the body shape; the embodiments of this application impose no limitation on this.
It should be noted that, in addition to predicting the user's facial appearance and body shape, the mobile phone may also predict health issues such as the user's disease risk and potentially affected body parts based on the user's health data, reminding the user to pay close attention to current bad habits and improve them in time; the embodiments of this application impose no limitation on this.
As shown in FIG. 22, an embodiment of this application discloses an electronic device, including: a touchscreen 2201, where the touchscreen 2201 includes a touch-sensitive surface 2206 and a display 2207; one or more processors 2202; a memory 2203; one or more application programs (not shown); and one or more computer programs 2204. The above components may be connected through one or more communication buses 2205. The one or more computer programs 2204 are stored in the memory 2203 and configured to be executed by the one or more processors 2202, and the one or more computer programs 2204 include instructions that can be used to perform the steps in the above embodiments.
For example, the processor 2202 may specifically be the processor 110 shown in FIG. 1, the memory 2203 may specifically be the internal memory 121 and/or the external memory 120 shown in FIG. 1, the display 2207 may specifically be the display 194 shown in FIG. 1, the sensor 2208 may specifically be one or more sensors in the sensor module 180 shown in FIG. 1, and the touch-sensitive surface 2206 may specifically be the touch sensor 180K in the sensor module 180 shown in FIG. 1; the embodiments of this application impose no limitation on this.
In some embodiments, this application further provides a graphical user interface (GUI), which may be stored in an electronic device. For example, the electronic device may be the electronic device shown in FIG. 1 or FIG. 22.
In some embodiments, the graphical user interface includes: a first GUI displayed on the touchscreen, where the first GUI includes a button for the appearance prediction function. For example, the first GUI may be the interface 501 of the prediction application shown in FIG. 5, and the interface 501 includes the button 502 for the appearance prediction function. If it is detected that the user performs a touch event on the button, the electronic device may obtain the user's health data and the user's current real first facial image, for example the image 601 shown in FIG. 6. The electronic device may then predict the user's second facial image in M years based on the first facial image and the health data. The graphical user interface may further include a second GUI displayed on the touchscreen. For example, the second GUI may be the interface 1001 shown in FIG. 13B, which includes the user's first facial image 1006 and second facial image 1002; or the second GUI may be the interface 1001 shown in (a) of FIG. 13A, which includes the user's second facial image 1002.
In some other embodiments, the graphical user interface includes: a first GUI displayed on the touchscreen, where the first GUI includes a notification message indicating that the appearance prediction is complete. For example, after the electronic device obtains the user's health data, if it detects that the health data does not satisfy the preset condition, the electronic device may predict the user's aged second facial image in M years based on the user's first facial image and the health data; or, if it detects that the health data satisfies the preset condition, the electronic device may predict the user's de-aged second facial image in M years based on the user's first facial image and the health data. The electronic device may then display the notification message. If it is detected that the user performs a touch event on the notification message, the electronic device may display a second GUI on the touchscreen, where the second GUI includes the second facial image predicted for the user. Of course, the second GUI may also include the user's first facial image, etc.; the embodiments of this application impose no limitation on this.
From the description of the above implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (21)

  1. A facial appearance prediction method, characterized in that it comprises:
    acquiring, by an electronic device, a first image, the first image including a first facial image of a user;
    acquiring, by the electronic device, health data of the user within a preset time, the health data including at least one of exercise data, sleep data, nutritional intake data of the user, or data on the duration of use of the electronic device;
    predicting, by the electronic device, the user's appearance based on the health data and the first facial image to obtain a second image, the second image including a second facial image of the user;
    displaying, by the electronic device, a first interface, the first interface including the second facial image, or the first interface including the first facial image and the second facial image, the first facial image being different from the second facial image.
  2. The method according to claim 1, wherein, before the electronic device acquires the first image, the method further comprises:
    displaying, by the electronic device, a second interface of a prediction application, the second interface including a button for an appearance prediction function;
    wherein acquiring the first image by the electronic device comprises:
    in response to the user's operation of tapping the button, acquiring, by the electronic device, the first image using a camera; or
    in response to the user's operation of tapping the button, acquiring, by the electronic device, a photo from a gallery application as the first image.
  3. The method according to claim 1, wherein, before the electronic device acquires the first image, the method further comprises:
    determining, by the electronic device, that the health data satisfies a preset condition, the preset condition including the health data being greater than a preset value, or the health data being less than a preset value.
  4. The method according to claim 3, wherein, after the electronic device predicts the user's appearance based on the health data and the first facial image to obtain the second image, the method further comprises:
    displaying, by the electronic device, a notification message indicating that the appearance prediction is complete;
    wherein displaying the first interface by the electronic device comprises:
    in response to the user's operation of opening the notification message, opening, by the electronic device, the prediction application and displaying the first interface of the prediction application.
  5. The method according to any one of claims 1-4, wherein predicting, by the electronic device, the user's appearance based on the health data and the first facial image to obtain the second image comprises:
    determining, by the electronic device, a corresponding age influence value K according to the health data, the age influence value being a deviation value that is positively or negatively correlated with the user's current age;
    predicting, by the electronic device, based on the first facial image, the second facial image of the user in M+K years to obtain the second image containing the second facial image, where M is a default value or a value set by the user.
  6. The method according to any one of claims 1-4, wherein predicting, by the electronic device, the user's appearance based on the health data and the first facial image to obtain the second image comprises:
    predicting, by the electronic device, based on the first facial image, a third facial image of the user in M years, where M is a default value or a value set by the user;
    adding, by the electronic device, a corresponding appearance effect to the third facial image according to the health data to obtain the second facial image of the user in M years, the appearance effect including a rejuvenating effect or an aging effect.
  7. The method according to any one of claims 1-6, wherein the first interface includes the second facial image and a first switch button;
    wherein, after the electronic device displays the first interface, the method further comprises:
    in response to the user's operation of tapping the first switch button, switching, by the electronic device, the displayed second facial image to the first facial image.
  8. The method according to any one of claims 1-6, wherein the first interface includes the second facial image and a second switch button;
    wherein, after the electronic device displays the first interface, the method further comprises:
    in response to the user's operation of tapping the second switch button, switching, by the electronic device, the displayed second image to a standard facial image, the standard facial image corresponding to preset standard health data.
  9. The method according to any one of claims 1-8, wherein displaying the first interface by the electronic device comprises:
    marking, by the electronic device, in the second facial image of the first interface, the appearance change corresponding to the health data.
  10. The method according to any one of claims 1-9, wherein the first interface further includes methods or suggestions for adjusting the user's lifestyle.
  11. The method according to any one of claims 1-10, wherein the first interface further includes an aging progress bar and a slider, and the second facial image includes a first predicted image and a second predicted image of the user's face;
    wherein displaying the second facial image in the first interface by the electronic device comprises:
    if it is detected that the slider is dragged to a first position of the aging progress bar, displaying, by the electronic device, the first predicted image corresponding to the first position, the first predicted image being the predicted facial image of the user after a first time period;
    if it is detected that the slider is dragged to a second position of the aging progress bar, displaying, by the electronic device, the second predicted image corresponding to the second position, the second predicted image being the predicted facial image of the user after a second time period.
  12. The method according to any one of claims 1-11, wherein the second image further includes a body shape template predicted for the user for a period of time later, the body shape template corresponding to the health data, and the body shape template including a fattening template or a slimming template.
  13. The method according to any one of claims 1-12, wherein acquiring, by the electronic device, the health data of the user within the preset time comprises:
    acquiring, by the electronic device, the health data of the user within the preset time from a wearable device.
  14. A facial appearance prediction method, characterized in that it comprises:
    acquiring, by an electronic device, health data of a user within a preset time, the health data including at least one of exercise data, sleep data, nutritional intake data of the user, or data on the duration of use of the electronic device;
    acquiring, by the electronic device, a first image, the first image including a first facial image of the user, the first image being acquired by the electronic device using a camera or being a photo acquired by the electronic device from a gallery application;
    if the health data does not satisfy a preset condition, displaying, by the electronic device, in one interface, the first facial image and a second facial image predicted for the user for a period of time later, the second facial image being a predicted image obtained by aging the first facial image;
    if the health data satisfies the preset condition, displaying, by the electronic device, in one interface, the first facial image and a third facial image predicted for the user for a period of time later, the third facial image being a predicted image obtained by de-aging the first facial image.
  15. The method according to claim 14, wherein,
    if the health data does not satisfy the preset condition, the interface further includes a first body shape template carrying the first facial image and a second body shape template carrying the second facial image, the second body shape template being the result of the first body shape template becoming fatter;
    if the health data satisfies the preset condition, the interface further includes the first body shape template carrying the first facial image and a third body shape template carrying the third facial image, the third body shape template being the result of the first body shape template becoming thinner.
  16. The method according to claim 14 or 15, wherein the period of time is M years, M being a default value or a value set by the user;
    if the health data does not satisfy the preset condition, the method further comprises:
    performing, by the electronic device, aging processing on the first facial image based on the health data and the M years to obtain the second facial image of the user in M years;
    if the health data satisfies the preset condition, the method further comprises:
    performing, by the electronic device, de-aging processing on the first facial image based on the health data and the M years to obtain the third facial image of the user in M years.
  17. An electronic device, characterized in that it comprises:
    a touchscreen, the touchscreen including a touch-sensitive surface and a display;
    one or more processors;
    one or more memories;
    one or more sensors;
    and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, and the one or more computer programs include instructions which, when executed by the electronic device, cause the electronic device to perform the facial appearance prediction method according to any one of claims 1-16.
  18. A computer-readable storage medium storing instructions, wherein, when the instructions are run on an electronic device, the electronic device is caused to perform the facial appearance prediction method according to any one of claims 1-16.
  19. A computer program product containing instructions, wherein, when the computer program product is run on an electronic device, the electronic device is caused to perform the facial appearance prediction method according to any one of claims 1-16.
  20. 一种图形用户界面GUI,所述图形用户界面存储在电子设备中,所述电子设备包括触摸屏、存储器、处理器,所述处理器用于执行存储在所述存储器中的一个或多个计算机程序,其特征在于,所述图形用户界面包括:A graphical user interface GUI, the graphical user interface is stored in an electronic device, the electronic device includes a touch screen, a memory, and a processor, and the processor is used to execute one or more computer programs stored in the memory, It is characterized in that the graphical user interface includes:
    显示在所述触摸屏上的第一GUI,所述第一GUI中包括容貌预测功能的按钮;A first GUI displayed on the touch screen, the first GUI including a button for a facial prediction function;
    响应于针对所述按钮的触摸事件,在所述触摸屏上显示第二GUI,所述第二GUI中包括用户的第一面部图像和第二面部图像,或者,所述第二GUI中包括用户的第二面部图像;In response to the touch event for the button, a second GUI is displayed on the touch screen, the second GUI includes the user's first facial image and the second facial image, or the second GUI includes the user ’S second facial image;
    其中,所述第一面部图像为所述用户真实的面部图像,所述第二面部图像为所述电子设备基于所述用户的健康数据和所述第一面部图像为所述用户预测出一段时间后的面部图像。Wherein, the first facial image is a real facial image of the user, and the second facial image is predicted by the electronic device for the user based on the health data of the user and the first facial image Face image after some time.
  21. 一种图形用户界面GUI,所述图形用户界面存储在电子设备中,所述电子设备包括触摸屏、存储器、处理器,所述处理器用于执行存储在所述存储器中的一个或多个计算机程序,其特征在于,所述图形用户界面包括:A graphical user interface GUI, the graphical user interface is stored in an electronic device, the electronic device includes a touch screen, a memory, and a processor, and the processor is used to execute one or more computer programs stored in the memory, It is characterized in that the graphical user interface includes:
    显示在所述触摸屏上的第一GUI,所述第一GUI中包括容貌预测完成的通知消息;A first GUI displayed on the touch screen, the first GUI including a notification message that the appearance prediction is completed;
    响应于针对所述通知消息的触摸事件,在所述触摸屏上显示第二GUI,所述第二GUI中包括用户的第一面部图像和第二面部图像,或者,所述第二GUI中包括用户的第二面部图像;In response to the touch event for the notification message, a second GUI is displayed on the touch screen, the second GUI includes the first facial image and the second facial image of the user, or the second GUI includes The second facial image of the user;
    其中,所述第一面部图像为所述用户真实的面部图像,所述第二面部图像为所述电子设备基于所述用户的健康数据和所述第一面部图像为所述用户预测出一段时间后的面部图像。Wherein, the first facial image is a real facial image of the user, and the second facial image is predicted by the electronic device for the user based on the health data of the user and the first facial image Face image after some time.
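To make the claimed flow easier to follow, the sketch below illustrates the behaviour the preceding claims describe: a prediction is triggered (the button of claim 20 or the notification of claim 21), the device combines the user's real facial image with health data, applies an aging or de-aging adjustment scaled by a number of years, and then presents the real and predicted facial images together. This is purely illustrative; the function names (habit_score, predict_face, on_prediction_triggered), the health-data fields, and the pixel-blending rule are hypothetical placeholders, not the algorithm specified by the application.

import numpy as np

def habit_score(health_data: dict) -> float:
    # Hypothetical score in [0, 1]; higher means healthier living habits.
    sleep = min(health_data.get("avg_sleep_hours", 7.0) / 8.0, 1.0)
    exercise = min(health_data.get("weekly_exercise_hours", 3.0) / 5.0, 1.0)
    smoking = 0.0 if health_data.get("smokes", False) else 1.0
    return float(np.mean([sleep, exercise, smoking]))

def predict_face(first_face: np.ndarray, health_data: dict, years: int,
                 de_age: bool = False) -> np.ndarray:
    # Toy stand-in for the aging / de-aging processing: blend the real facial
    # image toward a crude "older" or "younger" target, with a strength that
    # grows with the number of years and depends on the habit score.
    img = first_face.astype(np.float32)
    score = habit_score(health_data)
    strength = float(np.clip((years / 30.0) * (score if de_age else 1.0 - score), 0.0, 1.0))
    if de_age:
        target = 0.7 * img + 0.3 * img.mean()                        # placeholder "younger" look: softer contrast
    else:
        target = np.clip((img - 128.0) * 1.25 + 112.0, 0.0, 255.0)   # placeholder "older" look: darker, harsher contrast
    return ((1.0 - strength) * img + strength * target).astype(np.uint8)

def on_prediction_triggered(first_face: np.ndarray, health_data: dict, years: int):
    # GUI flow of claims 20/21: the touch event on the button (or notification)
    # leads to a second GUI showing the real and the predicted facial images.
    second_face = predict_face(first_face, health_data, years)
    return first_face, second_face

if __name__ == "__main__":
    face = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in for a captured facial image
    real, predicted = on_prediction_triggered(
        face, {"avg_sleep_hours": 5.5, "weekly_exercise_hours": 1.0, "smokes": True}, years=10)
    print(real.shape, predicted.shape)

In the claimed method the aging or de-aging transform would be a model applied by the electronic device rather than this pixel blend, and the health data would be collected by the device's sensors over the preset time rather than supplied as a hand-typed dictionary.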
PCT/CN2019/120085 2019-02-26 2019-11-22 Facial appearance prediction method and electronic device WO2020173152A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910142345.3A CN109994206A (en) 2019-02-26 2019-02-26 A kind of appearance prediction technique and electronic equipment
CN201910142345.3 2019-02-26

Publications (1)

Publication Number Publication Date
WO2020173152A1 true WO2020173152A1 (en) 2020-09-03

Family

ID=67130512

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120085 WO2020173152A1 (en) 2019-02-26 2019-11-22 Facial appearance prediction method and electronic device

Country Status (2)

Country Link
CN (1) CN109994206A (en)
WO (1) WO2020173152A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109994206A (en) * 2019-02-26 2019-07-09 华为技术有限公司 A kind of appearance prediction technique and electronic equipment
CN111627557A (en) * 2020-05-26 2020-09-04 闻泰通讯股份有限公司 Health condition feedback method, device, equipment and storage medium
CN111767676A (en) * 2020-06-30 2020-10-13 北京百度网讯科技有限公司 Method and device for predicting appearance change operation result
CN112464885A (en) * 2020-12-14 2021-03-09 上海交通大学 Image processing system for future change of facial color spots based on machine learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425138B (en) * 2008-11-18 2011-05-18 北京航空航天大学 Human face aging analogue method based on face super-resolution process
CN105787974B (en) * 2014-12-24 2018-12-25 中国科学院苏州纳米技术与纳米仿生研究所 Bionic human face aging model method for building up
CN108143392B (en) * 2017-12-06 2021-07-02 懿奈(上海)生物科技有限公司 Skin state detection method
CN108734127B (en) * 2018-05-21 2021-01-05 深圳市梦网科技发展有限公司 Age identification value adjusting method, age identification value adjusting device, age identification value adjusting equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120322938A1 (en) * 2011-06-15 2012-12-20 Ling Tan Composition Of Secondary Amine Adducts, Amine Diluents and Polyisocyanates
CN103646245A (en) * 2013-12-18 2014-03-19 清华大学 Method for simulating child facial shape
CN108140110A (en) * 2015-09-22 2018-06-08 韩国科学技术研究院 Age conversion method based on face's each position age and environmental factor, for performing the storage medium of this method and device
WO2017191847A1 (en) * 2016-05-04 2017-11-09 理香 大熊 Future vision prediction device
CN108363964A (en) * 2018-01-29 2018-08-03 杭州美界科技有限公司 A kind of pretreated wrinkle of skin appraisal procedure and system
CN109994206A (en) * 2019-02-26 2019-07-09 华为技术有限公司 A kind of appearance prediction technique and electronic equipment

Also Published As

Publication number Publication date
CN109994206A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
WO2020134891A1 (en) Photo previewing method for electronic device, graphical user interface and electronic device
CN110134316B (en) Model training method, emotion recognition method, and related device and equipment
WO2020173152A1 (en) Facial appearance prediction method and electronic device
CN109793498B (en) Skin detection method and electronic equipment
WO2021036585A1 (en) Flexible screen display method and electronic device
WO2021013132A1 (en) Input method and electronic device
WO2020029306A1 (en) Image capture method and electronic device
WO2021169394A1 (en) Depth-based human body image beautification method and electronic device
WO2022017261A1 (en) Image synthesis method and electronic device
WO2021052139A1 (en) Gesture input method and electronic device
WO2021213031A1 (en) Image synthesis method and related apparatus
WO2022100685A1 (en) Drawing command processing method and related device therefor
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN114466128A (en) Target user focus-following shooting method, electronic device and storage medium
WO2020042112A1 (en) Terminal and method for evaluating and testing ai task supporting capability of terminal
WO2023241209A9 (en) Desktop wallpaper configuration method and apparatus, electronic device and readable storage medium
CN114242037A (en) Virtual character generation method and device
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN113973189A (en) Display content switching method, device, terminal and storage medium
CN114115512A (en) Information display method, terminal device, and computer-readable storage medium
CN114816610A (en) Page classification method, page classification device and terminal equipment
WO2023273543A1 (en) Folder management method and apparatus
WO2022143921A1 (en) Image reconstruction method, and related apparatus and system
WO2022007707A1 (en) Home device control method, terminal device, and computer-readable storage medium
WO2021208677A1 (en) Eye bag detection method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19917200
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19917200
    Country of ref document: EP
    Kind code of ref document: A1