CN115599198A - Figure image display method, electronic device and storage medium - Google Patents

Figure image display method, electronic device and storage medium

Info

Publication number
CN115599198A
Authority
CN
China
Prior art keywords
user, portrait, signal, interface, display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110717874.9A
Other languages
Chinese (zh)
Inventor
曾恂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110717874.9A priority Critical patent/CN115599198A/en
Publication of CN115599198A publication Critical patent/CN115599198A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Abstract

The embodiments of this application provide a person portrait display method, an electronic device, and a storage medium, relating to the field of communication technology. The method includes: acquiring a physiological signal and/or an environmental signal of a user; generating a first portrait based on the physiological signal and/or the environmental signal of the user; and displaying the first portrait. With this method, a smart watch can dynamically display a character expression that follows the user's mood, making the watch desktop more vivid and improving the user experience.

Description

Figure image display method, electronic device and storage medium
Technical Field
The embodiments of this application relate to the field of communication technology, and in particular to a person portrait display method, an electronic device, and a storage medium.
Background
With the development of information technology, terminal capabilities have become increasingly diverse, and terminal desktops have taken on richer and richer forms. Setting a character expression as the desktop is one of these forms. At present, a terminal can display a certain static or dynamic expression: the static expression is preset by the user, while the dynamic expression is either preset by the user or changes randomly. Both display modes give the desktop little user stickiness, and the user experience is poor.
Disclosure of Invention
The embodiments of this application provide a person portrait display method, an electronic device, and a storage medium, offering a way to dynamically display the user's expression on the device desktop. The desktop expression can be adjusted dynamically according to the user's mood, so that the mood is reflected in real time, the desktop is more vivid, and the user experience is improved.
In a first aspect, an embodiment of the present application provides a person portrait display method, applied to an electronic device, and including:
acquiring a physiological signal and/or an environmental signal of a user; generating a first portrait based on the physiological signal and/or the environmental signal of the user; and displaying the first portrait.
In the embodiments of this application, the user's mood is identified by detecting the user's physiological signal and/or environmental signal in real time and is then reflected on the display screen of the electronic device. Because the desktop expression is dynamically adjusted to the user's mood, the mood is reflected in real time, the desktop is more vivid, and the user experience is improved.
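For a concrete feel of this flow, the following Python sketch walks through the three steps of the first aspect. It is a minimal illustration only; every function and field name in it is an assumption and does not come from the application itself.

```python
# Minimal sketch of the three-step method of the first aspect (illustrative names only).

def acquire_signals():
    # Placeholder: a real device would read its heart-rate sensor, microphone,
    # and a weather/temperature source here (see the hardware description below).
    physiological = {"heart_rate": 72}
    environmental = {"weather": "sunny", "temperature_c": 24}
    return physiological, environmental

def generate_portrait(physiological, environmental):
    # Placeholder mapping; the detailed embodiment maps the signals to an
    # expression and a background color (see Tables 1 and 2 below).
    expression = "relaxed" if physiological.get("heart_rate", 0) < 90 else "soothing"
    return {"expression": expression, "weather": environmental.get("weather")}

def display_portrait(portrait):
    print(f"Showing '{portrait['expression']}' expression on the watch face")

physiological, environmental = acquire_signals()
display_portrait(generate_portrait(physiological, environmental))
```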
In one possible implementation, the physiological signal includes a heart rate or a sound signal of the user, and the environmental signal includes weather information or temperature information.
In the embodiments of this application, the user's person portrait is derived from the user's heart rate, sound signal, weather information, or temperature information, which makes the portrait more flexible.
In one possible implementation, the person portrait includes an expression corresponding to the physiological signal and/or the environmental signal.
In the embodiments of this application, representing the portrait through an expression visually conveys the user's mood and makes the screen interface more vivid, which improves the user experience.
In one possible implementation, the person portrait includes a color corresponding to the physiological signal and/or the environmental signal, and the color is the background color of the screen interface.
In the embodiments of this application, representing the person portrait through the background color of the screen embodies the portrait in a simpler way.
In one possible implementation, generating the first portrait based on the physiological signal and/or the environmental signal of the user includes:
determining a user mood or a user state based on the physiological signal and/or the environmental signal of the user; and
generating the first portrait based on the user mood or the user state.
In the embodiments of this application, determining the person portrait from the user's mood or state allows the portrait to be generated more accurately.
In one possible implementation, a reminder is sent to the user based on the mood or state of the user.
In the embodiments of this application, sending a reminder makes the user aware of their current mood or state so that they can adjust it, which improves the user experience.
In one possible implementation, displaying the first person portrait includes:
in response to the detected first operation of the user, the screen is lit and the first person portrait is displayed.
In the embodiments of this application, lighting the screen and displaying the user's portrait only upon a user operation saves power on the electronic device.
In one possible implementation manner, the method further includes:
and responding to the detected second operation of the user, updating the currently displayed first portrait to obtain a second portrait, and displaying the second portrait, wherein the second portrait is different from the first portrait.
In the embodiment of the application, the figure portrait of the user is updated through user operation, so that the user can relieve the emotion of the user after seeing the updated figure portrait, and the user experience can be improved.
In one possible implementation, displaying the first person portrait includes:
and displaying the first portrait on a screen interface of the smart watch.
In the embodiment of the application, the figure portrait of the user is directly displayed through the intelligent watch, and the figure portrait of the user can be displayed more quickly.
In one possible implementation, displaying the first person portrait includes:
and sending a portrait display instruction to the second device, wherein the portrait display instruction is used for instructing the second device to display the first portrait on a screen interface of the second device.
In the embodiment of the application, the person portrait of the user is displayed through the second device, the person portrait of the user can be displayed under the condition that the smart watch does not have a display function, and therefore the problem that the person portrait of the user cannot be displayed can be avoided.
In a second aspect, an embodiment of the present application provides a person portrait display device, which is applied to a smart watch, and includes:
the acquisition module is used for acquiring physiological signals and/or environmental signals of a user;
a generating module for generating a first portrait based on a physiological signal and/or an environmental signal of a user;
and the display module is used for displaying the first person portrait.
In one possible implementation, the physiological signal includes a heart rate or a sound signal of the user, and the environmental signal includes weather information or temperature information.
In one possible implementation, the person representation includes an expression corresponding to the physiological signal and/or the environmental signal.
In one possible implementation, the person image includes a color corresponding to the physiological signal and/or the environmental signal, and the color is a background color of the screen interface.
In one possible implementation manner, the generating module is further configured to determine a mood or a state of the user based on the physiological signal and/or the environmental signal of the user; a first portrait is generated based on a mood or status of a user.
In one possible implementation manner, the apparatus further includes:
and the reminding module is used for sending a reminder to the user based on the mood or the state of the user.
In one possible implementation manner, the display module is further configured to light up the screen and display the first person representation in response to the detected first operation of the user.
In one possible implementation manner, the apparatus further includes:
and the updating module is used for responding to the detected second operation of the user, updating the currently displayed first person portrait to obtain a second person portrait, and displaying the second person portrait, wherein the second person portrait is different from the first person portrait.
In one possible implementation manner, the display module is further configured to display the first person portrait on a screen interface of the smart watch.
In one possible implementation manner, the display module is further configured to send a portrait display instruction to the second device, where the portrait display instruction is used to instruct the second device to display the first portrait on a screen interface of the second device.
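The module split of the second aspect can be pictured roughly as follows. This is a hedged structural sketch only; all class and method names, and the simple threshold logic inside them, are assumptions for illustration, not the apparatus itself.

```python
# Illustrative layout of the person portrait display apparatus (second aspect).
# Names and logic are assumptions; a real apparatus would bind to the watch's
# sensor, notification, and display services.

class AcquisitionModule:
    def acquire(self):
        """Return (physiological signal, environmental signal) of the user."""
        return {"heart_rate": 72, "sound_event": None}, {"weather": "sunny", "temperature_c": 24}

class GenerationModule:
    def generate(self, physiological, environmental):
        """Determine the user's mood or state and build the first person portrait."""
        mood = "relaxed" if physiological.get("heart_rate", 0) < 90 else "stressed"
        return {"expression": mood, "background": "warm" if mood == "relaxed" else "cool"}

class ReminderModule:
    def remind(self, mood):
        """Send a reminder to the user based on the mood or state."""
        if mood == "stressed":
            print("Reminder: your pressure looks high, consider a short break")

class UpdateModule:
    def update(self, first_portrait):
        """Produce a second portrait, different from the first, after a second user operation."""
        return dict(first_portrait, expression="soothing")

class DisplayModule:
    def display(self, portrait, on_local_screen=True):
        """Show the portrait on the watch face, or send a display instruction to a second device."""
        target = "smart watch screen" if on_local_screen else "second device"
        print(f"Display {portrait} on {target}")
```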
In a third aspect, an embodiment of the present application provides a smart watch, including:
a memory for storing computer program code, the computer program code including instructions that, when read from the memory by the smart watch, cause the smart watch to perform the following steps:
acquiring a physiological signal and/or an environmental signal of a user;
generating a first portrait based on a physiological signal and/or an environmental signal of a user;
displaying the first portrait.
In one possible implementation, the physiological signal includes a heart rate or a sound signal of the user, and the environmental signal includes weather information or temperature information.
In one possible implementation, the person representation includes an expression corresponding to the physiological signal and/or the environmental signal.
In one possible implementation, the person image includes a color corresponding to the physiological signal and/or the environmental signal, and the color is a background color of the screen interface.
In one possible implementation manner, when executed by the smart watch, the instruction causes the smart watch to perform the step of generating the first person portrait based on the physiological signal and/or the environmental signal of the user, including:
determining a user mood or a user state based on a physiological signal and/or an environmental signal of the user;
a first portrait is generated based on a user mood or a user status.
In one possible implementation manner, when the instruction is executed by the smart watch, the smart watch further executes the following steps:
a reminder is sent to the user based on the user mood or the user status.
In one possible implementation manner, when the instruction is executed by the smart watch, the step of displaying the first portrait by the smart watch includes:
in response to the detected first operation of the user, the screen is lit and the first character representation is displayed.
In one possible implementation manner, when the instruction is executed by the smart watch, the smart watch further executes the following steps:
and responding to the detected second operation of the user, updating the currently displayed first portrait to obtain a second portrait, and displaying the second portrait, wherein the second portrait is different from the first portrait.
In one possible implementation manner, when the instruction is executed by the smart watch, the step of displaying the first portrait by the smart watch includes:
and displaying the first portrait on a screen interface of the smart watch.
In one possible implementation manner, when the instruction is executed by the smart watch, the step of displaying the first portrait by the smart watch includes:
and sending a portrait display instruction to the second device, wherein the portrait display instruction is used for instructing the second device to display the first portrait on a screen interface of the second device.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program, which, when run on a computer, causes the computer to perform the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program, which is configured to perform the method according to the first aspect when the computer program is executed by a computer.
In a possible design, the program of the fifth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
Drawings
Fig. 1 is a schematic view of an application scenario of an embodiment provided in the present application;
fig. 2 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for displaying a person image according to an embodiment of the present disclosure;
FIGS. 4a-7b are schematic diagrams illustrating display effects of a person portrait according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a display device for displaying a person image according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, A and/or B may mean: A alone, both A and B, or B alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
With the development of information technology, terminal capabilities have become increasingly diverse, and terminal desktops have taken on richer and richer forms. Setting a character expression as the desktop is one of these forms. At present, a terminal can display a certain static or dynamic expression: the static expression is preset by the user, while the dynamic expression is either preset by the user or changes randomly. Both display modes give the desktop little user stickiness, and the user experience is poor.
Based on the above problem, the embodiments of this application provide a person portrait display method in which the device dynamically adjusts the desktop expression according to the user's mood, so that the mood is reflected in real time, the desktop is more vivid, and the user experience is improved.
Referring now to Figs. 1 to 7, a person portrait display method according to an embodiment of the present application is described. The method can be applied to a first device 10, which can be a wearable device with a display screen, such as a smart watch or a smart bracelet. The embodiments of the present application do not limit the specific form of the first device 10. When the first device 10 has a display screen, it can display the expression directly on its desktop and dynamically adjust the expression according to the user's mood.
Alternatively, the first device 10 may not have display capability, for example, it may have no display screen. In that case, the first device 10 can be linked with a second device 20 to display the user's expression. For example, the first device 10 may send a portrait display instruction to the second device 20, instructing the second device 20 to display the corresponding portrait. The second device 20 may be an electronic device with display capability, such as a mobile phone, a tablet, a PC, or a smart screen. Fig. 1 is a schematic view of an application scenario in which the first device 10 and the second device 20 are linked.
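As one hedged illustration of such a linkage, the sketch below sends a portrait display instruction from the first device to a second device over a plain TCP connection. The application does not specify a transport or message format, so the JSON payload, port number, and helper name here are assumptions.

```python
import json
import socket

def send_portrait_display_instruction(second_device_ip, portrait, port=5555):
    # Assumed message format: a small JSON object naming the expression and the
    # background color the second device should render on its screen interface.
    instruction = {
        "type": "portrait_display",
        "expression": portrait["expression"],
        "background_rgb": list(portrait["background_rgb"]),
    }
    with socket.create_connection((second_device_ip, port), timeout=3) as conn:
        conn.sendall(json.dumps(instruction).encode("utf-8"))

# Example (hypothetical address): ask a phone on the local network to show a
# relaxed expression over a warm background color.
# send_portrait_display_instruction("192.168.1.20",
#                                   {"expression": "relaxed", "background_rgb": (255, 102, 0)})
```

In practice, a watch would more likely use Bluetooth or a vendor device-linking service; the socket here only stands in for some link between the first device 10 and the second device 20.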
An exemplary electronic device provided in the following embodiments of the present application is first described below with reference to fig. 2. Fig. 2 shows a schematic structural diagram of an electronic device 100, which electronic device 100 may be the first device 10 described above.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, a heart rate sensor 180Q, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system. In the embodiment of the present application, the processor 110 may be configured to perform steps 301 to 305 shown in fig. 3 below.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, a camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of receiving a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transmit data between the electronic device 100 and a peripheral device, or to connect earphones and play audio through them. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. In this embodiment, the display screen 194 may be used to display various expressions, where the expressions may be used to represent the mood or state of the user.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a MicroSD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor. In the embodiment of the present application, the internal memory 121 may be used to store preset expressions, such as a relaxing expression, a calming expression, a frustrating expression, a sweating expression, a coughing expression, a sneezing expression, and the like. In addition, the internal memory 121 may be further configured to store a mapping table between pressure values and colors and between pressure values and expressions.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on. In the embodiment of the present application, the microphone 170C can pick up the user's voice, from which the user's state at that time can be recognized, for example yawning, coughing, or sneezing.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip device, the electronic device 100 can detect the opening and closing of the flip cover using the magnetic sensor 180D, and then set features such as automatic unlocking of the flip cover according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs a boost on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature. In the embodiment of the present application, the temperature sensor 180J may be used to detect the climate ambient temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human body pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so that the heart rate detection function is realized. In the embodiment of the application, the blood pressure pulsation signal of the user can be detected through the bone conduction sensor 180M, so that the heart rate of the user can be acquired.
The heart rate sensor 180Q may acquire a photoplethysmography (PPG) signal of the user, and may acquire the heart rate of the user by detecting the PPG signal.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the electronic device 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards can be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 is also compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 100 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
FIG. 3 is a flowchart illustrating an embodiment of a method for displaying a person image according to the present application, including:
in step 301, the first device 10 acquires a physiological signal and/or an environmental signal of a user.
In particular, the first device 10 may be a wearable device, such as a smart bracelet, a smart watch, smart glasses, a smart helmet, or the like. When the user wears the first device 10, the first device 10 may collect the physiological signal or the environmental signal of the user in real time.
The physiological signal may include a pulse signal, a PPG signal, and a sound signal. The pulse signal may be detected by the bone conduction sensor 180M, for example, by the pulsation of a pulse, and the bone conduction sensor 180M acquires the pulse signal by bone conduction. The PPG signal may be detected by the heart rate sensor 180Q, for example, by photoplethysmography (PPG). The sound signal may be acquired by the microphone 170C.
Further, the environmental signal may include weather information or temperature information. The weather information may include weather conditions such as sunny days, cloudy days, rainy days, and the like. The temperature information can be used to characterize the weather ambient temperature of the day.
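For illustration, the signals collected in step 301 could be bundled as below. This is only a hedged sketch; the field names and types are assumptions, not structures defined by the application.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hedged sketch of the data gathered in step 301; field names are assumptions.

@dataclass
class PhysiologicalSignal:
    pulse: Optional[List[float]] = None   # pulse waveform from the bone conduction sensor 180M
    ppg: Optional[List[float]] = None     # PPG samples from the heart rate sensor 180Q
    sound: Optional[bytes] = None         # audio captured by the microphone 170C

@dataclass
class EnvironmentalSignal:
    weather: Optional[str] = None          # e.g. "sunny", "cloudy", "rainy"
    temperature_c: Optional[float] = None  # ambient temperature of the day
```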
Optionally step 302, the first device 10 converts the physiological signal of the user into a pressure value p of the user.
Optionally, after the first device 10 collects the physiological signal of the user, it may convert the physiological signal into a pressure value p of the user. The pressure value p may be used to represent the user's mood, for example joy, anger, or sadness. In a specific implementation, the mood can be embodied in the form of an expression. It is understood that joy, anger, sadness, and the like are merely examples and do not limit the embodiments of this application; other moods may be included in some embodiments.
The above conversion process is described by taking the physiological signal of the user as the PPG signal as an example. In a particular implementation, the heart rate value may be obtained by PPG signal calculation. The heart rate value may then be converted into a pressure value p.
Optionally, the PPG signal may also be converted into a Heart Rate Variability (HRV) signal, from which a corresponding pressure value p is calculated.
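The application does not give a concrete conversion formula, so the sketch below only illustrates the heart-rate path of step 302 with an assumed linear normalization onto the 0-100 pressure scale used in Tables 1 and 2 below; the HRV path would replace the input with an HRV metric but follow the same idea.

```python
# Hedged sketch: assumed linear mapping from heart rate to the pressure value p.
# The resting and maximum heart rates used here are illustrative defaults only.

def heart_rate_to_pressure(heart_rate_bpm, rest_bpm=60.0, max_bpm=120.0):
    ratio = (heart_rate_bpm - rest_bpm) / (max_bpm - rest_bpm)
    return max(0.0, min(100.0, ratio * 100.0))

print(heart_rate_to_pressure(75))   # -> 25.0 under the assumed scale
```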
Next, the conversion process is described using a sound signal as an example. In a specific implementation, when the user's sound signal is acquired, it can be recognized to determine whether the user is sighing, coughing, sneezing, or yawning. The user's sounds can be counted over a period of time to obtain the frequency of sighs, coughs, sneezes, or yawns, and the pressure value p of the user can then be determined from that frequency.
Optionally, the obtained PPG signal and the sound signal may also be combined to jointly determine the pressure value p of the user.
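A hedged sketch of the sound path and of one simple way to combine it with the heart-rate path is shown below. Event recognition (sigh, cough, sneeze, yawn) is assumed to happen elsewhere; the frequency-to-pressure mapping and the weighted combination are assumptions, since the application does not fix either.

```python
# Assumed mapping from the frequency of sighs/coughs/sneezes/yawns to the pressure value p.
def sound_events_to_pressure(events_per_hour, max_events_per_hour=12.0):
    return min(100.0, events_per_hour / max_events_per_hour * 100.0)

# Assumed weighted combination of a PPG-derived and a sound-derived pressure value.
def combined_pressure(p_from_ppg, p_from_sound, weight=0.5):
    return weight * p_from_ppg + (1 - weight) * p_from_sound

print(sound_events_to_pressure(6))    # -> 50.0
print(combined_pressure(25.0, 50.0))  # -> 37.5
```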
In step 303, the first device 10 generates and displays a person representation based on the pressure value p, the physiological signal and/or the environmental signal.
Specifically, the first device 10 may first generate the person portrait based on the pressure value p or the physiological signal, and then detect whether the screen is lit. If the screen is not lit, the portrait is not displayed; if the user performs an operation (for example, taps the screen, raises the wrist, or presses the crown or a button on the smart watch), the first device 10 lights the screen and displays the person portrait.
Taking the pressure value p as an example, after the first device 10 obtains the current pressure value p, a person portrait may be displayed on the screen interface of the first device 10 according to the pressure value p. The person portrait may include an expression, which characterizes the user's mood at that moment.
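The screen-wake behaviour just described can be pictured as follows. This is a hedged sketch; the device-facing calls (screen_is_lit, detected_user_operation, light_screen, show) are assumptions, and the portrait itself would be built with helpers like those sketched after Tables 1 and 2 below.

```python
# Hedged sketch of step 303: show an already generated portrait only when the
# screen is lit, or light it first in response to a user operation.

def display_when_appropriate(device, portrait):
    if device.screen_is_lit():
        device.show(portrait)
    elif device.detected_user_operation():  # tap screen, raise wrist, press crown/button
        device.light_screen()
        device.show(portrait)
    # Otherwise keep the portrait but leave the screen dark to save power.
```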
In a specific implementation, the mapping relationship between the pressure value p and the expression may be stored in advance in the memory of the first device 10. Illustratively, table 1 is a mapping table of pressure values p and expressions.
TABLE 1
p Expression
0-20 The current pressure is very low; the color temperature turns red, and the expression is energetic.
21-40 The current pressure is relatively low; the color temperature turns yellow, and the expression is mildly encouraging.
41-60 The current pressure is moderate; the color temperature is green, and the expression remains steady.
61-80 The current pressure is slightly high; the color temperature turns green, and the expression is slightly calming.
81-100 The current pressure is very high; the color temperature turns blue, and the expression is soothing.
It should be noted that the ranges of the pressure value p and the corresponding expressions are merely exemplary and do not limit the embodiments of the present application; in some embodiments, other value ranges and other types of expressions may also be used.
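A lookup over the ranges of table 1 could be sketched as follows; the short labels stand in for the table's expression descriptions and are paraphrases, not terms defined by this embodiment.

```python
def expression_for_pressure(p):
    """Return the expression category for a pressure value p, per the ranges of table 1."""
    table = [
        (20, "energetic"),     # 0-20: pressure very low
        (40, "encouraging"),   # 21-40: pressure relatively low
        (60, "steady"),        # 41-60: pressure moderate
        (80, "calming"),       # 61-80: pressure slightly high
        (100, "soothing"),     # 81-100: pressure very high
    ]
    for upper_bound, expression in table:
        if p <= upper_bound:
            return expression
    raise ValueError("pressure value p must be in [0, 100]")
```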
Optionally, the figure may further include a color, where the color may be the background color of the screen interface of the first device 10. The color temperature of the screen interface of the first device 10 may characterize the mood of the user; for example, a cool tone may be used to characterize a poor mood, and a warm tone may be used to characterize a better mood. In a specific implementation, the mapping relationship between the pressure value p and the color may also be stored in advance in the memory of the first device 10. Illustratively, table 2 is a mapping table of pressure values p and colors.
TABLE 2
p Color (R, G, B)
[0, 25) (R: 255, G: 10.2*p, B: 0)
[25, 50) (R: 10.2*(50-p), G: 255, B: 0)
[50, 75) (R: 0, G: 255, B: 10.2*(75-p))
[75, 100] (R: 0, G: 10.2*(100-p), B: 255)
As shown in table 2, the color temperature changes from warm to cold as the pressure value p increases.
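The piecewise formulas of table 2 translate directly into a background-color computation, as in the sketch below (RGB components rounded to integers).

```python
def color_for_pressure(p):
    """Return the (R, G, B) background color for pressure value p, following table 2."""
    if 0 <= p < 25:
        return (255, int(10.2 * p), 0)
    if 25 <= p < 50:
        return (int(10.2 * (50 - p)), 255, 0)
    if 50 <= p < 75:
        return (0, 255, int(10.2 * (75 - p)))
    if 75 <= p <= 100:
        return (0, int(10.2 * (100 - p)), 255)
    raise ValueError("pressure value p must be in [0, 100]")
```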
Alternatively, after obtaining the above-mentioned pressure value p, the first device 10 may also send a portrait display instruction to the second device 20, where the portrait display instruction may be used to instruct the second device 20 to display the portrait of the user. For convenience of explanation, the following takes the case where the first device 10 displays the person portrait of the user as an example, but the display is not limited to the screen interface of the first device 10.
A description will be given below of a manner of displaying a human figure based on the pressure value p, taking the first device 10 as an example of a display device. It is understood that the pressure value p may be obtained by converting the heart rate or the sound of the user.
If the pressure value p is obtained by converting the heart rate of the user, a portrait may be displayed on the screen interface of the first device 10 as follows.
The above-described manner of displaying a character image based on heart rate will now be described with reference to fig. 4 a-4 d.
As shown in fig. 4a, after the first device 10 obtains the heart rate of the user, the heart rate may be converted into a pressure value p. If the pressure value p is smaller than a preset first pressure threshold, it indicates that the current pressure of the user is small, and at this time the color temperature may be warm, that is, the background color of the screen interface of the first device 10 may be warm. Further, the screen interface of the first device 10 may display an interface 400, where the interface 400 may include a relaxed expression 401, which may be used to characterize the user's relaxed mood at this time. The smart watch interface may also display information such as time, temperature, weather, step count, noise decibel level, sunrise/sunset time, heart rate, pressure value, and user state. The user state can be obtained from the pressure value p (see table 1) or from the sound signal in step 301.
In fig. 4a, the character image is displayed as an animated face; further, the character image may also be displayed through the actions of an animated full-body figure.
As shown in fig. 4b, after the first device 10 obtains the heart rate of the user, the heart rate may be converted into a pressure value p, and if the pressure value p is greater than or equal to a preset first pressure threshold and smaller than a preset second pressure threshold, it indicates that the current pressure of the user is moderate, and at this time, the color temperature may be biased to neutral, that is, the background color of the screen interface of the first device 10 may be biased to neutral. Further, the screen interface of the first device 10 may display an interface 410, wherein the interface 410 may comprise a neutral expression 411, which neutral expression 411 may be used to characterize the user's moderate mood at the time.
As shown in fig. 4c, after the first device 10 obtains the heart rate of the user, the heart rate may be converted into a pressure value p, and if the pressure value p is greater than or equal to a preset second pressure threshold and smaller than a preset third pressure threshold, it indicates that the current pressure of the user is relatively large, and at this time, the color temperature may be relatively cool, that is, the background color of the screen interface of the first device 10 may be relatively cool. Additionally, the screen interface of the first device 10 may display an interface 420, wherein the interface 420 may include a frustrated expression 421, which frustrated expression 421 may be used to characterize the user's frustrated mood at this time due to the greater stress.
As shown in fig. 4d, after the first device 10 obtains the heart rate of the user, the heart rate may be converted into a pressure value p. If the pressure value p is greater than or equal to a preset third pressure threshold, it indicates that the current pressure of the user is very high, that is, the heart rate of the user is very high at this time; the color temperature may be cold, that is, the background color of the screen interface of the first device 10 may be cold. In addition, the screen interface of the first device 10 may display an interface 430, where the interface 430 may include an uncomfortable expression 431, and the uncomfortable expression 431 may be used to represent the uncomfortable mood of the user at this time due to the great stress.
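The branching of figs. 4a-4d can be summarized as the sketch below. The threshold values 30, 60, and 85 are placeholders; the embodiment only refers to preset first, second, and third pressure thresholds without fixing their values.

```python
def interface_for_pressure(p, first_threshold=30, second_threshold=60, third_threshold=85):
    """Select a background tone and expression per figs. 4a-4d (threshold values are placeholders)."""
    if p < first_threshold:
        return {"tone": "warm", "expression": "relaxed"}        # fig. 4a, interface 400
    if p < second_threshold:
        return {"tone": "neutral", "expression": "neutral"}     # fig. 4b, interface 410
    if p < third_threshold:
        return {"tone": "cool", "expression": "frustrated"}     # fig. 4c, interface 420
    return {"tone": "cold", "expression": "uncomfortable"}      # fig. 4d, interface 430
```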
If the pressure value p is derived from the user's sound, a portrait may be displayed on the screen interface of the first device 10 as follows. In particular implementations, the first device 10 may also detect sound signals of the user (e.g., a sigh, yawn, cough, or sneeze) in real time via the microphone. When the first device 10 detects a sound signal of the user, the sound signal may be recognized, for example, as a yawn, a cough, or a sneeze, and a person image corresponding to the sound signal may be displayed on the screen interface of the first device 10. The person portrait may be used to characterize the current state of the user (e.g., a sighing, yawning, coughing, or sneezing state).
It should be understood that the above sigh, yawning, coughing or sneezing sound is only an exemplary illustration and is not a limitation of the embodiment of the present application, and in some embodiments, other sound signals may be included, that is, a person image corresponding to the sound signal may be displayed on the desktop of the first device 10 according to the detected other sound signals. The above-mentioned manner of recognizing the sound signal can be realized by a speech recognition model, and the specific algorithm of the speech recognition model is not particularly limited in this application.
In a specific implementation, the first device 10 may pre-store, in the memory, the audio features of the sound signal of the user wearing the first device 10. After the first device 10 collects the current sound signal, it may extract the audio features of the current sound signal and compare them with the pre-stored audio features, so as to determine whether the sound signal was emitted by the user wearing the first device 10 and avoid misjudging sounds from other users. It should be understood that pre-storing the audio features of the wearer's sound signal is a preferred embodiment and does not limit the embodiments of the present application; in some embodiments, the current sound signal may be collected and recognized directly to obtain the state of the user, and the person portrait corresponding to that state may be displayed on the screen interface of the first device 10, that is, the sound signal does not need to be verified.
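One possible realization of this verification step is sketched below, comparing an averaged MFCC vector of the current sound with a pre-stored vector of the wearer by cosine similarity; the feature choice, the librosa dependency, and the 0.8 threshold are assumptions, since the embodiment does not specify which audio features are used.

```python
import numpy as np
import librosa  # assumed available on the device or companion side for feature extraction

def is_wearer(current_audio, sample_rate, enrolled_mfcc, threshold=0.8):
    """Check whether the current sound signal was emitted by the enrolled wearer."""
    mfcc = librosa.feature.mfcc(y=current_audio, sr=sample_rate, n_mfcc=20).mean(axis=1)
    similarity = np.dot(mfcc, enrolled_mfcc) / (
        np.linalg.norm(mfcc) * np.linalg.norm(enrolled_mfcc)
    )
    return similarity >= threshold
```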
The above-described manner of displaying a person image based on sound will now be described with reference to fig. 5a to 5 e.
As shown in fig. 5a, when the first device 10 detects a sound signal, the detected sound signal may be verified to determine that the sound signal is emitted by the user wearing the first device 10. If it is determined that the sound signal is emitted by the user wearing the first device 10, the sound signal may be further identified to determine the state of the user corresponding to the sound signal. Taking the cough of the user as an example, when the first device 10 recognizes the sound signal and determines that the user is coughing, an interface 500 may be displayed on the screen interface of the first device 10, where the interface 500 includes a cough expression 501, and the cough expression 501 may be used to characterize the motion of the user at the time of coughing.
As shown in fig. 5b, taking a sneeze as an example, when the first device 10 recognizes the sound signal and determines that the user is sneezing, an interface 510 may be displayed on the screen interface of the first device 10, where the interface 510 includes a sneezing expression 511, and the sneezing expression 511 may be used to characterize the action of the user sneezing at this time. Alternatively, the frequency of the user's sneezing within a preset time period may be counted; if the counted frequency exceeds a preset first frequency threshold, it may indicate that the user is unwell, and at this time an uncomfortable expression as shown in fig. 4d may be displayed on the screen interface of the first device 10.
As shown in fig. 5c, taking a sigh as an example, when the first device 10 recognizes the sound signal and determines that the user is sighing, an interface 520 may be displayed on the screen interface of the first device 10, where the interface 520 includes a sighing expression 521, and the sighing expression 521 may be used to characterize the action of the user sighing at this time. Alternatively, the frequency of the user's sighs within a preset time period may be counted; if the counted frequency exceeds a preset second frequency threshold, it may indicate that the user's mood is low, and at this time a depressed expression as shown in fig. 4c may be displayed on the screen interface of the first device 10.
As shown in fig. 5d, taking the yawning of the user as an example, when the first device 10 recognizes the sound signal and determines that the user is yawning, an interface 530 may be displayed on the screen interface of the first device 10, where the interface 530 includes a yawning expression 531, and the yawning expression 531 may be used to represent the action of the user yawning at this time. Optionally, the frequency of the user's yawning in a preset time period may be counted, and if the counted frequency of the user's yawning exceeds a preset third frequency threshold, it may be indicated that the state of the user is tired, at this time, an interface 540 shown in fig. 5e may be displayed on the screen interface of the first device 10, where the interface 540 includes a tired expression 541, and the tired expression 541 may be used to represent the tired state of the user at this time.
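The frequency counting used in figs. 5b-5e can be sketched as a sliding-window counter that escalates the displayed state once a per-event threshold is exceeded; the window length and the threshold values below are placeholders for the preset first, second, and third frequency thresholds.

```python
from collections import deque
import time

class SoundEventTracker:
    """Counts recognized sound events in a sliding window and escalates the displayed state."""

    def __init__(self, window_s=3600, thresholds=None):
        self.window_s = window_s
        # Placeholder values for the first/second/third frequency thresholds.
        self.thresholds = thresholds or {"sneeze": 5, "sigh": 6, "yawn": 8}
        # Escalated states correspond to figs. 4d, 4c, and 5e respectively.
        self.escalated = {"sneeze": "uncomfortable", "sigh": "depressed", "yawn": "tired"}
        self.events = deque()  # (timestamp, kind)

    def record(self, kind, now=None):
        """Record one recognized event and return the state to display."""
        now = time.time() if now is None else now
        self.events.append((now, kind))
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
        count = sum(1 for _, k in self.events if k == kind)
        if count >= self.thresholds.get(kind, float("inf")):
            return self.escalated.get(kind, kind)   # escalated expression
        return kind                                 # e.g. the sneezing/sighing/yawning expression itself
```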
Optionally, in step 303, the physiological signal acquired in step 301 may be directly converted into a person portrait. Specifically, a correspondence between person portraits and physiological signals is established in advance; after acquiring the physiological signal, the first device 10 looks up the person portrait through the stored correspondence and displays it, where the person portrait may include an expression and/or a color.
In addition to displaying the person portrait on the screen interface of the first device 10 based on the pressure value p or the physiological signal, the person portrait may also be displayed on the screen interface of the first device 10 based on the weather and/or the temperature in step 303. The manner of displaying the person portrait according to weather and/or temperature is described below, taking the first device 10 as the display device.
Specifically, the first device 10 may also acquire current weather information in real time. The weather information may be obtained through a weather forecast function module in the first device 10, or may be acquired from the cloud. The embodiment of the present application does not specifically limit the manner of acquiring the weather information.
After the first device 10 acquires the weather information, the mood of the user may be inferred from the weather information. For example, if the current weather is sunny, the mood of the user may be happy; if the current weather is cloudy, the mood of the user may be calm; if the current weather is rainy, the mood of the user may be low, and so on. It is understood that the above examples of the mapping relationship between weather and mood are only exemplary and do not limit the embodiments of the present application; in some embodiments, other mappings between weather and mood may also be used.
After the first device 10 determines the mood of the user according to the current weather information, the first device 10 may display a corresponding portrait (e.g., an emoticon) on the screen interface of the first device 10 according to the mood of the user.
For example, when the first device 10 detects that the current weather is sunny, a relaxing expression as shown in fig. 4a may be displayed on the screen interface of the first device 10 to represent the pleasant mood of the user at this time.
When the first device 10 detects that the current weather is cloudy, a calm expression as shown in fig. 4b may be displayed on the screen interface of the first device 10 to represent the user's calm mood at this time.
When the first device 10 detects that the current weather is rainy, a depressed expression as shown in fig. 4c may be displayed on the screen interface of the first device 10 to characterize the user's mood for depression at this time.
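A minimal sketch of the weather-to-expression mapping described above is shown below; the simple dictionary lookup and the fallback value are assumptions.

```python
def expression_for_weather(weather):
    """Map a weather condition to a displayed expression, per the examples above."""
    mapping = {
        "sunny": "relaxed",     # fig. 4a
        "cloudy": "calm",       # fig. 4b
        "rainy": "depressed",   # fig. 4c
    }
    return mapping.get(weather, "neutral")  # fallback for conditions not mentioned in the text
```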
Next, the manner of displaying a human image based on the temperature will be described with reference to fig. 6a to 6 b.
Optionally, the first device 10 may further obtain current temperature information, where the temperature information may be obtained from a weather forecast function module in the first device 10, or may be obtained from a temperature sensor in the first device 10. The embodiment of the present application does not specifically limit the manner of acquiring the temperature information.
After the first device 10 acquires the temperature information, the state of the user may be determined according to the temperature information. For example, if the current temperature is lower than 10 degrees Celsius, it may be determined that the weather is cold, and the state of the user may be a shivering state; if the current temperature is higher than 30 degrees Celsius, it may be determined that the weather is hot, and the state of the user may be a sweating state.
The first device 10 may then display a representation of the person at the on-screen interface of the first device 10 based on the state of the user, which may be used to characterize the state of the user.
As shown in fig. 6a, when the first device 10 detects that the current temperature is lower than the preset first temperature threshold (for example, lower than 10 degrees celsius), an interface 600 may be displayed on the screen interface of the first device 10, where the interface 600 includes a shivering expression 601, and the shivering expression is used to represent a state that the user is shivering due to cold at this time.
As shown in fig. 6b, when the first device 10 detects that the current temperature is higher than the preset second temperature threshold (for example, higher than 30 degrees celsius), an interface 610 may be displayed on the screen interface of the first device 10, wherein the interface 610 includes a sweating expression 611, and the sweating expression 611 is used for representing the state that the user sweats due to heat.
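The temperature branch of figs. 6a-6b reduces to two threshold comparisons, sketched below with the example values of 10 and 30 degrees Celsius from the text.

```python
def state_for_temperature(temperature_c, cold_threshold=10.0, hot_threshold=30.0):
    """Map the ambient temperature to a displayed state, per figs. 6a-6b."""
    if temperature_c < cold_threshold:
        return "shivering"   # interface 600, shivering expression 601
    if temperature_c > hot_threshold:
        return "sweating"    # interface 610, sweating expression 611
    return None              # no temperature-driven portrait change
```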
It will be appreciated that the weather information or temperature information may be used alone as a factor in the generation of the person representation or may be used in conjunction with the physiological signal factor obtained in step 301 to determine the person representation displayed on the screen.
Optional step 304: the first device 10 reminds the user based on the current mood or current state of the user.
Optionally, the first device 10 may also detect a current mood or a current state of the user, and may remind the user according to the current mood or the current state of the user.
Illustratively, when the user is performing intense exercise or is under great stress, the user's heart rate may become very high. If the first device 10 detects that the user's heart rate is very high, it can remind the user, so that the user is aware of his or her current state and can release pressure or reduce the amount of exercise, thereby avoiding the adverse effects of an excessively high heart rate. The reminder may be one or more of a voice prompt, vibration, or light. The embodiment of the present application does not specifically limit the manner of the reminder.
If the current pressure value p exceeds the first pressure threshold, the person portrait in step 303 may be directly displayed on the screen, so that the user can know the current status of the user in time.
It is to be understood that the above-mentioned scenario of performing a reminder according to the heart rate of the user is merely an exemplary illustration, and does not constitute a limitation to the embodiments of the present application, and in some embodiments, the reminder may also be performed by detecting other moods or states of the user.
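A rough sketch of the reminder trigger in step 304 is shown below; the heart-rate and pressure thresholds and the vibrate/speak callbacks are hypothetical placeholders, since the embodiment fixes neither the values nor the reminder modality.

```python
def maybe_remind(heart_rate_bpm, pressure_p, hr_threshold=150, p_threshold=60,
                 vibrate=None, speak=None):
    """Trigger a reminder when the heart rate or pressure value is excessive (step 304)."""
    if heart_rate_bpm >= hr_threshold or pressure_p >= p_threshold:
        if vibrate is not None:
            vibrate()                                   # e.g. a short vibration pattern
        if speak is not None:
            speak("Your heart rate is high. Consider slowing down or taking a break.")
        return True
    return False
```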
Optional step 305: the first device 10 updates the displayed person portrait in response to a detected user operation.
Optionally, the first device 10 may also detect user operations such as clicking, raising the hand, and turning the wrist. The clicking operation may be the user clicking on the display screen of the first device 10; the hand-raising operation may be the user raising the hand wearing the first device 10; and the wrist-turning operation may be a rotation of the wrist after raising the hand, in order to view the display content of the screen interface of the first device 10. In a specific implementation, the click operation may be detected by the touch screen of the display of the first device 10, while the hand-raising and wrist-turning operations may be detected by an acceleration sensor in the first device 10. For example, when the first device 10 detects acceleration in the vertical direction, it may be determined that the user is raising the hand; when the first device 10 detects rotational acceleration in the horizontal direction, it may be determined that the user is turning the wrist.
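A very rough sketch of this gesture detection is given below. The threshold values are placeholders, and the use of a rotation-rate reading for the wrist turn is an assumption; the text itself only refers to detecting vertical acceleration and rotational acceleration with the acceleration sensor.

```python
def detect_gesture(accel_xyz, rotation_rate=0.0, lift_threshold=3.0, rotate_threshold=2.0):
    """Classify a hand raise or wrist turn from motion-sensor readings (placeholder thresholds).

    accel_xyz: (x, y, z) linear acceleration in m/s^2 with gravity removed;
    rotation_rate: angular rate about the forearm axis in rad/s.
    """
    _, _, vertical_accel = accel_xyz
    if vertical_accel > lift_threshold:
        return "raise_hand"
    if abs(rotation_rate) > rotate_threshold:
        return "turn_wrist"
    return None
```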
In response to a detected user operation (e.g., turning the wrist), the first device 10 may update the currently displayed user portrait, where the updated portrait may correspond to a mood opposite to the mood of the current portrait. For example, if the expression of the portrait currently displayed on the screen interface of the first device 10 is frustrated, the user is stressed and in a bad mood. Therefore, when the portrait is updated, a portrait with a happy expression can be displayed on the screen interface of the first device 10, so that the user can relax after seeing the happy expression.
After the person portrait is updated, a reminder can also be given in other ways, such as vibration or an audio prompt.
The above-described manner of updating the person image will now be described with reference to fig. 7a and 7 b.
FIG. 7a is a schematic diagram illustrating the effect of updating the portrait by clicking the screen by the user. As shown in fig. 7a, the screen of the first device 10 is in the off-screen state, and at this time, the user clicks the display screen of the first device 10 to light up the screen of the first device 10, so that the interface 700 can be obtained. The interface 700 is a character image of the user currently displayed on the screen interface of the first device 10, and referring to the interface 700, the interface 700 includes a frustrated expression 701 for representing the frustrated mood of the user at the moment, which indicates that the stress of the user at the moment is huge. Then, after receiving the click operation of the user, the first device 10 may start a timer, and after the preset first time period, the interface 700 is updated to the interface 710, and with reference to the interface 710, the interface 710 includes a relaxing expression 711, so that the user may feel a relaxed mood after seeing the relaxing expression. The preset first duration may be preset in the first device 10, and for example, the preset first duration may be 1s. It is understood that the above-mentioned value of the preset first time period is only an exemplary value, and does not constitute a limitation to the embodiments of the present application, and in some embodiments, other values may be also used.
Fig. 7b is a schematic diagram illustrating the effect of updating the portrait when the user raises the hand. As shown in fig. 7b, the screen of the first device 10 is in the off-screen state; when the user raises the hand, the first device 10 detects the hand raise through the acceleration sensor, lights up the screen, and displays an interface 720 on the screen interface. Referring to the interface 720, the interface 720 includes a frustrated expression 721 that characterizes the user's frustrated mood at this time, indicating that the user is under great stress. Next, the user turns the wrist after raising the hand in order to view the display content of the screen interface of the first device 10. The first device 10 detects the wrist turn through the acceleration sensor, and the interface 720 is updated to the interface 730. Referring to the interface 730, the interface 730 contains a relaxed expression 731, so that the user may relax after seeing the relaxed expression.
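The update flow of figs. 7a-7b can be sketched as follows; "display" is a hypothetical callable that renders a portrait on the watch screen, and the 1-second delay follows the example value of the preset first duration.

```python
import threading

def show_then_relax(display, current_portrait, relaxed_portrait, delay_s=1.0):
    """Show the current portrait on wake-up, then swap to a relaxed one after a delay (figs. 7a-7b)."""
    display(current_portrait)                 # e.g. interface 700 / 720 with a frustrated expression
    timer = threading.Timer(delay_s, display, args=(relaxed_portrait,))
    timer.start()                             # e.g. interface 710 / 730 with a relaxed expression
    return timer
```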
Optionally, in step 302, the first device 10 may send the acquired physiological signal to the second device 20, process the physiological signal by the second device 20, and return the processing result to the first device 10.
Optionally, after a period of time, for example, after a day, the portrait corresponding to each time period of the day may be displayed to the user as a short video, so that the user may more conveniently know the state of the user in the day.
Fig. 8 is a schematic structural view of an embodiment of a person portrait display apparatus of the present application. As shown in fig. 8, the person portrait display apparatus 80, applied to a first device, may include: an acquisition module 81, a generation module 82, and a display module 83; wherein:
an obtaining module 81, configured to obtain a physiological signal and/or an environmental signal of a user;
a generating module 82 for generating a first portrait based on a physiological signal and/or an environmental signal of a user;
and the display module 83 is used for displaying the first human figure.
In one possible implementation, the physiological signal includes a heart rate or a sound signal of the user, and the environmental signal includes weather information or temperature information.
In one possible implementation, the person representation includes an expression corresponding to the physiological signal and/or the environmental signal.
In one possible implementation, the person image includes a color corresponding to the physiological signal and/or the environmental signal, and the color is a background color of the screen interface.
In one possible implementation manner, the generating module 82 is further configured to determine a mood or a state of the user based on the physiological signal and/or the environmental signal of the user; a first portrait is generated based on a user mood or a user status.
In one possible implementation manner, the apparatus 80 further includes:
and a reminding module 84 for sending a reminder to the user based on the mood or the status of the user.
In one possible implementation manner, the display module 83 is further configured to light up the screen and display the first person representation in response to the detected first operation of the user.
In one possible implementation manner, the apparatus 80 further includes:
and the updating module 85 is used for responding to the detected second operation of the user, updating the currently displayed first person portrait to obtain a second person portrait, and displaying the second person portrait, wherein the second person portrait is different from the first person portrait.
In one possible implementation manner, the display module 83 is further configured to display the first person portrait on a screen interface of the first device.
In one possible implementation manner, the display module 83 is further configured to send a portrait display instruction to the second device, where the portrait display instruction is used to instruct the second device to display the first portrait on a screen interface of the second device.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
It is to be understood that the electronic device 100 and the like described above include corresponding hardware structures and/or software modules for performing the respective functions in order to realize the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
In the embodiment of the present application, the electronic device 100 and the like may be divided into functional modules according to the method example, for example, each functional module may be divided for each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Through the description of the foregoing embodiments, it will be clear to those skilled in the art that, for convenience and simplicity of description, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a flash memory, a removable hard drive, a read-only memory, a random-access memory, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A portrait display method is applied to a smart watch, and is characterized by comprising the following steps:
acquiring a physiological signal and/or an environmental signal of a user;
generating a first portrait based on the physiological signal of the user and/or the environmental signal;
displaying the first portrait.
2. The method of claim 1, wherein the user's physiological signal comprises a user heart rate or sound signal and the environmental signal comprises weather information or temperature information.
3. The method of claim 1, wherein the character representation includes an expression corresponding to a physiological signal of the user and/or the environmental signal.
4. The method of any of claims 1-3, wherein the representation of the person includes a color corresponding to a physiological signal of the user and/or the environmental signal, the color being a background color of the screen interface.
5. The method according to any of claims 1-4, wherein the generating a first portrait based on the physiological signal of the user and/or the environmental signal comprises:
determining a user mood or a user state based on the physiological signal and/or the environmental signal of the user;
a first portrait is generated based on the user mood or the user status.
6. The method of claim 5, further comprising:
and sending a prompt to the user based on the user mood or the user state.
7. The method of any of claims 1-6, wherein the displaying the first character representation includes:
in response to the detected first operation of the user, a screen is lit and the first character representation is displayed.
8. The method according to any one of claims 1-7, further comprising:
and responding to the detected second operation of the user, updating the currently displayed first person portrait to obtain a second person portrait, and displaying the second person portrait, wherein the second person portrait is different from the first person portrait.
9. The method of any of claims 1-8, wherein the displaying the first character representation includes:
and displaying the first portrait on a screen interface of the smart watch.
10. The method of any of claims 1-9, wherein the displaying the first portrait includes:
and sending a portrait display instruction to a second device, wherein the portrait display instruction is used for instructing the second device to display the first portrait on a screen interface of the second device.
11. A smart watch, comprising: a memory for storing computer program code, the computer program code comprising instructions that, when read from the memory by the smart watch, cause the smart watch to perform the method of any of claims 1-10.
12. A computer-readable storage medium comprising computer instructions that, when executed on the smart watch, cause the smart watch to perform the method of any of claims 1-10.
13. A computer program product, characterized in that, when the computer program product is run on a computer, it causes the computer to perform the method according to any of claims 1-10.
CN202110717874.9A 2021-06-28 2021-06-28 Figure image display method, electronic device and storage medium Pending CN115599198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110717874.9A CN115599198A (en) 2021-06-28 2021-06-28 Figure image display method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110717874.9A CN115599198A (en) 2021-06-28 2021-06-28 Figure image display method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115599198A true CN115599198A (en) 2023-01-13

Family

ID=84841160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110717874.9A Pending CN115599198A (en) 2021-06-28 2021-06-28 Figure image display method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115599198A (en)

Similar Documents

Publication Publication Date Title
CN110458902B (en) 3D illumination estimation method and electronic equipment
WO2022193989A1 (en) Operation method and apparatus for electronic device and electronic device
CN111742539B (en) Voice control command generation method and terminal
CN110727380A (en) Message reminding method and electronic equipment
CN111202955A (en) Motion data processing method and electronic equipment
WO2021169515A1 (en) Method for data exchange between devices, and related device
CN113552937A (en) Display control method and wearable device
CN111835907A (en) Method, equipment and system for switching service across electronic equipment
CN113676339B (en) Multicast method, device, terminal equipment and computer readable storage medium
CN113572956A (en) Focusing method and related equipment
CN113467735A (en) Image adjusting method, electronic device and storage medium
CN114095602B (en) Index display method, electronic device and computer readable storage medium
CN113645622A (en) Device authentication method, electronic device, and storage medium
CN115657992B (en) Screen display method, device, equipment and storage medium
CN115665632A (en) Audio circuit, related device and control method
WO2022206825A1 (en) Method and system for adjusting volume, and electronic device
CN112241194A (en) Folding screen lighting method and device
CN113467747B (en) Volume adjusting method, electronic device and storage medium
CN115022807A (en) Express delivery information reminding method and electronic equipment
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN114661258A (en) Adaptive display method, electronic device, and storage medium
CN115599198A (en) Figure image display method, electronic device and storage medium
CN113391735A (en) Display form adjusting method and device, electronic equipment and storage medium
CN113867520A (en) Device control method, electronic device, and computer-readable storage medium
CN114120987A (en) Voice awakening method, electronic equipment and chip system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination