WO2019127523A1 - User emotion display method and system, and user emotion display device - Google Patents

User emotion display method and system, and user emotion display device

Info

Publication number
WO2019127523A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
emotion
display
respiratory
target
Prior art date
Application number
PCT/CN2017/120254
Other languages
English (en)
French (fr)
Inventor
龚梅军
梁杰
范欣薇
孟亚斌
刘洪涛
Original Assignee
深圳和而泰数据资源与云技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳和而泰数据资源与云技术有限公司 filed Critical 深圳和而泰数据资源与云技术有限公司
Priority to PCT/CN2017/120254 priority Critical patent/WO2019127523A1/zh
Priority to CN201780009005.2A priority patent/CN108702523B/zh
Publication of WO2019127523A1 publication Critical patent/WO2019127523A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4888 Data services, e.g. news ticker for displaying teletext characters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Definitions

  • The present application relates to the field of live video streaming, and in particular to a user emotion display method, a user emotion display device, and a user emotion display system to which the method is applied.
  • Network live streaming includes, for example, live broadcasts of sports events or live cable video.
  • Users can watch their favorite live programs anytime, anywhere over wireless data connections.
  • Live network video is a high-end form of video media. From the perspective of information dissemination, live streaming connects users to the broadcast site in real time, providing a true and direct viewing experience. Because of this authenticity, the unpredictability of live programs is very attractive to users, offering them imagination and surprises, and the strong interactivity brings fans and anchors closer together.
  • To this end, the present application provides a user emotion display method, system, and user emotion display device that acquire the user's body vibration information through a wearable device, extract ballistocardiogram (BCG) heart impact information from the body vibration information, extract heart impact feature information from the heart impact information, and determine the user's current true emotion level by querying with that feature information, thereby providing the live platform with a basis for real viewing-experience feedback and interaction, improving user stickiness and the user experience.
  • An embodiment of the present application provides a user emotion display method applied to a user emotion display system that includes a wearable device and a display terminal. The method includes the steps summarized below (see the sketch following this list):
  • The target emotion level is a first emotion level corresponding to the heart impact feature information.
  • In some embodiments, the method further includes:
  • correcting the target emotion level based on the first emotion level and the second emotion level.
  • In some embodiments, the method further includes:
  • correcting the target emotion level using the third emotion level.
  • Extracting the heart impact information from the body vibration information includes:
  • Causing the display terminal to display the target emotion level in real time includes:
  • scrolling the received text or animation corresponding to the target emotion level according to its time stamp.
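As a rough illustration of the flow summarized above, the toy Python sketch below synthesizes a body vibration trace, isolates the cardiac band as the heart impact signal, splits it into information frames, and maps a per-frame feature to an emotion level. The filter design, frame length, threshold, and the stand-in comparison library are all invented for the example; the patent does not disclose these specifics.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                             # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
raw = (0.30 * np.sin(2 * np.pi * 0.25 * t)             # respiration, ~0.25 Hz
       + 0.08 * np.sin(2 * np.pi * 1.20 * t))          # cardiac component, ~1.2 Hz

b, a = butter(2, [0.7, 10.0], btype="band", fs=fs)     # keep the cardiac band
bcg = filtfilt(b, a, raw)                              # "heart impact information"

frame_len = int(10 * fs)                               # 10 s information frames
frames = [bcg[i:i + frame_len] for i in range(0, len(bcg), frame_len)]

for frame in frames:
    peaks, _ = find_peaks(frame, distance=int(0.4 * fs))  # J-peak-like maxima
    bpm = 60.0 * len(peaks) / (len(frame) / fs)           # crude per-frame heart rate
    level = "excited" if bpm > 80 else "calm"             # stand-in comparison library
    print(f"~{bpm:.0f} bpm -> {level}")
```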
  • An embodiment of the present application further provides a user emotion display device, which includes:
  • a body vibration information acquiring unit configured to acquire the body vibration information collected by the wearable device;
  • a heart impact information extracting unit configured to extract heart impact information from the body vibration information and establish heart impact information frames from it;
  • a heart impact feature information extracting unit configured to perform frequency domain analysis on each heart impact information frame to extract heart impact feature information;
  • a target emotion level determining unit configured to query the heart impact feature information and emotion level comparison library according to the heart impact feature information, determine the target emotion level, and send it to the display terminal so that the display terminal displays the target emotion level in real time, the target emotion level being a first emotion level corresponding to the heart impact feature information.
  • In some embodiments, the apparatus further includes:
  • a respiratory information extracting unit configured to extract respiratory information from the body vibration information and establish respiratory information frames;
  • a second emotion level determining unit configured to extract respiratory feature information from each respiratory information frame, query the respiratory feature information and emotion level comparison library according to it, and determine a second emotion level corresponding to the respiratory feature information;
  • a first correcting unit configured to correct the target emotion level based on the first emotion level and the second emotion level.
  • In some embodiments, the apparatus further includes:
  • a body motion information extracting unit configured to extract body motion information from the body vibration information and establish body motion information frames;
  • a third emotion level determining unit configured to extract body motion feature information from each body motion information frame, query the body motion feature information and emotion level comparison library according to it, and determine a third emotion level corresponding to the body motion feature information;
  • a second correcting unit configured to correct the target emotion level using the third emotion level.
  • The heart impact information extracting unit extracts the heart impact information from the body vibration information as described above.
  • The target emotion level determining unit causes the display terminal to display the target emotion level in real time by:
  • scrolling the received text or animation corresponding to the target emotion level according to its time stamp.
  • An embodiment of the present application further provides a user emotion display device that includes a processor, a memory, a communication interface, and a bus. The processor, the memory, and the communication interface are connected via the bus and communicate with one another. The memory stores executable program code; by reading the executable program code stored in the memory, the processor runs a program corresponding to that code so as to perform the user emotion display method described above.
  • An embodiment of the present application further provides a computer readable storage medium storing a computer program. The computer program includes program instructions that, when executed by a processor, cause the processor to perform the user emotion display method described above.
  • An embodiment of the present application further provides a user emotion display system including a wearable device, a display terminal, and a cloud server, where the wearable device is connected to the display terminal and the display terminal is connected to the cloud server.
  • The wearable device is configured to collect the user's body vibration information.
  • The display terminal includes a first processor and a first memory. The first processor is connected to the first memory and to the cloud server. The first memory stores instructions executable by the first processor; when executed, they cause the first processor to acquire the body vibration information collected by the wearable device and forward it to the cloud server.
  • The cloud server includes a second processor and a second memory. The second processor is connected to the second memory and to the display terminal. The second memory stores instructions executable by the second processor; when executed, they cause the second processor to: receive the body vibration information forwarded by the display terminal; extract heart impact information from it and establish heart impact information frames; perform frequency domain analysis on each heart impact information frame to extract heart impact feature information; and query the heart impact feature information and emotion level comparison library according to that feature information, determine the target emotion level, and send it to the display terminal, the target emotion level being a first emotion level corresponding to the heart impact feature information.
  • The first processor is further configured to control the display of the display terminal to show the target emotion level in real time.
  • In some embodiments, the second processor is further configured to:
  • correct the target emotion level based on the first emotion level and the second emotion level.
  • In some embodiments, the second processor is further configured to:
  • correct the target emotion level using the third emotion level.
  • The second processor extracts the heart impact information from the body vibration information as described above.
  • The second processor is further configured to group users according to the requested live channel.
  • Controlling the display of the display terminal to show the target emotion level in real time includes: acquiring the target emotion levels of all users under the same live channel and recording the time stamp corresponding to each target emotion level; setting a video window and a scroll display window in the live broadcast interface shown by the display terminal; and, in the scroll display window, scrolling the received text or dynamic image corresponding to each target emotion level according to its time stamp.
  • An embodiment of the present application further provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium. The computer program includes program instructions that, when executed by the cloud server, cause the cloud server to perform the user emotion display method described above.
  • An embodiment of the present application further provides a non-transitory computer readable storage medium storing computer executable instructions for causing a cloud server to perform the user emotion display method described above.
  • Beneficial effects: the user emotion display method, system, and user emotion display device provided by the embodiments of the present application mount a body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bioelectrode, on the wearable device, so that the wearable device can collect the user's body vibration information.
  • Heart impact feature information is extracted from the heart impact information, and the current emotion level of the user wearing the device is determined by querying with that feature information, providing the live platform with a basis for real viewing-experience feedback and interaction, improving user stickiness and the user experience.
  • This makes the live broadcast interface of the live platform more entertaining, enjoyable, interactive, and immersive.
  • In addition, the user emotion display method, system, and device provided by the embodiments of the present application can collect, analyze, and organize the real body vibration information gathered by the body vibration sensor together with emotion big data reflecting users' true excitement.
  • The emotion text or dynamic images generated by live viewers can be turned into reference information used to optimize the live system, for example by classifying live audiences by program or helping the platform refine its service content.
  • FIG. 1 is a system framework diagram of a user emotion display system provided by an embodiment of the present application.
  • FIG. 2 is a hardware structural diagram of a wearable device of a user emotion display system according to an embodiment of the present application
  • FIG. 3 is a block diagram of a display terminal of a user emotion display system according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a live broadcast interface of a display terminal of a user emotion display system according to an embodiment of the present application
  • FIG. 5 is a block diagram of a cloud server of a user emotion display system according to an embodiment of the present application.
  • FIG. 6 is a main flowchart of a user emotion display method provided by an embodiment of the present application.
  • FIG. 7 is another flowchart of a method for displaying user emotions according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a user emotion display apparatus according to an embodiment of the present application.
  • FIG. 9 is a structural diagram of hardware of a user emotion display device according to an embodiment of the present application.
  • The user emotion display method and system provided by the embodiments of the present application involve a wearable device, a display terminal, and a cloud server.
  • The wearable device is connected to the display terminal wirelessly or by wire, and the display terminal and the cloud server communicate data over a wired or wireless network.
  • A body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bioelectrode, is mounted on the wearable device, and the wearable device acquires the user's body vibration information in the time domain.
  • The display terminal connected to the wearable device forwards the acquired body vibration information to the cloud server, and the cloud server extracts the heart impact information, a widely studied type of body vibration information, from it.
  • The cloud server then extracts heart impact feature information from the heart impact information and determines, by querying with that feature information, the current true emotion level of the user wearing the device.
  • When the display terminal requests live data,
  • the live content is played in the live interface of the display terminal,
  • and the text or dynamic images representing the emotion levels of all users of the same live channel are displayed synchronously, providing the live platform with a basis for real viewing-experience feedback and interaction, improving user stickiness and the user experience.
  • FIG. 1 is a user emotion display system provided by an embodiment of the present application.
  • the user emotion display system includes a number of wearable devices, a plurality of display terminals, and a cloud server 300.
  • the plurality of wearable devices include the wearable device 100-1, . . . , the wearable device 100-n.
  • the plurality of display terminals include display terminals 200-1, . . ., display terminals 200-n.
  • the wearable device 100-1 is connected to the corresponding display terminal 200-1.
  • the wearable device 100-n is connected to the corresponding display terminal 200-n.
  • the display terminals 200-1, ..., the display terminals 200-n are all connected to the cloud server 300, and the display terminals 200-1, ..., the display terminals 200-n can receive the live video data of the cloud server 300.
  • the cloud server 300 is also a server of the live broadcast platform, and the cloud server 300 can also provide live video stream data.
  • the wearable device 100-1 and the display terminal 200-1 will be described below as an example.
  • The wearable device 100-1 in the embodiments of the present application is provided with a body motion sensor 50 in order to acquire the user's body vibration information.
  • The body motion sensor 50 may be an electrode or sensor in direct contact with the human body, or a body motion sensor that does not contact the body directly.
  • Electrodes and sensors that directly contact the human body are currently in common use; measuring heart rate this way requires direct contact, which may constrain the wearer.
  • A body motion sensor that does not contact the body directly can pick up the weak vibration that the heartbeat imparts to the body, that is, the body vibration information. By analyzing this vibration signal, the heart rate can be measured without direct contact with the human body.
  • The body motion sensor 50 may also be a piezoresistive sensor: when the heart pumps blood outward, the body generates a reaction force opposite to the force driving the blood flow, and this reaction force can be measured by a sensitive force sensor on the body surface.
  • The body motion sensor 50 can also be a sensor made of polyvinylidene fluoride, a gravity sensor, a fabric electrode, a displacement sensor, a piezoelectric cable, a photoelectric sensor, or a bioelectric electrode.
  • The bioelectric electrode may be a silver chloride electrode (Ag/AgCl/Cl-), which can detect basic body vibration information that is then converted into the person's emotion, that is, the degree of excitement, through heart rate variability (HRV) analysis.
  • Heart rate variability refers to the slight differences or fluctuations between successive heartbeat intervals, that is, in the instantaneous heart rate.
  • Heart rate variability analysis converts these small beat-to-beat changes into waveforms for analysis, visualizing the autonomic nervous system's response to stress and allowing a person's health status or mental and physiological stability to be assessed in real time.
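HRV analysis starts from the beat-to-beat (RR) interval series. The Python snippet below computes SDNN, the time-domain HRV statistic named later in this description, from a list of RR intervals; RMSSD is a common companion metric included purely for illustration and is not claimed in the patent.

```python
import numpy as np

def hrv_time_domain(rr_intervals_ms):
    """Time-domain HRV metrics from RR (beat-to-beat) intervals in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # standard deviation of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term variability (illustrative)
    return {"mean_rr": rr.mean(), "sdnn": sdnn, "rmssd": rmssd}

print(hrv_time_domain([812, 790, 845, 801, 823, 780]))
```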
  • the wearable device 100-1 includes a wearable device processor 10 with a memory, a voice input component 20, an acousto-optic/vibration indicating component 30, a wireless communication module 40, and a body motion sensor 50.
  • the wearable device processor 10 is respectively connected to the voice input component 20, the acousto-optic/vibration indicating component 30, the wireless communication module 40, and the body motion sensor 50, and the connection may be a communication connection.
  • the wearable device 100-1 communicates with the wirelessly connected display terminal 200-1 through the wireless communication module 40.
  • the structure of the other wearable device 100-2 to the wearable device 100-n is the same as that of the wearable device 100-1, and therefore, details are not described herein.
  • The body motion sensor 50 may be an embedded acceleration sensor or gyro sensor that samples the weak vibration of the human body, that is, the body vibration information; for example, basic information such as the ballistocardiographic motion, respiratory motion, and body motion is sampled in the time domain.
  • the wearable device 100-1 transmits the sampling data to the connected display terminal 200-1.
  • the body motion sensor 50 transmits the collected data to the wearable device processor 10, and then sends the sampled data to the connected display terminal 200-1 through the wireless communication module 40.
  • the wearable device 100-2 transmits the sampled data to the connected display terminals 200-2, . . .
  • the wearable device 100-n transmits the sampled data to the connected display terminal 200-n.
  • After the display terminals 200-1, ..., 200-n receive the body vibration information, they send it to the cloud server 300, which processes it to obtain each corresponding user's emotion level, also called the degree of excitement.
  • Alternatively, the processing of the body vibration information may be performed in the wearable devices 100-1, ..., 100-n to obtain the corresponding users' emotion levels, which are transmitted to the display terminals 200-1, ..., 200-n and then forwarded to the cloud server 300.
  • The display terminals 200-1, ..., 200-n may also perform the processing themselves to obtain the emotion levels and feed them back to the cloud server 300.
  • While the cloud server 300 sends live video data to the display terminal 200-1, it also calculates the emotion levels of all users under the same live channel and transmits them to the requesting display terminal 200-1.
  • Likewise, the cloud server 300 sends the emotion levels of all those users to the requesting display terminals 200-2, ..., 200-n, so that every display terminal can show the emotion levels of all users.
  • For example, suppose the live anchor holds display terminal 200-1,
  • listener A holds display terminal 200-2,
  • and listener B holds display terminal 200-n.
  • Then the anchor, listener A, and listener B can each see the emotion levels of all three through their respective display terminals, providing the live platform with a basis for real viewing-experience feedback and interaction, improving user stickiness and the user experience.
  • the display terminal 200-1 creates a live broadcast interface 400, which includes a video window 410 and a scroll display window 420.
  • The display terminal 200-1 plays live video data in the video window 410 and, in the scroll display window 420, scrolls the excitement of all users about the live content in real time.
  • The user emotion display method provided by the embodiments of the present application analyzes the body vibration information detected by the hardware sensor, derives the emotion level and excitement, and then links them with the live platform so that the real emotional responses of viewers of the same live channel scroll across the platform.
  • The content becomes stickier and the data more genuine, making the live platform more attractive.
  • the user emotion display system will be specifically described below.
  • the wearable device 100-1 includes a wearable device processor 10 with a memory, a voice input component 20, an acousto-optic/vibration indicating component 30, a wireless communication module 40, and a body motion sensor 50.
  • the wearable device processor 10 is respectively connected to the voice input component 20, the acousto-optic/vibration indicating component 30, the wireless communication module 40, and the body motion sensor 50, and the connection may be a communication connection.
  • the wearable device 100-1 communicates with the wirelessly connected display terminal 200-1 through the wireless communication module 40.
  • the structure of the other wearable device 100-2 to the wearable device 100-n is the same as that of the wearable device 100-1, and therefore, details are not described herein.
  • The body motion sensor 50, such as an acceleration sensor or a gyro sensor, collects the user's body vibration information; specifically, it collects body vibration information such as ballistocardiogram information, respiratory information, and body motion information.
  • the voice input component 20 is responsible for collecting and playing audio information.
  • The wearable device processor 10 converts the body vibration information (ballistocardiogram information, respiratory information, body motion information, etc.) collected by the body motion sensor 50 and the audio collected by the voice input component 20 into standard transmission data packets and passes them to the wireless communication module 40.
  • The wireless communication module 40 wirelessly transmits the standard data packets (containing the body vibration information and the audio information) to the display terminal 200-1.
  • the acousto-optic/vibration indicating component 30 provides the user with visual or sensible prompt information by means of sound and light, vibration, and the like.
  • the voice input component 20 can include a microphone or the like; the acousto-optic/vibration indicator component 30 can include an acousto-optic indicator light, a vibration motor, and the like.
  • the wireless communication module 40 can be a Bluetooth communication module or the like.
  • The wearable device 100-1 acquires the user's body vibration information in the time domain, and the display terminal 200-1 forwards it to the cloud server 300.
  • The cloud server 300 extracts the heart impact information from the body vibration information and establishes heart impact information frames; it then performs frequency domain analysis on each heart impact information frame to extract the heart impact feature information, and queries the heart impact feature information and emotion level comparison library according to that feature information to determine the target emotion level.
  • The display terminal 200-1 scrolls the target emotion level in real time.
  • The target emotion level is a first emotion level corresponding to the heart impact feature information.
  • The heart impact feature information is extracted from the heart impact information (the ballistocardiogram, BCG).
  • The heart impact feature information may consist of frequency domain and time domain parameters of HRV: the RR interval (RRI), the standard deviation of all sinus-beat RR intervals (SDNN), high frequency power (HF), low frequency power (LF), the LF/HF ratio, and total power (TP).
  • For example, HF and TP can be compared against the heart impact feature information and emotion level comparison library to determine the target emotion level.
  • Specifically, the cloud server 300 separates the respiratory information from the body vibration information, performs filtering and noise reduction on the body vibration information after the separation, and subtracts the respiratory information to obtain the heart impact information. Combining the respiratory information lets more heart impact feature information be analyzed, making the emotion level calculation more accurate.
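A minimal sketch of this separation step, assuming respiration occupies the band below roughly 0.5 Hz and using illustrative Butterworth cutoffs; the patent does not specify filter types or frequencies.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def separate_heart_impact(raw, fs):
    """Estimate respiration with a low-pass filter, denoise the raw trace,
    then subtract the respiration estimate to leave the heart impact (BCG)
    component. All cutoff frequencies are illustrative assumptions."""
    raw = np.asarray(raw, dtype=float)
    b_r, a_r = butter(2, 0.5, btype="low", fs=fs)    # respiration below ~0.5 Hz
    respiration = filtfilt(b_r, a_r, raw)
    b_n, a_n = butter(4, 20.0, btype="low", fs=fs)   # simple noise reduction
    denoised = filtfilt(b_n, a_n, raw)
    return respiration, denoised - respiration       # (respiratory, heart impact)
```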
  • The cloud server 300 may also extract the respiratory information from the body vibration information and establish respiratory information frames; extract the respiratory feature information from each respiratory information frame, query the respiratory feature information and emotion level comparison library according to it, and determine a second emotion level corresponding to the respiratory feature information. The second emotion level is used to correct the target emotion level.
  • In this way more heart impact feature information is analyzed, so the user's emotion is reflected more accurately.
  • the display terminal 200-1 of the present embodiment includes a first processor 210, a first memory 220, a Bluetooth module 230, a wireless network module 240, and a display 250.
  • the first processor 210 is respectively connected to the first memory 220, the Bluetooth module 230, the wireless network module 240, and the display 250, and the connection may be a communication connection.
  • the first processor 210 is further connected to the wearable device 100-1 and the cloud server 300, and the first processor 210 communicates with the wirelessly connected wearable device 100-1 through the Bluetooth module 230.
  • the wireless network module 240 communicates with the wirelessly connected cloud server 300.
  • the structure of the other display terminals 200-2 to 200-n is the same as that of the display terminal 200-1, and therefore, details are not described herein.
  • the first memory 220 stores instructions executable by the first processor 210, the instructions being executable by the first processor 210.
  • The first processor 210 is configured to acquire the body vibration information collected by the wearable device, forward it to the cloud server, and control the display to show the target emotion level sent from the cloud server in real time.
  • Controlling the display to show the target emotion level in real time specifically includes: acquiring the target emotion levels of all users under the same live channel and recording the time stamp corresponding to each target emotion level; setting a video window and a scroll display window in the live broadcast interface shown by the terminal; and, in the scroll display window, controlling the display to scroll the received text or animation corresponding to each target emotion level according to its time stamp.
  • Items in the scroll display window 420 are shown in first-received, first-displayed order.
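One plausible realization of the first-received, first-displayed scroll window keyed by the recorded time stamps is a small priority queue. The class and method names below are hypothetical illustrations, not from the patent.

```python
import heapq

class ScrollWindow:
    """Scroll display window sketch: (timestamp, text) items from all users
    on a channel are popped oldest-first on each refresh."""
    def __init__(self):
        self._queue = []

    def push(self, timestamp, text):
        heapq.heappush(self._queue, (timestamp, text))

    def scroll(self, n=5):
        # Pop up to n of the oldest entries for one refresh of the window.
        return [heapq.heappop(self._queue)[1]
                for _ in range(min(n, len(self._queue)))]

w = ScrollWindow()
w.push(17.2, "viewer A: excited")
w.push(15.8, "viewer B: calm")
print(w.scroll())   # ['viewer B: calm', 'viewer A: excited']
```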
  • Each of the target emotion levels of all users under the same live channel is generated as follows: the cloud server 300 extracts the heart impact information from the body vibration information and establishes heart impact information frames; performs frequency domain analysis on each heart impact information frame to extract the heart impact feature information; and queries the heart impact feature information and emotion level comparison library according to that feature information to obtain each target emotion level.
  • the cloud server 300 of this embodiment connects a plurality of display terminals through a network.
  • the cloud server 300 includes a second processor 310 and a second memory 320.
  • the second processor 310 is respectively connected to the second memory 320 and the display terminal 200-1, . . . , the display terminal 200-n.
  • This connection can be a communication connection.
  • The second memory 320 stores instructions executable by the second processor 310; when executed, they cause the second processor to: receive the body vibration information forwarded by the display terminal;
  • extract the heart impact information from the body vibration information and establish heart impact information frames; perform frequency domain analysis on each heart impact information frame to extract the heart impact feature information; and query the heart impact feature information and emotion level comparison library according to that feature information, determine the target emotion level, and send it to the display terminal.
  • The target emotion level is a first emotion level corresponding to the heart impact feature information.
  • The second processor 310 is further configured to: extract the respiratory information from the body vibration information and establish respiratory information frames;
  • extract the respiratory feature information from each respiratory information frame, query the respiratory feature information and emotion level comparison library according to it, and determine a second emotion level corresponding to the respiratory feature information; and correct the target emotion level based on the first emotion level and the second emotion level.
  • Analyzing more heart impact feature information in this way reflects the user's emotion more accurately.
  • The second processor 310 extracts the heart impact information from the body vibration information specifically by: separating the respiratory information from the body vibration information; performing filtering and noise reduction on the body vibration information after the separation; and subtracting the respiratory information to obtain the heart impact information.
  • The second processor is further configured to: extract the body motion information from the body vibration information and establish body motion information frames; extract the body motion feature information from each body motion information frame, query the body motion feature information and emotion level comparison library according to it, and determine a third emotion level corresponding to the body motion feature information; and correct the target emotion level using the third emotion level.
  • The second processor 310 is further configured to group users according to the requested live channel. Specifically, users can register through the cloud server 300, and the cloud server 300 then groups the registered users by the live channel they request.
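Grouping by requested live channel can be as simple as a mapping keyed by channel ID; the (user_id, channel_id) request format below is an assumption made for illustration.

```python
from collections import defaultdict

def group_by_channel(requests):
    """Group registered users by the live channel they requested, so that
    per-channel emotion levels can be broadcast to each group."""
    groups = defaultdict(set)
    for user_id, channel_id in requests:
        groups[channel_id].add(user_id)
    return dict(groups)

print(group_by_channel([("u1", "ch7"), ("u2", "ch7"), ("u3", "ch9")]))
# {'ch7': {'u1', 'u2'}, 'ch9': {'u3'}}
```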
  • The second processor 310 provides live video data to the requesting display terminal 200-1 and sends the target emotion levels of all users in the same live channel group to the display terminal 200-1.
  • The display terminal 200-1 is configured to create the live broadcast interface 400, play live video data in the video window 410, and scroll the excitement of all users about the live content in real time in the scroll display window 420.
  • The user emotion display system of this embodiment links the emotion level, or excitement, with the live platform, scrolling in real time the true emotional responses of viewers of the same live channel on the platform; the content is stickier and the data more genuine.
  • FIG. 6 is a schematic flowchart of a method for displaying a user emotion according to an embodiment of the present disclosure.
  • the user emotion display method is applied to a user emotion display system, where the user emotion display system includes a wearable device, a display terminal, and a cloud server.
  • the method for displaying the user's emotions may be performed by the cloud server, the wearable device, or the display terminal, which is not limited in the embodiment of the present application.
  • the user emotion display method mainly includes the following steps:
  • Step 101: Acquire the body vibration information collected by the wearable device.
  • The body vibration information may be the user's body vibration information collected by the wearable device in the time domain. After collecting it, the wearable device may send it to the display terminal, which forwards it onward; the cloud server receives the forwarded body vibration information and thereby acquires it.
  • Step 102: Extract the heart impact information from the body vibration information and establish heart impact information frames.
  • Extracting the heart impact information from the body vibration information specifically includes: separating the respiratory information from the body vibration information; performing filtering and noise reduction on the body vibration information after the separation; and subtracting the respiratory information to obtain the heart impact information.
  • Step 103 Perform frequency domain analysis on each heart impact information frame in the heart impact information frame to extract heart impact feature information of the heart impact information.
  • The heart impact feature information may consist of frequency domain and time domain parameters of HRV: RRI, SDNN, HF, LF, LF/HF, and TP.
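A hedged sketch of how the frequency-domain features named in step 103 (LF, HF, LF/HF, TP) might be computed: resample the RR series onto an even time grid and integrate a Welch power spectrum over HRV bands. The band edges follow common HRV conventions and are not values given in the patent.

```python
import numpy as np
from scipy.signal import welch

def hrv_frequency_domain(rr_ms, resample_fs=4.0):
    """LF, HF, LF/HF and TP from RR intervals (ms), via Welch's PSD."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                       # beat times, seconds
    grid = np.arange(t[0], t[-1], 1.0 / resample_fs)
    rr_even = np.interp(grid, t, rr)                 # evenly resampled RR series
    f, psd = welch(rr_even - rr_even.mean(), fs=resample_fs,
                   nperseg=min(256, len(rr_even)))

    def band(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(psd[m], f[m])                # integrate power in band

    lf, hf = band(0.04, 0.15), band(0.15, 0.40)
    return {"LF": lf, "HF": hf, "LF/HF": lf / hf, "TP": band(0.0033, 0.40)}

# Demo on a synthetic RR series with a respiratory-band (~0.25 Hz) oscillation.
beats = np.arange(300)
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * beats * 0.8)
print(hrv_frequency_domain(rr))
```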
  • Step 104: Query the heart impact feature information and emotion level comparison library according to the heart impact feature information, determine the target emotion level, and send it to the display terminal so that the display terminal displays it in real time.
  • The target emotion level is a first emotion level corresponding to the heart impact feature information.
  • Once the target emotion level is sent to the display terminal, it can be displayed there in real time, so that whoever holds the display terminal can see the target user's emotion level.
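The patent does not disclose the contents of the heart impact feature information and emotion level comparison library, so the stand-in below only shows the shape of the query: an invented threshold table over the LF/HF ratio mapped to emotion levels.

```python
def lookup_emotion(lf_hf_ratio, table=None):
    """Illustrative stand-in for the comparison-library query; thresholds
    and labels are invented for this example."""
    table = table or [(0.5, "calm"), (1.5, "engaged"), (3.0, "excited")]
    for threshold, level in table:
        if lf_hf_ratio <= threshold:
            return level
    return "very excited"

print(lookup_emotion(2.1))   # 'excited'
```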
  • Causing the display terminal to display the target emotion level in real time includes: grouping users according to the requested live channel; acquiring the target emotion levels of all users under the same live channel and recording their time stamps; setting a video window and a scroll display window in the live interface; and, in the scroll display window, scrolling the received text or animation corresponding to each target emotion level according to its time stamp.
  • The scroll display window may show items in first-received, first-displayed order.
  • By performing the method steps shown in FIG. 6, the target emotion level is sent to the display terminal and displayed there in real time.
  • Other feature information may also be extracted from the body vibration information to correct the target emotion level and obtain a more accurate, more realistic result.
  • Specifically, respiratory information and body motion information are extracted from the body vibration information, as follows:
  • Step 202: Extract the respiratory information from the body vibration information and establish respiratory information frames;
  • Step 203: Extract the respiratory feature information from each respiratory information frame, query the respiratory feature information and emotion level comparison library according to it, and determine a second emotion level corresponding to the respiratory feature information;
  • Step 204: Correct the target emotion level based on the first emotion level and the second emotion level.
  • The method further includes:
  • Step 302: Extract the body motion information from the body vibration information and establish body motion information frames;
  • Step 303: Extract the body motion feature information from each body motion information frame, query the body motion feature information and emotion level comparison library according to it, and determine a third emotion level corresponding to the body motion feature information;
  • Step 304: Correct the target emotion level using the third emotion level.
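The patent does not specify the correction rule that combines the first (BCG-derived), second (respiration-derived), and third (body-motion-derived) emotion levels; a weighted average, as sketched below with invented weights, is one plausible choice.

```python
def correct_target_emotion(first, second=None, third=None,
                           weights=(0.6, 0.25, 0.15)):
    """Fuse up to three emotion levels (numeric scores) into a corrected
    target level; missing inputs are skipped and weights renormalized."""
    used = [(w, x) for w, x in zip(weights, (first, second, third))
            if x is not None]
    total = sum(w for w, _ in used)
    return sum(w * x for w, x in used) / total

print(correct_target_emotion(0.8, second=0.6, third=0.7))  # 0.735
```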
  • the user emotion display method may also be performed by a wearable device or a display terminal.
  • When the user emotion display method is performed by the wearable device, step 101, acquiring the body vibration information collected by the wearable device, becomes simply collecting the body vibration information.
  • When the method is performed by the display terminal, step 104 becomes: query the heart impact feature information and emotion level comparison library according to the heart impact feature information, determine the target emotion level, and display the target emotion level in real time, the target emotion level being the first emotion level corresponding to the heart impact feature information.
  • The user emotion display method and system of the present application analyze the body vibration information detected by the hardware sensor, derive the emotion and excitement of all users under the same live channel, and link the emotion level or excitement with the live platform, so that the true emotional responses of viewers of the same live channel scroll across the platform in real time with genuine data.
  • The emotional excitement level is displayed in real time in part of the screen, and the data is genuine and interactive.
  • The live platform can analyze the collected parameters reflecting users' real emotions to guide the production of live content, make it more popular, classify programs, and refine the platform's live content from the perspective of users' real experience.
  • FIG. 8 is a schematic diagram of a user emotion display device according to an embodiment of the present application.
  • the user emotion display device can be configured in a cloud server.
  • the user emotion display device 80 includes:
  • The body vibration information acquiring unit 801 is configured to acquire the body vibration information collected by the wearable device.
  • The body vibration information may be the user's body vibration information collected by the wearable device in the time domain. After collecting it, the wearable device may send it to the display terminal, which forwards it onward; the acquiring unit 801 receives the forwarded body vibration information and thereby acquires it.
  • The heart impact information extracting unit 802 is configured to extract the heart impact information from the body vibration information and establish heart impact information frames.
  • Specifically, the heart impact information extracting unit 802 separates the respiratory information from the body vibration information, performs filtering and noise reduction on the body vibration information after the separation, and subtracts the respiratory information to obtain the heart impact information.
  • The heart impact feature information extracting unit 803 is configured to perform frequency domain analysis on each heart impact information frame to extract the heart impact feature information.
  • The heart impact feature information may consist of frequency domain and time domain parameters of HRV: RRI, SDNN, HF, LF, LF/HF, and TP.
  • The target emotion level determining unit 804 is configured to query the heart impact feature information and emotion level comparison library according to the heart impact feature information, determine the target emotion level, and send it to the display terminal so that the display terminal displays the target emotion level in real time, the target emotion level being a first emotion level corresponding to the heart impact feature information.
  • After determining the target emotion level, the target emotion level determining unit 804 sends it to the display terminal, where it can be displayed in real time so that the holder of the display terminal can see the target user's emotion level.
  • The target emotion level determining unit 804 causes the display terminal to display the target emotion level in real time specifically by: grouping users according to the requested live channel; acquiring the target emotion levels of all users under the same live channel and recording their time stamps; setting a video window and a scroll display window in the live interface; and, in the scroll display window, scrolling the received text or animation corresponding to each target emotion level according to its time stamp.
  • The scroll display window may show items in first-received, first-displayed order.
  • the user emotion display device 80 further includes:
  • The respiratory information extracting unit 805 is configured to extract the respiratory information from the body vibration information and establish respiratory information frames.
  • The second emotion level determining unit 806 is configured to extract the respiratory feature information from each respiratory information frame, query the respiratory feature information and emotion level comparison library according to it, and determine a second emotion level corresponding to the respiratory feature information.
  • The first correcting unit 807 is configured to correct the target emotion level based on the first emotion level and the second emotion level.
  • The body motion information extracting unit 808 is configured to extract the body motion information from the body vibration information and establish body motion information frames.
  • The third emotion level determining unit 809 is configured to extract the body motion feature information from each body motion information frame, query the body motion feature information and emotion level comparison library according to it, and determine a third emotion level corresponding to the body motion feature information.
  • The second correcting unit 810 is configured to correct the target emotion level using the third emotion level.
  • The user emotion display device 80 can execute the user emotion display method provided in Embodiment 2 of the present application and has the function modules and beneficial effects corresponding to that method.
  • For technical details not described in detail in this embodiment, refer to the user emotion display method provided in Embodiment 2 of the present application.
  • The user emotion display device 90 includes a processor 901, a memory 902, a communication interface (not shown), and a bus.
  • The processor 901, the memory 902, and the communication interface are connected via the bus and communicate with one another. The memory stores executable program code; by reading the executable program code stored in the memory, the processor runs a program corresponding to that code so as to perform the user emotion display method described above.
  • There may be one or more processors 901 and memories 902; one processor 901 is taken as an example in FIG. 9.
  • The processor 901, the memory 902, and the communication interface may be connected by a bus or in other ways; a bus connection is taken as an example in FIG. 9.
  • the memory 902 is a non-volatile computer readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable instructions.
  • The processor 901 executes the functional applications and data processing of the user emotion display device 90 by running the non-volatile software programs and instructions stored in the memory 902, thereby implementing the user emotion display method of the method embodiments.
  • The memory 902 may include a program storage area and a data storage area; the program storage area may store an operating system and an application required for at least one function, while the data storage area may store data created by the user emotion display device 90, and the like.
  • memory 902 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 902 can optionally include memory remotely located relative to processor 901, which can be connected to user emotion display device 90 over a network.
  • Embodiments of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The one or more instructions are stored in the memory 902 and, when executed by the one or more processors 901, perform the user emotion display method of any of the method embodiments, for example, method steps 101 to 104 of FIG. 6 described above.
  • The user emotion display device 90 can execute the user emotion display method provided by the embodiments of the present application, and has the functional modules and beneficial effects corresponding to that method.
  • For technical details not described exhaustively in this embodiment of the user emotion display device, refer to the user emotion display method provided by the embodiments of the present application.
  • An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by the user emotion display device 90, cause the user emotion display device 90 to perform the user emotion display method described above, for example, method steps 101 to 104 of FIG. 6.
  • An embodiment of the present application provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, perform the user emotion display method described above, for example, method steps 101 to 104 of FIG. 6.
  • In summary, in the user emotion display method, system, and user emotion display device provided by the embodiments of the present application, a body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bio-electrode, is installed on the wearable device, keeping the wearable device simple in design and convenient to use. After the wearable device acquires the user's biological vibration information in the time domain, the display terminal connected to the wearable device forwards the acquired biological vibration information to the cloud server; the cloud server extracts ballistocardiogram (BCG) information from the biological vibration information, extracts BCG feature information from the BCG information, determines by query, based on the BCG feature information, the current true emotion degree of the user wearing the wearable device, and displays, synchronously with the live content, the text or animated images representing that emotion degree, providing the live broadcast platform with genuine viewing-experience feedback and a basis for interaction, enhancing user stickiness and improving the user experience. By linking the emotion or excitement degree with the live broadcast platform, the platform's live interface becomes more entertaining, watchable, and interactive, with a stronger atmosphere. Moreover, based on the real biological vibration information collected by the body vibration sensor and the emotion-degree big data reflecting users' real excitement, users too introverted to express their real thoughts can have their excitement displayed directly; and by collecting, analyzing, and organizing the emotion-degree text or animated images produced by a number of live users, reference information can be generated to optimize the live content, for example, to classify programs by live-user demographics or to help the live broadcast platform refine its service content.
  • The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Pathology (AREA)
  • Educational Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Child & Adolescent Psychology (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • Computer Security & Cryptography (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A user emotion display method, apparatus, and system, relating to the technical field of live video streaming. The user emotion display system includes a wearable device (100-n), a display terminal (200-n), and a cloud server (300). The method includes: acquiring the biological vibration information collected by the wearable device (step 101); extracting ballistocardiogram (BCG) information from the biological vibration information, and building BCG information frames from the BCG information (step 102); performing frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information (step 103); and querying a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree, and sending the target emotion degree to the display terminal so that the display terminal displays the target emotion degree in real time, the target emotion degree being a first emotion degree corresponding to the BCG feature information (step 104). Through the first emotion degree shown on the display terminal (200-n), the emotion of the target subject can be followed in real time.

Description

User emotion display method and system, and user emotion display device
Technical Field
The present application relates to the technical field of live video streaming, and in particular to a user emotion display method, a user emotion display device, and a user emotion display system applying the user emotion display method.
Background
With the development of network transmission technology, short-range wireless communication technology, and mobile-terminal data processing technology, live network broadcasting, such as live sports events or cable video streams, has become an increasingly popular way for users to watch programs. With wireless network data transmission, users can watch their favorite live programs anytime, anywhere.
Live network video is a high-end form of video media. From the perspective of information dissemination, a live broadcast connects users to the scene in real time and offers an authentic, direct viewing experience. Because of this authenticity, the unpredictability of live programs is attractive to users, leaving room for imagination and surprise, and the strong interactivity shortens the distance between fans and hosts.
In the existing live network broadcasting field, however, development still focuses on live stream data or on optimizing network transmission, and no interaction with a correspondingly better experience is provided for the content of a live channel. Moreover, for the many viewers who are introverted and not good at expressing themselves, it is difficult to take part in interactive sessions even when such sessions are provided. In addition, the interaction information produced in such sessions is too casual, the sessions hold little attraction for live users, and the interaction process cannot become minable information that guides the platform's live content.
The live network broadcasting technology of the prior art therefore still needs improvement.
Summary
In view of the technical problems to be solved above, the present application provides a user emotion display method, system, and user emotion display device, in which a wearable device acquires the user's biological vibration information, ballistocardiogram (BCG) information is extracted from the biological vibration information, BCG feature information is then extracted from the BCG information, and the user's current true emotion degree is determined by a query based on the BCG feature information, providing platforms such as live broadcast platforms with genuine viewing-experience feedback and a basis for interaction, and enhancing user stickiness and user experience.
In a first aspect, an embodiment of the present application provides a user emotion display method applied to a user emotion display system, the user emotion display system comprising a wearable device and a display terminal, the method comprising:
acquiring the biological vibration information collected by the wearable device;
extracting BCG information from the biological vibration information, and building BCG information frames from the BCG information;
performing frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information;
querying a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree, and sending the target emotion degree to the display terminal so that the display terminal displays the target emotion degree in real time, the target emotion degree being a first emotion degree corresponding to the BCG feature information.
In some embodiments, the method further comprises:
extracting respiration information from the biological vibration information, and building respiration information frames from the respiration information;
extracting, for each of the respiration information frames, respiration feature information of the respiration information, and querying a respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
correcting the target emotion degree based on the first emotion degree and the second emotion degree.
In some embodiments, the method further comprises:
extracting body motion information from the biological vibration information, and building body motion information frames from the body motion information;
extracting, for each of the body motion information frames, body motion feature information of the body motion information, and querying a body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
correcting the target emotion degree by means of the third emotion degree.
In some embodiments, extracting the BCG information from the biological vibration information comprises:
separating respiration information out of the biological vibration information;
filtering and denoising the biological vibration information from which the respiration information has been separated, and then subtracting the respiration information to obtain the BCG information.
In some embodiments, causing the display terminal to display the target emotion degree in real time comprises:
grouping users according to the requested live channel;
acquiring the target emotion degrees of all users on the same live channel, and recording the timestamps corresponding to the target emotion degrees;
providing a video window and a scrolling display window in the live interface shown by the display terminal;
in the scrolling display window, scrolling, according to the timestamps, the text or animated images corresponding to the received target emotion degrees.
In a second aspect, an embodiment of the present application further provides a user emotion display apparatus, the apparatus comprising:
a biological vibration information acquiring unit, configured to acquire the biological vibration information collected by a wearable device;
a BCG information extracting unit, configured to extract BCG information from the biological vibration information, and to build BCG information frames from the BCG information;
a BCG feature information extracting unit, configured to perform frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information;
a target emotion degree determining unit, configured to query a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree, and to send the target emotion degree to a display terminal so that the display terminal displays the target emotion degree in real time, the target emotion degree being a first emotion degree corresponding to the BCG feature information.
In some embodiments, the apparatus further comprises:
a respiration information extracting unit, configured to extract respiration information from the biological vibration information, and to build respiration information frames from the respiration information;
a second emotion degree determining unit, configured to extract, for each of the respiration information frames, respiration feature information of the respiration information, and to query a respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
a first correcting unit, configured to correct the target emotion degree based on the first emotion degree and the second emotion degree.
In some embodiments, the apparatus further comprises:
a body motion information extracting unit, configured to extract body motion information from the biological vibration information, and to build body motion information frames from the body motion information;
a third emotion degree determining unit, configured to extract, for each of the body motion information frames, body motion feature information of the body motion information, and to query a body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
a second correcting unit, configured to correct the target emotion degree by means of the third emotion degree.
In some embodiments, the BCG information extracting unit extracting the BCG information from the biological vibration information comprises:
separating respiration information out of the biological vibration information;
filtering and denoising the biological vibration information from which the respiration information has been separated, and then subtracting the respiration information to obtain the BCG information.
In some embodiments, the target emotion degree determining unit causing the display terminal to display the target emotion degree in real time comprises:
grouping users according to the requested live channel;
acquiring the target emotion degrees of all users on the same live channel, and recording the timestamps corresponding to the target emotion degrees;
providing a video window and a scrolling display window in the live interface shown by the display terminal;
in the scrolling display window, scrolling, according to the timestamps, the text or animated images corresponding to the received target emotion degrees.
In a third aspect, an embodiment of the present application further provides a user emotion display device, comprising a processor, a memory, a communication interface, and a bus; the processor, the memory, and the communication interface are connected via the bus and communicate with one another; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code so as to perform the user emotion display method described above.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the user emotion display method described above.
In a fifth aspect, an embodiment of the present application further provides a user emotion display system comprising a wearable device, a display terminal, and a cloud server, the wearable device being connected to the display terminal and the display terminal being connected to the cloud server;
the wearable device is configured to collect the user's biological vibration information;
the display terminal comprises a first processor and a first memory, the first processor being connected to the first memory and to the cloud server respectively, the first memory storing instructions executable by the first processor which, when executed by the first processor, cause the first processor to: acquire the user's biological vibration information collected by the wearable device, and forward the biological vibration information to the cloud server;
the cloud server comprises a second processor and a second memory, the second processor being connected to the second memory and to the display terminal respectively, the second memory storing instructions executable by the second processor which, when executed by the second processor, cause the second processor to: receive the biological vibration information forwarded by the display terminal; extract BCG information from the biological vibration information, and build BCG information frames from the BCG information; perform frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information; and query a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree and send the target emotion degree to the display terminal, the target emotion degree being a first emotion degree corresponding to the BCG feature information;
the first processor is further configured to control the display of the display terminal to show the target emotion degree in real time.
In some embodiments, the second processor is further configured to:
extract respiration information from the biological vibration information, and build respiration information frames from the respiration information;
extract, for each of the respiration information frames, respiration feature information of the respiration information, and query a respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
correct the target emotion degree based on the first emotion degree and the second emotion degree.
In some embodiments, the second processor is further configured to:
extract body motion information from the biological vibration information, and build body motion information frames from the body motion information;
extract, for each of the body motion information frames, body motion feature information of the body motion information, and query a body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
correct the target emotion degree by means of the third emotion degree.
In some embodiments, the second processor performing the extraction of the BCG information from the biological vibration information comprises:
separating respiration information out of the biological vibration information;
filtering and denoising the biological vibration information from which the respiration information has been separated, and then subtracting the respiration information to obtain the BCG information.
In some embodiments, the second processor is further configured to group users according to the requested live channel;
and the first processor controlling the display of the display terminal to show the target emotion degree in real time comprises: acquiring the target emotion degrees of all users on the same live channel, and recording the timestamps corresponding to the target emotion degrees; providing a video window and a scrolling display window in the live interface shown by the display terminal; and, in the scrolling display window, controlling the display of the display terminal to scroll, according to the timestamps, the text or animated images corresponding to the received target emotion degrees.
In a sixth aspect, an embodiment of the present application further provides a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a cloud server, cause the cloud server to perform the user emotion display method described above.
In a seventh aspect, an embodiment of the present application further provides a non-volatile computer-readable storage medium storing computer-executable instructions for causing a cloud server to perform the user emotion display method described above.
The beneficial effects of the present application are as follows. In the user emotion display method, system, and user emotion display device provided by the embodiments of the present application, a body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bio-electrode, is installed on the wearable device. After the wearable device acquires the user's biological vibration information, the display terminal connected to the wearable device forwards the acquired biological vibration information to the cloud server; the cloud server extracts BCG information from the biological vibration information, extracts BCG feature information from the BCG information, and determines by query, based on the BCG feature information, the current true emotion degree of the user wearing the wearable device, providing the live broadcast platform with genuine viewing-experience feedback and a basis for interaction, enhancing user stickiness and user experience, and making the platform's live interface more entertaining, watchable, and interactive, with a stronger atmosphere. Moreover, the user emotion display method, system, and device provided by the embodiments of the present application can, based on the real biological vibration information collected by the body vibration sensor and the emotion-degree big data reflecting users' real excitement, collect, analyze, and organize the emotion-degree text or animated images produced by a number of live users to generate reference information, and optimize the live broadcast system based on that reference information, for example by classifying programs by live-user demographics or helping the live broadcast platform refine its service content.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not limit the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a system framework diagram of the user emotion display system provided by an embodiment of the present application;
FIG. 2 is a hardware structure diagram of the wearable device of the user emotion display system provided by an embodiment of the present application;
FIG. 3 is a module structure diagram of the display terminal of the user emotion display system provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the live interface of the display terminal of the user emotion display system provided by an embodiment of the present application;
FIG. 5 is a module structure diagram of the cloud server of the user emotion display system provided by an embodiment of the present application;
FIG. 6 is the main flowchart of the user emotion display method provided by an embodiment of the present application;
FIG. 7 is another flowchart of the user emotion display method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the user emotion display apparatus provided by an embodiment of the present application;
FIG. 9 is a hardware structure diagram of the user emotion display device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present application and do not limit it.
The user emotion display method and system provided by the embodiments of the present application involve a wearable device, a display terminal, and a cloud server; the wearable device is connected to the display terminal wirelessly or by wire, and the display terminal exchanges data with the cloud server over a wired or wireless network. In the embodiments of the present application, a body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bio-electrode, is installed on the wearable device. After the wearable device acquires the user's biological vibration information (for example, acquired in the time domain), the display terminal connected to the wearable device forwards the acquired biological vibration information to the cloud server, and the cloud server extracts BCG information, a widely studied kind of biological vibration information, from it. BCG feature information is then extracted from the BCG information, and the current true emotion degree of the user wearing the wearable device is determined by a query based on the BCG feature information. Meanwhile, when a display terminal requests live data, the text or animated images representing the emotion degrees of all users watching the same live channel are scrolled synchronously in the live interface of the display terminal while the live content plays, providing the live broadcast platform with genuine viewing-experience feedback and a basis for interaction, and enhancing user stickiness and user experience.
Please refer to FIG. 1, which shows the user emotion display system provided by an embodiment of the present application. The user emotion display system includes a number of wearable devices, a number of display terminals, and a cloud server 300. The wearable devices include wearable device 100-1, ..., wearable device 100-n; the display terminals include display terminal 200-1, ..., display terminal 200-n. The wearable device 100-1 is connected to the corresponding display terminal 200-1, and similarly the wearable device 100-n is connected to the corresponding display terminal 200-n. The display terminals 200-1, ..., 200-n are all connected to the cloud server 300, and all of them can receive live video data from the cloud server 300. The cloud server 300 is also the server of the live broadcast platform and can additionally provide live video stream data.
The following description takes the wearable device 100-1 and the display terminal 200-1 as an example.
Please refer to FIG. 2. To acquire the user's biological vibration information, the wearable device 100-1 of this embodiment is provided with a body vibration sensor 50. The body vibration sensor 50 may be an electrode or sensor placed close to the heart in direct contact with the body, or a body vibration sensor that does not directly contact the body. Electrodes or sensors in direct contact with the body are currently in common use; measuring the heart rate with them requires direct contact, which imposes some constraint on the body. A non-contact body vibration sensor, such as an acceleration sensor and/or a gyroscope sensor, can instead pick up the weak vibration signal that the beating heart produces in the body, that is, the biological vibration information, without touching the body; by analyzing the biological vibration information, the heart rate can then be measured imperceptibly without direct contact. Alternatively, the body vibration sensor 50 may be a piezoresistive sensor: when the heart pumps blood outward, the body produces a reaction force opposite to the force that drives the blood flow, and this reaction force can be measured on the body surface by a sensitive force sensor.
In some embodiments, the body vibration sensor 50 may also be a sensor made of polyvinylidene fluoride, a gravity sensor, a fabric electrode, a displacement sensor, a piezoelectric cable, a photoelectric sensor, or a bio-electrode. The bio-electrode may be a silver chloride electrode (Ag/AgCl/Cl-), which can detect basic biological vibration information that is then converted, through heart rate variability (HRV) analysis, into the person's emotion degree, that is, the degree of excitement.
Heart rate variability refers to the small differences or fluctuations between successive heartbeat intervals, that is, in the instantaneous heart rate. HRV analysis converts the small variations of the heartbeat into waveforms for analysis, visualizing the emergency response of the autonomic nervous system to stress, so that a person's health or mental and physiological stability can be confirmed in real time.
Specifically, the wearable device 100-1 includes a wearable device processor 10 with memory, a voice input component 20, a sound/light/vibration indication component 30, a wireless communication module 40, and a body vibration sensor 50. The wearable device processor 10 is connected, for example communicatively connected, to the voice input component 20, the sound/light/vibration indication component 30, the wireless communication module 40, and the body vibration sensor 50. The wearable device 100-1 communicates with the wirelessly connected display terminal 200-1 through the wireless communication module 40. The other wearable devices 100-2 to 100-n have the same structure as the wearable device 100-1 and are therefore not described again here.
The body vibration sensor 50 may be an embedded acceleration sensor or gyroscope sensor that samples the body's weak vibration, that is, the biological vibration information, for example sampling basic information such as the ballistocardiogram, respiratory movement, and body movement in the time domain. The wearable device 100-1 sends the sampled data to the connected display terminal 200-1: specifically, the body vibration sensor 50 sends the collected data to the wearable device processor 10, which then sends the sampled data to the connected display terminal 200-1 through the wireless communication module 40. Similarly, the wearable device 100-2 sends its sampled data to the connected display terminal 200-2, ..., and the wearable device 100-n sends its sampled data to the connected display terminal 200-n.
After receiving the biological vibration information, the display terminals 200-1, ..., 200-n each send it to the cloud server 300, so that the cloud server 300 can process it to produce the corresponding user's emotion degree, also called the excitement degree.
It can be understood that, in some embodiments, the processing of the biological vibration information may also be performed in the wearable devices 100-1, ..., 100-n to obtain the corresponding user's emotion degree, which is then sent to the display terminals 200-1, ..., 200-n and forwarded by them to the cloud server 300. Alternatively, the processing may be performed in the display terminals 200-1, ..., 200-n to obtain the emotion degree, which is then fed back to the cloud server 300.
When the display terminal 200-1 requests live data, the cloud server 300 sends live video data to the display terminal 200-1; at the same time, the cloud server 300 computes the emotion degrees of all users on the same live channel and sends them to the requesting display terminal 200-1. Similarly, the cloud server 300 sends the emotion degrees of all users to the requesting display terminal 200-2, ..., and to the requesting display terminal 200-n, so that the emotion degrees of all users can be seen on every display terminal. For example, during a live broadcast where the host holds display terminal 200-1, listener A holds display terminal 200-2, and listener B holds display terminal 200-n, the host, listener A, and listener B can each learn the emotion degrees of all three through the display terminal they hold, providing the live broadcast platform with genuine viewing-experience feedback and a basis for interaction, and enhancing user stickiness and user experience.
Referring also to FIG. 4, the display terminal 200-1 creates a live interface 400 that includes a video window 410 and a scrolling display window 420. The display terminal 200-1 plays the live video data in the video window 410 and scrolls, in the scrolling display window 420, all users' excitement about the live content.
The user emotion display method provided by the embodiments of the present application derives the emotion and excitement degree by analyzing the biological vibration information detected by hardware sensors, and then links the emotion or excitement degree with the live broadcast platform, so that the true emotional responses of the viewers of the same live channel are scrolled on the platform; the content is stickier, the data more authentic, and the live broadcast platform more attractive.
Embodiment 1
The user emotion display system is described in detail below.
As shown in FIG. 2, the wearable device 100-1 includes a wearable device processor 10 with memory, a voice input component 20, a sound/light/vibration indication component 30, a wireless communication module 40, and a body vibration sensor 50. The wearable device processor 10 is connected, for example communicatively connected, to the voice input component 20, the sound/light/vibration indication component 30, the wireless communication module 40, and the body vibration sensor 50. The wearable device 100-1 communicates with the wirelessly connected display terminal 200-1 through the wireless communication module 40. The other wearable devices 100-2 to 100-n have the same structure as the wearable device 100-1 and are therefore not described again here.
First, the body vibration sensor 50, for example an acceleration sensor or a gyroscope sensor, collects the user's biological vibration information; specifically, it collects biological vibration information such as the body's ballistocardiogram information, respiration information, and body motion information. The voice input component 20 collects and plays audio information. The wearable device processor 10 then converts the biological vibration information (ballistocardiogram information, respiration information, body motion information, and the like) collected by the body vibration sensor 50 and the audio information collected by the voice input component 20 into standard transport data packets, which it passes to the wireless communication module 40. Finally, the wireless communication module 40 sends the transport data packets (containing the biological vibration information and audio information) to the display terminal 200-1 by wireless transmission. The sound/light/vibration indication component 30 provides the user with visible or perceptible prompts through sound, light, and vibration. The voice input component 20 may include a microphone and the like; the sound/light/vibration indication component 30 may include sound/light indicator lamps and a vibration motor; and the wireless communication module 40 may be a Bluetooth communication module or the like. A sketch of how such packetization might look is given below.
To scroll all users' true viewing reactions while the broadcast plays, the wearable device 100-1 acquires the user's biological vibration information in the time domain, and the display terminal 200-1 forwards the biological vibration information to the cloud server. The cloud server 300 extracts BCG information from the biological vibration information and builds BCG information frames from it; it performs frequency-domain analysis on each BCG information frame to extract BCG feature information of the BCG information, queries the BCG-feature-to-emotion-degree comparison library according to the BCG feature information, and determines the target emotion degree. The display terminal 200-1 scrolls the target emotion degree in real time, the target emotion degree being the first emotion degree corresponding to the BCG feature information.
The BCG feature information is described in detail as follows: it is feature information extracted from the ballistocardiogram (BCG). For example, the BCG feature information may be the frequency-domain and time-domain parameters of HRV: the RR interval (RRI), the standard deviation of all sinus RR intervals (SDNN), high frequency (HF), low frequency (LF), LF/HF, and total power (TP). The parameter values per unit period are computed, the trend of each parameter is recorded and compared with the HRV parameters RRI, SDNN, HF, LF, LF/HF, and TP of basic emotional states recorded in the BCG information extraction module 331 and known from existing research, and the target emotion degree can then be determined in combination with the BCG-feature-to-emotion-degree comparison library. A sketch of such a feature computation follows.
Meanwhile, to extract more user parameters from the biological vibration information, the cloud server 300 separates respiration information out of the biological vibration information, filters and denoises the biological vibration information from which the respiration information has been separated, and then subtracts the respiration information to obtain the BCG information. Combining the respiration information allows more BCG feature information about the user to be analyzed, making the emotion degree estimate more accurate. For example, the cloud server 300 may extract respiration information from the biological vibration information and build respiration information frames from it; extract, for each respiration information frame, respiration feature information and query the respiration-feature-to-emotion-degree comparison library to determine a second emotion degree corresponding to the respiration feature information; and use the second emotion degree to correct the target emotion degree, thereby reflecting the user's emotion more accurately. One way such a separation step might be implemented is sketched below.
Please refer to FIG. 3. The display terminal 200-1 of this embodiment includes a first processor 210, a first memory 220, a Bluetooth module 230, a wireless network module 240, and a display 250. The first processor 210 is connected, for example communicatively connected, to the first memory 220, the Bluetooth module 230, the wireless network module 240, and the display 250. The first processor 210 is also connected to the wearable device 100-1 and to the cloud server 300: it communicates with the wirelessly connected wearable device 100-1 through the Bluetooth module 230 and with the wirelessly connected cloud server 300 through the wireless network module 240. The other display terminals 200-2 to 200-n have the same structure as the display terminal 200-1 and are therefore not described again here.
The first memory 220 stores instructions executable by the first processor 210. When executed by the first processor 210, the instructions cause the first processor 210 to: acquire the user's biological vibration information collected by the wearable device and forward it to the cloud server; and control the display to show, in real time, the target emotion degree sent from the cloud server.
The first processor 210 controlling the display to show the target emotion degree in real time specifically includes: acquiring the target emotion degrees of all users on the same live channel and recording the timestamps corresponding to them; providing a video window and a scrolling display window in the live interface shown by the display terminal; and, in the scrolling display window, controlling the display to scroll, according to the timestamps, the text or animated images corresponding to the received target emotion degrees. To show them sensibly, entries are displayed in the scrolling display window 420 in first-received, first-displayed order, as sketched below.
Each of the target emotion degrees of all users on the same live channel acquired by the first processor 210 is produced as follows: the cloud server 300 extracts BCG information from the biological vibration information and builds BCG information frames from it; it performs frequency-domain analysis on each BCG information frame to extract the BCG feature information of the BCG information, and queries the BCG-feature-to-emotion-degree comparison library according to the BCG feature information to obtain each target emotion degree.
Please refer to FIG. 5. The cloud server 300 of this embodiment is connected to a number of display terminals over a network. The cloud server 300 includes a second processor 310 and a second memory 320; the second processor 310 is connected, for example communicatively connected, to the second memory 320 and to the display terminals 200-1, ..., 200-n.
The second memory 320 stores instructions executable by the second processor 310 which, when executed by the second processor 310, cause the second processor to: receive the biological vibration information forwarded by the display terminal; extract BCG information from the biological vibration information and build BCG information frames from it; perform frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information; and query the BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine the target emotion degree and send it to the display terminal, the target emotion degree being the first emotion degree corresponding to the BCG feature information.
To extract accurate user parameters from the biological vibration information and reflect the user's emotion more accurately, the second processor 310 is further configured to: extract respiration information from the biological vibration information and build respiration information frames from it; extract, for each of the respiration information frames, respiration feature information of the respiration information, and query the respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information; and correct the target emotion degree based on the first emotion degree and the second emotion degree. The respiration information allows more BCG feature information about the user to be analyzed, reflecting the user's emotion more accurately. The second processor 310 extracting the BCG information from the biological vibration information specifically includes: separating respiration information out of the biological vibration information; filtering and denoising the biological vibration information from which the respiration information has been separated; and then subtracting the respiration information to obtain the BCG information.
Furthermore, to extract accurate user parameters from the biological vibration information and reflect the user's emotion still more accurately, the second processor is further configured to: extract body motion information from the biological vibration information and build body motion information frames from it; extract, for each of the body motion information frames, body motion feature information of the body motion information, and query the body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information; and use the third emotion degree to correct the target emotion degree.
The second processor 310 is further configured to group users according to the requested live channel. Specifically, users can register through the cloud server 300, and the cloud server 300 then groups the registered users according to the requested live channel. When a user requests live video data through the display terminal 200-1, the second processor 310 provides the live video data to the requesting display terminal 200-1 and sends the target emotion degrees of all users in the same live-channel group to the display terminal 200-1, so that the display terminal 200-1 can create the live interface 400, play the live video data in the video window 410, and scroll all users' excitement about the live content in the scrolling display window 420 in real time. A sketch of such grouping follows.
The user emotion display system of this embodiment links the emotion or excitement degree with the live broadcast platform, scrolling in real time on the platform the true emotional responses of the viewers of the same live channel; the content is stickier and the data more authentic.
Embodiment 2
Please refer to FIG. 6, a flowchart of the user emotion display method provided by an embodiment of the present application. The user emotion display method is applied to a user emotion display system that includes a wearable device, a display terminal, and a cloud server. The method may be executed by the cloud server, the wearable device, or the display terminal; the embodiments of the present application place no restriction on this.
The user emotion display method mainly includes the following steps:
Step 101: acquiring the biological vibration information collected by the wearable device.
The biological vibration information may be the user's biological vibration information collected by the wearable device in the time domain. After collecting the biological vibration information, the wearable device may send it to the display terminal, which forwards it onward; the cloud server can then receive the biological vibration information forwarded by the display terminal and thereby acquire it.
Step 102: extracting BCG information from the biological vibration information, and building BCG information frames from the BCG information.
Extracting the BCG information from the biological vibration information specifically includes: separating respiration information out of the biological vibration information; filtering and denoising the biological vibration information from which the respiration information has been separated; and then subtracting the respiration information to obtain the BCG information.
Step 103: performing frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information.
The BCG feature information may be the frequency-domain and time-domain parameters of HRV: RRI, SDNN, HF, LF, LF/HF, TP, and so on.
Step 104: querying the BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine the target emotion degree, and sending the target emotion degree to the display terminal so that the display terminal displays it in real time, the target emotion degree being the first emotion degree corresponding to the BCG feature information.
After determining the target emotion degree, the cloud server sends it to the display terminal so that it can be displayed there in real time, letting whoever holds the display terminal follow the target user's emotion degree. Causing the display terminal to display the target emotion degree in real time specifically includes: grouping users according to the requested live channel; acquiring the target emotion degrees of all users on the same live channel and recording timestamps; providing a video window and a scrolling display window in the live interface; and, in the scrolling display window, scrolling, according to the timestamps, the text or animated images corresponding to the received target emotion degrees, for example in first-received, first-displayed order. One simple reading of the library query in this step is sketched below.
Please refer to FIG. 7. To extract accurate user parameters from the biological vibration information, after the method steps of FIG. 6 have sent the target emotion degree to the display terminal for real-time display, other feature information may also be extracted from the biological vibration information, for example body motion information and respiration information, to correct the target emotion degree and obtain a more accurate, more truthful description. Specifically, the method includes:
Step 202: extracting respiration information from the biological vibration information, and building respiration information frames from the respiration information;
Step 203: extracting, for each of the respiration information frames, respiration feature information of the respiration information, and querying the respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
Step 204: correcting the target emotion degree based on the first emotion degree and the second emotion degree.
Further, the method also includes:
Step 302: extracting body motion information from the biological vibration information, and building body motion information frames from the body motion information;
Step 303: extracting, for each of the body motion information frames, body motion feature information of the body motion information, and querying the body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
Step 304: using the third emotion degree to correct the target emotion degree. One plausible form of this correction is sketched below.
It should be noted that, in some embodiments, the user emotion display method may also be executed by the wearable device or the display terminal. When executed by the wearable device, step 101 ("acquiring the biological vibration information collected by the wearable device") becomes: collecting the biological vibration information. When executed by the display terminal, step 104 becomes: querying the BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine the target emotion degree, and displaying the target emotion degree in real time, the target emotion degree being the first emotion degree corresponding to the BCG feature information.
The user emotion display method and system of the present application analyze the biological vibration information detected by hardware sensors, derive the emotion and excitement degrees of all users on the same live channel, and link them with the live broadcast platform, so that the true emotional responses of the viewers of the same live channel are scrolled on the platform with authentic data. This embodiment also scrolls the degree of emotional excitement in real time within part of the screen, making the data authentic and the interaction stronger. The live broadcast platform can analyze the collected user parameters reflecting the users' true emotions to guide the production of live content so that it becomes more popular, or to classify programs, refining the platform's live content from the perspective of real user experience.
Embodiment 3
Please refer to FIG. 8, a schematic diagram of the user emotion display apparatus provided by an embodiment of the present application. The user emotion display apparatus may be deployed in a cloud server.
Referring to FIG. 8, the user emotion display apparatus 80 includes:
a biological vibration information acquiring unit 801, configured to acquire the biological vibration information collected by a wearable device.
The biological vibration information may be the user's biological vibration information collected by the wearable device in the time domain. After collecting it, the wearable device may send it to the display terminal, which forwards it; the biological vibration information acquiring unit 801 can then receive the forwarded biological vibration information and thereby acquire it.
a BCG information extracting unit 802, configured to extract BCG information from the biological vibration information and build BCG information frames from the BCG information.
The BCG information extracting unit 802 extracting the BCG information specifically includes: separating respiration information out of the biological vibration information; filtering and denoising the biological vibration information from which the respiration information has been separated; and then subtracting the respiration information to obtain the BCG information.
a BCG feature information extracting unit 803, configured to perform frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information.
The BCG feature information may be the frequency-domain and time-domain parameters of HRV: RRI, SDNN, HF, LF, LF/HF, TP, and so on.
a target emotion degree determining unit 804, configured to query the BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine the target emotion degree, and to send the target emotion degree to a display terminal so that the display terminal displays it in real time, the target emotion degree being the first emotion degree corresponding to the BCG feature information.
After the target emotion degree determining unit 804 determines the target emotion degree, it sends it to the display terminal so that the display terminal can show it in real time, letting whoever holds the display terminal follow the target user's emotion degree. The target emotion degree determining unit 804 causing the display terminal to display the target emotion degree in real time specifically includes: grouping users according to the requested live channel; acquiring the target emotion degrees of all users on the same live channel and recording timestamps; providing a video window and a scrolling display window in the live interface; and, in the scrolling display window, scrolling, according to the timestamps, the text or animated images corresponding to the received target emotion degrees, for example in first-received, first-displayed order.
In this embodiment of the present application, the user emotion display apparatus 80 further includes:
a respiration information extracting unit 805, configured to extract respiration information from the biological vibration information and build respiration information frames from the respiration information;
a second emotion degree determining unit 806, configured to extract, for each of the respiration information frames, respiration feature information of the respiration information, and to query the respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
a first correcting unit 807, configured to correct the target emotion degree based on the first emotion degree and the second emotion degree;
a body motion information extracting unit 808, configured to extract body motion information from the biological vibration information and build body motion information frames from the body motion information;
a third emotion degree determining unit 809, configured to extract, for each of the body motion information frames, body motion feature information of the body motion information, and to query the body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
a second correcting unit 810, configured to correct the target emotion degree by means of the third emotion degree.
It should be noted that, in this embodiment of the present application, the user emotion display apparatus 80 can execute the user emotion display method provided in Embodiment 2 of the present application and has the functional modules and beneficial effects corresponding to that method. For technical details not described exhaustively in the embodiment of the user emotion display apparatus 80, refer to the user emotion display method provided in Embodiment 2 of the present application.
Embodiment 4
FIG. 9 is a hardware structure diagram of the user emotion display device provided by an embodiment of the present application. As shown in FIG. 9, the user emotion display device 90 includes a processor 901, a memory 902, a communication interface (not shown), and a bus. The processor 901, the memory 902, and the communication interface are connected via the bus and communicate with one another; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code so as to perform the user emotion display method described above.
There may be one or more processors 901 and memories 902; one processor 901 is taken as an example in FIG. 9.
The processor 901, the memory 902, and the communication interface may be connected via a bus or in other ways; connection via a bus is taken as an example in FIG. 9.
The memory 902, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs and non-volatile computer-executable instructions. By running the non-volatile software programs and instructions stored in the memory 902, the processor 901 executes the various functional applications and data processing of the user emotion display device 90, that is, implements the user emotion display method of the method embodiments.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the user emotion display device 90, and the like. In addition, the memory 902 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to the user emotion display device 90 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more instructions are stored in the memory 902 and, when executed by the one or more processors 901, perform the user emotion display method of any of the method embodiments, for example method steps 101 to 104 of FIG. 6 described above.
The user emotion display device 90 can execute the user emotion display method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to that method. For technical details not described exhaustively in this device embodiment, refer to the user emotion display method provided by the embodiments of the present application.
An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by the user emotion display device 90, cause the user emotion display device 90 to perform the user emotion display method described above, for example method steps 101 to 104 of FIG. 6.
An embodiment of the present application provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, perform the user emotion display method described above, for example method steps 101 to 104 of FIG. 6.
In summary, in the user emotion display method, system, and user emotion display device provided by the embodiments of the present application, a body vibration sensor, such as an acceleration sensor, a gyroscope, a piezoresistive sensor, or a bio-electrode, is installed on the wearable device, keeping the wearable device simple in design and convenient to use. After the wearable device acquires the user's biological vibration information in the time domain, the display terminal connected to the wearable device forwards the acquired biological vibration information to the cloud server; the cloud server extracts BCG information from it, extracts BCG feature information from the BCG information, determines by query, based on the BCG feature information, the current true emotion degree of the user wearing the wearable device, and displays, synchronously with the live content, the text or animated images representing that emotion degree, providing the live broadcast platform with genuine viewing-experience feedback and a basis for interaction, and enhancing user stickiness and user experience. The user emotion display method, system, and device of this embodiment link the emotion or excitement degree with the live broadcast platform, making the platform's live interface more entertaining, watchable, and interactive, with a stronger atmosphere. Moreover, based on the real biological vibration information collected by the body vibration sensor and the emotion-degree big data reflecting users' real excitement, users too introverted to express their real thoughts can have their excitement displayed directly; and by collecting, analyzing, and organizing the emotion-degree text or animated images produced by a number of live users, reference information can be generated to optimize the live content, for example to classify programs by live-user demographics or to help the live broadcast platform refine its service content.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above embodiments, a person of ordinary skill in the art can clearly understand that the embodiments can be implemented by software plus a general-purpose hardware platform, and of course also by hardware. A person of ordinary skill in the art can understand that all or part of the flows of the above embodiment methods can be accomplished by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the flows of the embodiments of the above methods.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the concept of the present application, technical features in the above embodiments or in different embodiments may also be combined, steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications can still be made to the technical solutions recorded in the foregoing embodiments, or equivalent substitutions can be made to some of their technical features, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

  1. A user emotion display method, characterized in that the user emotion display method is applied to a user emotion display system, the user emotion display system comprising a wearable device and a display terminal, and the method comprises:
    acquiring the biological vibration information collected by the wearable device;
    extracting ballistocardiogram (BCG) information from the biological vibration information, and building BCG information frames from the BCG information;
    performing frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information;
    querying a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree, and sending the target emotion degree to the display terminal so that the display terminal displays the target emotion degree in real time, the target emotion degree being a first emotion degree corresponding to the BCG feature information.
  2. The method according to claim 1, characterized in that the method further comprises:
    extracting respiration information from the biological vibration information, and building respiration information frames from the respiration information;
    extracting, for each of the respiration information frames, respiration feature information of the respiration information, and querying a respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
    correcting the target emotion degree based on the first emotion degree and the second emotion degree.
  3. The method according to claim 2, characterized in that the method further comprises:
    extracting body motion information from the biological vibration information, and building body motion information frames from the body motion information;
    extracting, for each of the body motion information frames, body motion feature information of the body motion information, and querying a body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
    correcting the target emotion degree by means of the third emotion degree.
  4. The method according to any one of claims 1-3, characterized in that extracting the BCG information from the biological vibration information comprises:
    separating respiration information out of the biological vibration information;
    filtering and denoising the biological vibration information from which the respiration information has been separated, and then subtracting the respiration information to obtain the BCG information.
  5. The method according to claim 4, characterized in that causing the display terminal to display the target emotion degree in real time comprises:
    grouping users according to the requested live channel;
    acquiring the target emotion degrees of all users on the same live channel, and recording the timestamps corresponding to the target emotion degrees;
    providing a video window and a scrolling display window in the live interface shown by the display terminal;
    in the scrolling display window, scrolling, according to the timestamps, the text or animated images corresponding to the received target emotion degrees.
  6. A user emotion display device, characterized by comprising a processor, a memory, a communication interface, and a bus; the processor, the memory, and the communication interface are connected via the bus and communicate with one another; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code so as to perform the method according to any one of claims 1-5.
  7. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-5.
  8. A user emotion display system, characterized by comprising a wearable device, a display terminal, and a cloud server, the wearable device being connected to the display terminal and the display terminal being connected to the cloud server;
    the wearable device is configured to collect the user's biological vibration information;
    the display terminal comprises a first processor and a first memory, the first processor being connected to the first memory, the wearable device, and the cloud server respectively, the first memory storing instructions executable by the first processor which, when executed by the first processor, cause the first processor to: acquire the user's biological vibration information collected by the wearable device, and forward the biological vibration information to the cloud server;
    the cloud server comprises a second processor and a second memory, the second processor being connected to the second memory and the display terminal respectively, the second memory storing instructions executable by the second processor which, when executed by the second processor, cause the second processor to: receive the biological vibration information forwarded by the display terminal; extract BCG information from the biological vibration information, and build BCG information frames from the BCG information; perform frequency-domain analysis on each of the BCG information frames to extract BCG feature information of the BCG information; and query a BCG-feature-to-emotion-degree comparison library according to the BCG feature information to determine a target emotion degree and send the target emotion degree to the display terminal, the target emotion degree being a first emotion degree corresponding to the BCG feature information;
    the first processor is further configured to control the display of the display terminal to show the target emotion degree in real time.
  9. The user emotion display system according to claim 8, characterized in that the second processor is further configured to:
    extract respiration information from the biological vibration information, and build respiration information frames from the respiration information;
    extract, for each of the respiration information frames, respiration feature information of the respiration information, and query a respiration-feature-to-emotion-degree comparison library according to the respiration feature information to determine a second emotion degree corresponding to the respiration feature information;
    correct the target emotion degree based on the first emotion degree and the second emotion degree.
  10. The user emotion display system according to claim 9, characterized in that the second processor is further configured to:
    extract body motion information from the biological vibration information, and build body motion information frames from the body motion information;
    extract, for each of the body motion information frames, body motion feature information of the body motion information, and query a body-motion-feature-to-emotion-degree comparison library according to the body motion feature information to determine a third emotion degree corresponding to the body motion feature information;
    correct the target emotion degree by means of the third emotion degree.
  11. The user emotion display system according to any one of claims 8-10, characterized in that the second processor performing the extraction of the BCG information from the biological vibration information comprises:
    separating respiration information out of the biological vibration information;
    filtering and denoising the biological vibration information from which the respiration information has been separated, and then subtracting the respiration information to obtain the BCG information.
  12. The user emotion display system according to claim 11, characterized in that
    the second processor is further configured to group users according to the requested live channel;
    and the first processor performing the real-time display of the target emotion degree comprises: acquiring the target emotion degrees of all users on the same live channel, and recording the timestamps corresponding to the target emotion degrees; providing a video window and a scrolling display window in the live interface shown by the display terminal; and, in the scrolling display window, controlling the display of the display terminal to scroll, according to the timestamps, the text or animated images corresponding to the received target emotion degrees.
PCT/CN2017/120254 2017-12-29 2017-12-29 User emotion display method and system, and user emotion display device WO2019127523A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/120254 WO2019127523A1 (zh) 2017-12-29 2017-12-29 User emotion display method and system, and user emotion display device
CN201780009005.2A CN108702523B (zh) 2017-12-29 2017-12-29 User emotion display method and system, and user emotion display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/120254 WO2019127523A1 (zh) 2017-12-29 2017-12-29 User emotion display method and system, and user emotion display device

Publications (1)

Publication Number Publication Date
WO2019127523A1 true WO2019127523A1 (zh) 2019-07-04

Family

ID=63844125

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120254 WO2019127523A1 (zh) 2017-12-29 2017-12-29 User emotion display method and system, and user emotion display device

Country Status (2)

Country Link
CN (1) CN108702523B (zh)
WO (1) WO2019127523A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114598896A (zh) * 2022-02-17 2022-06-07 北京达佳互联信息技术有限公司 Network live-streaming method and apparatus, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109480813B (zh) * 2018-11-06 2020-10-20 北京理工大学 Non-contact heart rate detection method based on the BCG principle
CN110677685B (zh) * 2019-09-06 2021-08-31 腾讯科技(深圳)有限公司 Network live-broadcast display method and apparatus
CN112820323B (zh) * 2020-12-29 2023-06-16 平安银行股份有限公司 Method and system for adjusting response queue priority based on customer voice

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090133064A1 (en) * 2007-11-19 2009-05-21 Tetsuo Maruyama Information providing device, information acquisition terminal, broadcast receiving terminal, information providing system, information providing method, and program
CN104905803A (zh) * 2015-07-01 2015-09-16 京东方科技集团股份有限公司 Wearable electronic device and emotion monitoring method therefor
CN106175727A (zh) * 2016-07-25 2016-12-07 广东小天才科技有限公司 Expression pushing method applied to a wearable device, and wearable device
CN107456218A (zh) * 2017-09-05 2017-12-12 清华大学深圳研究生院 Emotion sensing system and wearable device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101799702B1 (ko) * 2016-12-02 2017-12-20 주식회사 라투인 Life-integrated virtual reality business platform converging the Internet of Everything, and operating method therefor
CN107197384B (zh) * 2017-05-27 2019-08-02 北京光年无限科技有限公司 Multimodal interaction method and system for a virtual robot applied to a live video platform


Also Published As

Publication number Publication date
CN108702523B (zh) 2021-04-02
CN108702523A (zh) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019127523A1 (zh) User emotion display method and system, and user emotion display device
US10593167B2 (en) Crowd-based haptics
US8758020B2 (en) Periodic evaluation and telerehabilitation systems and methods
JP6526820B2 Monitoring of motion sickness, and addition of supplementary sounds for suppressing the sickness
US10459972B2 (en) Biometric-music interaction methods and systems
US9467673B2 (en) Method, system, and computer-readable memory for rhythm visualization
US20150082167A1 (en) Intelligent device mode shifting based on activity
US20130337975A1 (en) Merchandizing, socializing, and/or gaming via a personal wellness device and/or a personal wellness platform
WO2020224322A1 (zh) Music file processing method and apparatus, terminal, and storage medium
WO2016092912A1 (ja) Program and information processing system
US20130120114A1 (en) Biofeedback control system and method for human-machine interface
CN108579060B (zh) Exercise system and application method thereof
US20150375106A1 (en) Implementing user motion games
CN109908551A (zh) Bicycle exercise method and apparatus
CN113694343A (zh) Immersive stress-resistance psychological training system and method based on VR technology
JP6150935B1 (ja) Information processing system, information processing method, and information processing program
JP2020505861A (ja) Motion capture, and synchronization of motion with recorded audio/video
CN215875885U (zh) Immersive stress-resistance psychological training system based on VR technology
CN108777171A (zh) Method and apparatus for evaluating sensory state
Cruz et al. Monitoring physiology and behavior using Android in phobias
CN209203292U (zh) User emotion display system
CN108766574A (zh) Method and apparatus for evaluating orientation state
EP4353152A1 (en) Medical image acquisition unit assistance apparatus
CN108764364A (zh) Method and apparatus for evaluating stress response state
CN104905776A (zh) Elderly health detection system applying somatosensory games, and detection method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936832

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936832

Country of ref document: EP

Kind code of ref document: A1