WO2021253217A1 - User state analysis method and related device - Google Patents

User state analysis method and related device

Info

Publication number
WO2021253217A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
state
sub
status
Prior art date
Application number
PCT/CN2020/096321
Other languages
English (en)
Chinese (zh)
Inventor
曾浩军
陈天镜
Original Assignee
曾浩军
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 曾浩军
Priority to PCT/CN2020/096321
Publication of WO2021253217A1

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 — Measuring for diagnostic purposes; Identification of persons

Definitions

  • This application relates to the field of computer technology, and in particular to a user state analysis method and related equipment.
  • The embodiments of the present application provide a user state analysis method and related equipment, which can obtain and analyze complete monitoring information in a timely manner and improve the timeliness and accuracy of the determined user state.
  • The first aspect of the embodiments of the present application provides a user state analysis method, which is applied to a server. The user state analysis method includes:
  • acquiring user identity information and monitoring information sent by a terminal device, where the monitoring information includes at least two of the following: image information, voice information, and user physiological information;
  • determining a state recognition parameter according to the user identity information;
  • determining a first sub-user state corresponding to the image information according to the state recognition parameter, and/or determining a second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determining a third sub-user state corresponding to the user physiological information according to the state recognition parameter; and
  • determining the user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state.
  • In some embodiments, the image information includes at least one image sequence and time information corresponding to the image sequence, and the state recognition parameter includes an image recognition model. Determining the first sub-user state corresponding to the image information according to the state recognition parameter includes:
  • inputting the at least one image sequence and the time information corresponding to the image sequence into the image recognition model to obtain the first sub-user state output by the image recognition model.
  • In some embodiments, the state recognition parameter includes a text recognition model and an intonation recognition model, and determining the second sub-user state corresponding to the voice information according to the state recognition parameter includes:
  • converting the voice information into text data and inputting the text data into the text recognition model to obtain a text recognition result; extracting intonation features of the voice information and inputting them into the intonation recognition model to obtain an intonation recognition result; and
  • determining the second sub-user state according to the text recognition result and the intonation recognition result.
  • In some embodiments, the physiological information includes at least one physiological information sequence and time information corresponding to the physiological information sequence, and the state recognition parameter includes a physiological information recognition model. Determining the third sub-user state corresponding to the user physiological information according to the state recognition parameter includes:
  • inputting the at least one physiological information sequence and the time information corresponding to the physiological information sequence into the physiological information recognition model to obtain the third sub-user state output by the physiological information recognition model.
  • In some embodiments, after the user state is determined according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state, the user state analysis method further includes:
  • acquiring query information sent by the terminal device, and sending the monitoring information and/or the user state corresponding to the query information to the terminal device.
  • The second aspect of the embodiments of the present application provides a user state analysis method, which is applied to a terminal device. The user state analysis method includes:
  • acquiring user identity information and monitoring information, where the monitoring information includes at least two of the following: image information, voice information, and user physiological information; and sending the user identity information and the monitoring information to the server.
  • In some embodiments, sending the user identity information and the monitoring information to the server includes:
  • converting the user identity information and the monitoring information into SPI data packets and sending the SPI data packets to the server.
  • The third aspect of the embodiments of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and running on the processor. When the processor executes the computer program, the method described in the first aspect above is implemented.
  • The fourth aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and running on the processor. When the processor executes the computer program, the method described in the second aspect above is implemented.
  • The fifth aspect of the embodiments of the present application provides a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, the method described in the first aspect above or the method described in the second aspect above is implemented.
  • The sixth aspect of the embodiments of the present application provides a computer program product that, when run on a terminal device, causes the terminal device to execute the method described in the first aspect above or the method described in the second aspect above.
  • The embodiments of the present application have the beneficial effect that the monitoring information obtained by the server from the terminal device includes at least two of image information, voice information, and user physiological information, so the monitoring information used to determine the user state is more complete and the accuracy of the determined user state is therefore improved.
  • In addition, since users generally carry terminal devices with them, the server can obtain the required monitoring information and user identity information from the terminal devices in a timely manner, thereby ensuring the timeliness of the determined user state.
  • FIG. 1 is a schematic diagram of a user state analysis system provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the implementation process of a user state analysis method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the implementation process of a user state analysis method provided by another embodiment of the present application;
  • FIG. 4 is a schematic diagram of the implementation process of a user state analysis method provided by another embodiment of the present application;
  • FIG. 5 is a schematic diagram of a server provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a user state analysis system provided by an embodiment of the present application.
  • The user state analysis system includes a server 100 and a terminal device 200.
  • The terminal device 200 is used to obtain image information, voice information, and physiological information, and to send at least two of the image information, voice information, and physiological information, together with user identity information, to the server 100.
  • The monitoring information may be collected by a monitoring device in communication with the terminal device 200 and then sent to the terminal device 200, or it may be collected by the terminal device 200 itself.
  • The image information can be collected by a camera on the terminal device 200,
  • the voice information can be collected by a microphone on the terminal device 200, and
  • the physiological information includes heart rate information, blood oxygen information, and muscle state information.
  • The heart rate information and blood oxygen information can be collected by a heart rate sensor on the terminal device 200, and the muscle state information can be collected by a body surface sensor on the terminal device 200.
  • The server 100 determines the state recognition parameter according to the user identity information, determines the first sub-user state corresponding to the image information according to the state recognition parameter, and/or determines the second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determines the third sub-user state corresponding to the user physiological information according to the state recognition parameter, and then determines the user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state.
  • Because the monitoring information obtained by the server from the terminal device includes at least two of image information, voice information, and user physiological information, the monitoring information subsequently used together with the user identity information to determine the user state is more complete, thereby improving the accuracy of the determined user state.
  • Since users generally carry terminal devices with them, the server can obtain the required monitoring information and user identity information from the terminal devices in a timely manner, thereby ensuring the timeliness of the determined user state.
  • FIG. 2 is a flowchart of the user state analysis method provided by an embodiment of the present application.
  • The execution body of the method is a server, and the method includes:
  • S101: Acquire user identity information and monitoring information sent by a terminal device, where the monitoring information includes at least two of the following: image information, voice information, and user physiological information.
  • The user identity information can be the user's fingerprint, face, account number, or other information.
  • After the user completes registration on the server through the terminal device, the terminal device sends the user identity information entered by the user to the server to log in to the server, and sends the collected monitoring information to the server.
  • The image information includes pictures and videos,
  • the voice information is the voice around the terminal device collected synchronously with the image information, and
  • the user physiological information includes heart rate information, blood oxygen information, and muscle state information.
  • S102: Determine the state recognition parameter according to the user identity information.
  • The server pre-stores the correspondence between user identity information and state recognition parameters; when the server obtains the user identity information, it determines the corresponding state recognition parameter according to this correspondence.
  • For example, the user identity information is associated with an occupation type: the server determines the occupation type according to the user identity information, and then determines the state recognition parameter corresponding to that occupation type.
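  • As a minimal illustration of this lookup, the sketch below stores a mapping from user identity to occupation type and from occupation type to state recognition parameters; all names and values are hypothetical, not taken from the patent:
```python
# Hypothetical identity -> occupation -> state-recognition-parameter lookup.
OCCUPATION_BY_USER = {
    "user_001": "driver",
    "user_002": "patient",
}

# Each occupation type has its own state recognition parameters,
# e.g. model identifiers or preset thresholds.
PARAMS_BY_OCCUPATION = {
    "driver": {"image_model": "fatigue_cnn_v1", "voice_model": "tone_rnn_v1"},
    "patient": {"image_model": "pain_cnn_v2", "physio_model": "hr_lstm_v1"},
}

def get_state_recognition_params(user_id: str) -> dict:
    """Return the state recognition parameters stored for a registered user."""
    occupation = OCCUPATION_BY_USER[user_id]
    return PARAMS_BY_OCCUPATION[occupation]
```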
  • In some embodiments, the state recognition parameters include a state recognition model corresponding to the user identity information.
  • The state recognition model is obtained by training a classification model with a machine learning algorithm, using the user's image information, voice information, and user physiological information, together with the corresponding user states, as training samples. Inputting the user's image information, voice information, and user physiological information into the state recognition model yields the user state output by the state recognition model.
  • The user state includes at least one of the user's first sub-user state, second sub-user state, and third sub-user state.
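  • One plausible reading of this training setup is sketched below with scikit-learn; the library, the model family (a random forest), and the feature layout are assumptions for illustration, since the patent does not name them:
```python
# Illustrative training of a single state recognition classifier on
# concatenated image, voice, and physiological feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training samples: image + voice + physiological features
# flattened into one vector; labels are observed user states (0, 1, 2).
X = np.random.rand(500, 64 + 32 + 16)
y = np.random.randint(0, 3, size=500)

state_model = RandomForestClassifier(n_estimators=100).fit(X, y)

def recognize_state(image_f, voice_f, physio_f):
    """Feed one user's combined features to the trained state model."""
    sample = np.concatenate([image_f, voice_f, physio_f]).reshape(1, -1)
    return state_model.predict(sample)[0]
```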
  • In other embodiments, the state recognition parameters include an image recognition model, a voice recognition model, and a physiological information recognition model corresponding to the user identity information. The image recognition model is used to output the first sub-user state according to the user's image information,
  • the voice recognition model is used to output the second sub-user state according to the user's voice information, and
  • the physiological information recognition model is used to output the third sub-user state according to the user's physiological information.
  • In still other embodiments, the state recognition parameters include state parameters corresponding to the user identity information.
  • The state parameters include image parameters corresponding to a preset first state, voice parameters corresponding to a preset second state, and physiological parameters corresponding to a preset third state.
  • The server is used to determine the user's first sub-user state according to the image information sent by the terminal device and the image parameters corresponding to the preset first state, to determine the user's second sub-user state according to the voice information sent by the terminal device and the voice parameters corresponding to the preset second state, and to determine the user's third sub-user state according to the user physiological information sent by the terminal device and the physiological parameters corresponding to the preset third state.
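  • A minimal sketch of this parameter-comparison variant for the physiological branch follows; the concrete heart-rate bounds are hypothetical, standing in for the preset physiological parameters:
```python
# Hypothetical preset physiological parameters for the preset third state.
PRESET_THIRD_STATE = {"heart_rate_min": 55, "heart_rate_max": 100}

def physio_state_from_presets(heart_rates: list[float]) -> str:
    """Compare a heart-rate series against the preset physiological parameters."""
    lo = PRESET_THIRD_STATE["heart_rate_min"]
    hi = PRESET_THIRD_STATE["heart_rate_max"]
    stable = all(lo <= hr <= hi for hr in heart_rates)
    return "heart rate stable" if stable else "heart rate unstable"
```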
  • S103: Determine the first sub-user state corresponding to the image information according to the state recognition parameter, and/or determine the second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determine the third sub-user state corresponding to the user physiological information according to the state recognition parameter.
  • In some embodiments, the state recognition parameters include an image recognition model, a speech recognition model, and a physiological information recognition model corresponding to the user identity information.
  • If the server obtains the image information sent by the terminal device, it divides the image information into at least one image sequence; each image sequence is one frame of image or one video segment, and each image sequence corresponds to a piece of time information, namely the collection moment of that image sequence. After obtaining the image sequences and their corresponding time information, the server inputs the at least one image sequence and the corresponding time information into the image recognition model to obtain the first sub-user state output by the image recognition model.
  • The image information is generally the user's face image information, and
  • the first sub-user state may be, for example, whether the user is smiling or whether the expression is painful. Since the first sub-user state is obtained by the image recognition model according to the image sequences and the time information corresponding to the image sequences, the first sub-user state reflects the change of the image information over time, that is, the change of the face image over time, thereby improving the accuracy of the obtained first sub-user state.
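  • The segmentation step described above could look like the sketch below; the fixed sequence length and the data layout are assumptions, and the image recognition model itself is left abstract:
```python
# Sketch: divide image information into timestamped image sequences.
from dataclasses import dataclass

@dataclass
class ImageSequence:
    frames: list          # one frame of image or one short video segment
    collected_at: float   # collection moment of the sequence, e.g. epoch seconds

def split_image_info(frames: list, timestamps: list, seg_len: int = 30):
    """Group raw frames into sequences of `seg_len` frames, each keeping the
    collection moment of its first frame, ready for the image recognition model."""
    return [
        ImageSequence(frames[i:i + seg_len], timestamps[i])
        for i in range(0, len(frames), seg_len)
    ]
```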
  • The voice recognition model includes a text recognition model and an intonation recognition model. If the server obtains the voice information sent by the terminal device, it converts the voice information into text data and inputs the text data into the text recognition model to obtain the text recognition result output by the text recognition model. The server also extracts the intonation features of the voice information and inputs them into the intonation recognition model to obtain the intonation recognition result output by the intonation recognition model. After obtaining the text recognition result and the intonation recognition result, the server determines the second sub-user state according to the two results.
  • For example, the text recognition result and the intonation recognition result are both probabilities; the two probabilities are weighted and averaged to obtain a final probability, and the second sub-user state is determined according to the final probability.
  • The second sub-user state may be, for example, whether the speech is intense or whether the emotion is agitated. Since the second sub-user state is obtained based on both the text data and the intonation features, it reflects the user's state more comprehensively.
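  • The fusion of the two recognition results can be written directly from the description above; the equal weights and the decision threshold are assumptions, since the patent only states that the two probabilities are weighted and averaged:
```python
def second_sub_user_state(p_text: float, p_intonation: float,
                          w_text: float = 0.5, w_intonation: float = 0.5,
                          threshold: float = 0.5) -> str:
    """Weighted average of the text and intonation recognition probabilities."""
    p_final = (w_text * p_text + w_intonation * p_intonation) / (w_text + w_intonation)
    return "emotion agitated" if p_final >= threshold else "emotion calm"

# Example: agitated text result combined with a moderately agitated intonation result.
print(second_sub_user_state(0.9, 0.6))   # final probability 0.75 -> "emotion agitated"
```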
  • If the server obtains the user physiological information sent by the terminal device, it divides the user physiological information into at least one physiological information sequence.
  • One physiological information sequence is the user's heart rate, blood oxygen, or muscle state within a preset period of time, and each physiological information sequence corresponds to a piece of time information, namely the collection time of that physiological information sequence. The server inputs the at least one physiological information sequence and the corresponding time information into the physiological information recognition model to obtain the third sub-user state output by the physiological information recognition model.
  • The third sub-user state includes, for example, whether the heart rate is stable or whether the muscles are fatigued. Since the third sub-user state is obtained according to the physiological information sequences and the corresponding time information, it reflects the change of the user's physiological information over time, so the accuracy of the obtained third sub-user state is higher.
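  • A corresponding sketch for the physiological branch, with the window length and the model interface as assumptions:
```python
def third_sub_user_state(readings: list, timestamps: list, physio_model, period: int = 60):
    """Split physiological readings into sequences covering a preset period,
    keep each sequence's collection time, and let the physiological
    information recognition model classify the batch."""
    sequences = [
        (readings[i:i + period], timestamps[i])
        for i in range(0, len(readings), period)
    ]
    # Passing the time information lets the model reflect how the
    # physiological information changes over time.
    return physio_model.predict(sequences)
```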
  • In the embodiments where the state recognition parameters include the preset state parameters, if the server obtains the image information sent by the terminal device, it determines whether the user's first sub-user state is the preset first state according to the image information and the image parameters corresponding to the preset first state. If the server obtains the voice information sent by the terminal device, it determines whether the user's second sub-user state is the preset second state according to the voice information and the voice parameters corresponding to the preset second state. If the server obtains the user physiological information sent by the terminal device, it determines whether the user's third sub-user state is the preset third state according to the user physiological information and the physiological parameters corresponding to the preset third state.
  • In the embodiments where the state recognition parameters include the state recognition model, the server inputs the acquired image information and voice information, or the image information and user physiological information, into the state recognition model to obtain the first sub-user state output by the state recognition model; the server inputs the acquired voice information and image information, or the voice information and user physiological information, into the state recognition model to obtain the second sub-user state output by the state recognition model; and the server inputs the acquired user physiological information and image information, or the user physiological information and voice information, into the state recognition model to obtain the third sub-user state output by the state recognition model.
  • S104: Determine the user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state.
  • At least one of the user's first sub-user state, second sub-user state, and third sub-user state is compared with the preset first sub-user state, the preset second sub-user state, and the preset third sub-user state to determine whether the user state is a preset user state.
  • After determining the user state, the server determines whether the user state is a preset state; if the user state is not the preset state, the server instructs the terminal device to send out corresponding prompt information.
  • The prompt information can be voice, text, or an alarm.
  • For example, in the state recognition parameters determined according to the user identity information, the user state used for recognition refers to the emotional fluctuation state, and the server determines the emotional fluctuation state according to the first sub-user state, the second sub-user state, and the third sub-user state.
  • Specifically, after the server determines the first sub-user state, the second sub-user state, and the third sub-user state, if the first sub-user state is consistent with the preset first state, the second sub-user state is consistent with the preset second state, and the third sub-user state is consistent with the preset third state, it determines that the user's emotional fluctuation state is within the preset range and instructs the terminal device to output a prompt that the monitoring requirements are met.
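  • This "all three sub-states match their presets" rule reduces to a simple comparison; the string-valued states below are illustrative placeholders, not values from the patent:
```python
def mood_within_preset_range(first: str, second: str, third: str,
                             presets: tuple = ("smiling",
                                               "gentle intonation",
                                               "heart rate stable")) -> bool:
    """True only when every sub-state is consistent with its preset state."""
    return (first, second, third) == presets

if mood_within_preset_range("smiling", "gentle intonation", "heart rate stable"):
    print("instruct terminal device: monitoring requirements met")
```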
  • As another example, in the state recognition parameters determined according to the user identity information, the user state used for recognition refers to the smile index, and the server determines the smile index according to the first sub-user state, the second sub-user state, and the third sub-user state. For example, if the server determines that the first sub-user state is smiling, the second sub-user state is gentle intonation, and the third sub-user state is stable heart rate, it determines that the user's smile index is 100 and instructs the terminal device to output a prompt that the monitoring requirements are met.
  • As another example, in the state recognition parameters determined according to the user identity information, the user state used for recognition refers to the fatigue state, and the server determines the fatigue state according to either the first sub-user state or the third sub-user state. For example, if the server determines that the first sub-user state is sleepy, or that the third sub-user state is muscle fatigue, it determines that the user is in a fatigued state and instructs the terminal device to output a prompt reminding the driver to rest.
  • As another example, in the state recognition parameters determined according to the user identity information, the user state used for recognition refers to whether the user is in a painful state, and the server determines whether it is a painful state according to any one of the first sub-user state, the second sub-user state, and the third sub-user state. For example, if the server determines that the first sub-user state is a painful expression, or that the second sub-user state is drastically fluctuating intonation, or that the third sub-user state is an unstable heart rate, it determines that the user's state is a painful state and instructs the terminal device to output a prompt reminding the monitoring personnel to take corresponding measures.
  • The server stores the monitoring information sent by the terminal device and the user state determined based on the monitoring information.
  • If the server obtains query information sent by the terminal device, it sends the monitoring information and/or user state corresponding to the user identity information in the query information to the terminal device.
  • The monitoring information obtained by the server from the terminal device includes at least two of image information, voice information, and user physiological information; that is, the monitoring information subsequently used together with the user identity information to determine the user state is more complete, which improves the accuracy of the determined user state. Because the monitoring information is obtained from the terminal device, and the terminal device is convenient for the user to carry, the server can obtain the required monitoring information and user identity information from the terminal device in a timely manner, thereby ensuring the timeliness of the determined user state.
  • FIG. 3 is a flowchart of a user state analysis method provided by another embodiment of the present application.
  • The execution body of the method is a terminal device, and the method includes:
  • S201: Acquire user identity information and monitoring information, where the monitoring information includes at least two of the following: image information, voice information, and user physiological information.
  • The terminal device acquires the monitoring information sent by a monitoring device, or collects the monitoring information itself.
  • The image information is collected by a camera on the terminal device and includes pictures and videos,
  • the voice information is collected by a microphone on the terminal device, and
  • the user physiological information includes heart rate information, blood oxygen information, and muscle state information.
  • Heart rate information and blood oxygen information can be collected by a heart rate sensor on the terminal device, and
  • muscle state information can be collected by a body surface sensor on the terminal device.
  • S202: Send the user identity information and the monitoring information to the server, to instruct the server to determine the state recognition parameter according to the user identity information, determine the first sub-user state corresponding to the image information according to the state recognition parameter, and/or determine the second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determine the third sub-user state corresponding to the user physiological information according to the state recognition parameter, and determine the user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state.
  • After the terminal device receives the monitoring information, it converts the monitoring information into a Serial Peripheral Interface (SPI) data packet and sends the SPI data packet to the server through a communication network, thereby saving the space occupied by the data and improving the efficiency of data transmission.
  • The communication network may be a network such as Wi-Fi, 4G, or 5G.
  • The terminal device may encrypt the SPI data packet before sending it to the server, and the server decrypts the SPI data packet according to a preset decryption rule to obtain the monitoring information, thereby improving the security of data transmission.
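  • A hedged sketch of this packing-and-encryption step: the JSON framing, the shared-key scheme, and the use of the `cryptography` library's Fernet recipe are assumptions for illustration; the patent itself only specifies SPI data packets and a preset decryption rule:
```python
import json
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()   # in practice, a key shared in advance with the server

def build_encrypted_packet(user_id: str, monitoring: dict) -> bytes:
    """Terminal side: serialize identity + monitoring info and encrypt it."""
    payload = json.dumps({"user": user_id, "monitoring": monitoring}).encode()
    return Fernet(KEY).encrypt(payload)

def decrypt_packet(packet: bytes) -> dict:
    """Server side: decrypt according to the preset rule (the shared key)."""
    return json.loads(Fernet(KEY).decrypt(packet))
```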
  • The terminal device receives the user state sent by the server and sends out corresponding prompt information according to the user state.
  • The monitoring information collected by the terminal device includes at least two of image information, voice information, and user physiological information; that is, the monitoring information subsequently used by the server together with the user identity information to determine the user state is more complete, thus improving the accuracy of the determined user state. And since users generally carry terminal devices with them, the terminal device can provide the monitoring information and user identity information required by the server in a timely manner, thereby ensuring the timeliness of the user state determined by the server.
  • FIG. 4 is a specific flowchart of a user state analysis method provided by an embodiment of the present application, and the method includes:
  • S301: The terminal device obtains user identity information and monitoring information, where the monitoring information includes at least two of the following: image information, voice information, and user physiological information.
  • S302: The terminal device sends the user identity information and the monitoring information to the server.
  • S303: The server determines the state recognition parameter according to the user identity information.
  • S304: The server determines the first sub-user state corresponding to the image information according to the state recognition parameter, and/or determines the second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determines the third sub-user state corresponding to the user physiological information according to the state recognition parameter, and determines the user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state.
  • S305: The server sends the user state to the terminal device.
  • The monitoring information obtained by the server from the terminal device includes at least two of image information, voice information, and user physiological information; that is, the monitoring information subsequently used together with the user identity information to determine the user state is more complete, which improves the accuracy of the determined user state. Because the monitoring information is obtained from the terminal device, and the terminal device is convenient for the user to carry, the server can obtain the required monitoring information and user identity information from the terminal device in a timely manner, thereby ensuring the timeliness of the determined user state.
  • FIG. 5 is a schematic diagram of a server provided by an embodiment of the present application.
  • The server of this embodiment includes a processor 11, a memory 12, and a computer program 13 stored in the memory 12 and running on the processor 11.
  • When the processor 11 executes the computer program 13, the steps in the above embodiments of the user state analysis method are implemented, for example, steps S101 to S104 shown in FIG. 2.
  • The computer program 13 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 12 and executed by the processor 11 to complete this application.
  • The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 13 in the server.
  • FIG. 5 is only an example of a server and does not constitute a limitation on the server; the server may include more or fewer components than those shown in the figure, or combine certain components, or use different components. For example, the server may also include input and output devices, network access devices, buses, and so on.
  • the processor 11 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 12 may be an internal storage unit of the server, such as a hard disk or memory of the server.
  • The memory 12 may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the server.
  • Further, the memory 12 may also include both an internal storage unit of the server and an external storage device.
  • the memory 12 is used to store the computer program and other programs and data required by the server.
  • the memory 12 can also be used to temporarily store data that has been output or will be output.
  • FIG. 6 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device of this embodiment includes a processor 21, a memory 22, a camera 23, a microphone 24, a sensor module 25, and a network interface 26.
  • the processor 21, the memory 22, the camera 23, the microphone 24, the sensor module 25, and the network interface 26 are connected by a communication bus 27.
  • the processor 21 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 22 may be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device.
  • the memory 22 may also be an external storage device of the terminal device, such as a plug-in hard disk equipped on the terminal device, a smart memory card (Smart Media Card, SMC), or a Secure Digital (SD) card, Flash Card, etc. Further, the memory 22 may also include both an internal storage unit of the terminal device and an external storage device.
  • the memory 22 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 22 can also be used to temporarily store data that has been output or will be output.
  • the camera 23 is used to capture still images or videos.
  • the microphone 24 is used to collect sound signals and convert the sound signals into electrical signals.
  • the sensor module 25 includes a heart rate sensor, a body surface sensor, and the like.
  • the network interface 26 may be used to send and receive information, and may include a wired interface and/or a wireless interface, and is generally used to establish a communication connection between the terminal device and the server.
  • FIG. 6 is only an example of a terminal device, and does not constitute a limitation on the terminal device, and may include more or fewer components than shown in the figure, or a combination of certain components, or different components.
  • the disclosed device/terminal device and method may be implemented in other ways.
  • the device/terminal device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application is applicable to the field of computer technology and relates to a user state analysis method and a related device. The user state analysis method includes: obtaining, by a server, user identity information and monitoring information sent by a terminal device, the monitoring information including at least two of the following: image information, voice information, and user physiological information; determining a state recognition parameter according to the user identity information; determining a first sub-user state corresponding to the image information according to the state recognition parameter, and/or determining a second sub-user state corresponding to the voice information according to the state recognition parameter, and/or determining a third sub-user state corresponding to the user physiological information according to the state recognition parameter; and determining a user state according to the first sub-user state, and/or the second sub-user state, and/or the third sub-user state. Complete monitoring information can thus be acquired and analyzed in a timely manner, and the timeliness and accuracy of the obtained user state are improved.
PCT/CN2020/096321 2020-06-16 2020-06-16 User state analysis method and related device WO2021253217A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096321 WO2021253217A1 (fr) 2020-06-16 2020-06-16 User state analysis method and related device

Publications (1)

Publication Number Publication Date
WO2021253217A1 (fr)

Family

ID=79269028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096321 WO2021253217A1 (fr) 2020-06-16 2020-06-16 User state analysis method and related device

Country Status (1)

Country Link
WO (1) WO2021253217A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016170005A1 (fr) * 2015-04-20 2016-10-27 Resmed Sensor Technologies Limited Detection and identification of a human from characteristic signals
CN106650633A (zh) * 2016-11-29 2017-05-10 上海智臻智能网络科技股份有限公司 Driver emotion recognition method and device
CN106682090A (zh) * 2016-11-29 2017-05-17 上海智臻智能网络科技股份有限公司 Active interaction implementation device and method, and intelligent voice interaction equipment
CN109008952A (zh) * 2018-05-08 2018-12-18 深圳智慧林网络科技有限公司 Deep-learning-based monitoring method and related products
CN109077741A (zh) * 2018-08-21 2018-12-25 华南师范大学 Psychological state recognition method and system
WO2020007000A1 (fr) * 2018-07-04 2020-01-09 青岛海尔空调器有限总公司 Vehicle control method, device and system, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20941072

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20941072

Country of ref document: EP

Kind code of ref document: A1