CN111986781B - Psychological treatment device and user terminal based on human-computer interaction - Google Patents

Psychological treatment device and user terminal based on human-computer interaction

Info

Publication number: CN111986781B
Application number: CN202010855162.9A
Authority: CN (China)
Prior art keywords: mixed reality, therapist, information, patient, scene data
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111986781A (en)
Inventors: 聂镭 (Nie Lei), 黄海 (Huang Hai), 邹茂泰 (Zou Maotai), 聂颖 (Nie Ying)
Current and original assignee: Longma Zhixin (Zhuhai Hengqin) Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)

Application filed by Longma Zhixin (Zhuhai Hengqin) Technology Co., Ltd.
Priority to CN202010855162.9A
Publication of CN111986781A
Application granted; publication of CN111986781B

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01: Indexing scheme relating to G06F 3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Computer Hardware Design (AREA)
  • Child & Adolescent Psychology (AREA)
  • Computer Graphics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application is applicable to the technical field of information interaction and provides a psychotherapy method and device, a user terminal, and a storage medium based on human-computer interaction. The psychotherapy method comprises the following steps: acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to a therapist; acquiring comprehensive characteristic information of the patient from an acquisition device, and displaying the comprehensive characteristic information to the therapist; identifying a first operation intention of the therapist according to first human-computer interaction information of the therapist; and generating a first operation instruction corresponding to the first operation intention and sending it to the mixed reality device. In this way, on the one hand, imaginal exposure treatment or real exposure treatment is delivered to the patient through the mixed reality device, giving the patient an immersive experience; on the other hand, the therapist can control the mixed reality device in real time, ensuring the treatment effect.

Description

Psychological treatment device and user terminal based on human-computer interaction
Technical Field
The application belongs to the technical field of information interaction, and particularly relates to a psychotherapy method and device based on human-computer interaction, a user terminal and a storage medium.
Background
With the development of society, psychological problems are increasingly common. Among them, post-traumatic stress disorder (PTSD) is particularly troubling: it is an anxiety disorder that may occur after an individual experiences or witnesses an event that is, or threatens to be, harmful to life or physical integrity. The reaction to such events is usually fear, fright, or helplessness, and a series of abnormal symptoms can follow: acute stress reactions such as a narrowed field of consciousness, restricted attention, inability to process external stimuli, and disorientation; symptoms of acute stress disorder; and typical PTSD symptoms such as repeated re-experiencing of the traumatic event, persistently heightened alertness, avoidance of situations similar or related to the stimulus, impaired social adaptation, and mental breakdown. Patients with post-traumatic stress disorder also have a high probability of suffering from other mental disorders and are more prone to health problems, and the disorder can lead to poor quality of life and significant economic losses. Because PTSD causes not only psychological pain to patients but also public health and economic burdens, therapists have developed many treatment methods, such as stress inoculation training (SIT), cognitive processing therapy (CPT), eye movement desensitization and reprocessing (EMDR), and prolonged exposure therapy (PE). Of these, prolonged exposure therapy (PE) has proven to be one of the most effective treatments for PTSD.
PE is divided into three stages. The main tasks of the first stage are to collect data, establish the therapeutic relationship, make a preliminary evaluation, introduce the principles of PTSD and PE, construct a fear hierarchy, and teach relaxation training. The second stage consists of 45-60 minutes of imaginal exposure treatment and 30 minutes of real (in vivo) exposure treatment, in which the patient is guided to learn to relax within the trauma scene. The third stage is the end-of-treatment stage: the therapist and patient summarize and discuss how to face recurring scenarios in the future, and home training is arranged so that the patient can perform imaginal exposure training alone to improve the ability to face the trauma experience and trauma scenario. After the three stages are completed, by imagining the situation of the traumatic event or being directly exposed to the traumatic memory, the individual can regain a sense of safety, reduce cognitive bias, and reduce avoidance behaviors, thereby relieving the fear of the traumatic memory and giving new meaning to stimuli similar to it.
However, in prior-art psychological treatment, patients can only be guided indirectly into imaginal exposure by the therapist's words, or into real exposure by means of simple scenes. Such approaches generalize poorly and are not effective for some patients with severe trauma.
Disclosure of Invention
The embodiments of the application provide a psychotherapy method and device, a user terminal, and a storage medium based on human-computer interaction, which can solve the problem of poor universality in prior-art psychotherapy.
In a first aspect, the present application provides a psychotherapy method based on human-computer interaction, including:
acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to a therapist;
the mixed reality scene data is generated by the server according to the spatial information of the real environment and the voice information of the patient describing the trauma event and is simultaneously sent to the user terminal and the mixed reality equipment;
acquiring comprehensive characteristic information of the patient from an acquisition device, and displaying the comprehensive characteristic information to the therapist;
the comprehensive characteristic information is obtained by the acquisition device from the patient when the mixed reality equipment plays the 3D picture corresponding to the mixed reality scene data to the patient;
identifying a first operation intention of the therapist according to the first human-computer interaction information of the therapist;
generating a first operation instruction corresponding to the first operation intention, and sending the first operation instruction to mixed reality equipment, wherein the first operation instruction is used for instructing the mixed reality equipment to execute a playing operation corresponding to the first operation instruction.
In one possible implementation manner of the first aspect, acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to the therapist includes:
acquiring mixed reality scene data;
generating a corresponding 2D picture according to the mixed reality scene data;
identifying a timestamp corresponding to the 2D picture, and forming a time line according to the timestamp;
determining a key timestamp in the time line according to the change value of the comprehensive characteristic information;
a 2D picture carrying the timeline and key timestamps in the timeline is displayed to the therapist.
In a possible implementation manner of the first aspect, before acquiring the mixed reality scene data of the patient from the server and displaying the 2D picture corresponding to the mixed reality scene data to the therapist, the method further includes:
identifying a second operation intention of the therapist according to second human-computer interaction information of the therapist;
and if the second operation intention is a wake-up operation intention, generating a wake-up instruction corresponding to the wake-up operation intention, and sending the wake-up instruction to a server, wherein the wake-up instruction is used for instructing the server to generate mixed reality scene data according to space information of a real environment and voice information of a patient describing a wound event, and simultaneously sending the mixed reality scene data to the user terminal and mixed reality equipment.
In a possible implementation manner of the first aspect, the first human-computer interaction information is voice information, and identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
acquiring voice information of the therapist;
determining the first operation intention of the therapist according to the voice information.
In a possible implementation manner of the first aspect, the first human-computer interaction information is action information;
identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
acquiring action information of the therapist;
determining the first operation intention of the therapist according to the action information.
In a possible implementation manner of the first aspect, the first human-computer interaction information is voice information and action information;
identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
acquiring the voice information and action information of the therapist;
determining the first operation intention of the therapist according to the voice information and the action information of the therapist.
In one possible implementation manner of the first aspect, determining the first operation intention of the therapist according to the voice information and the action information of the therapist includes:
recognizing a spectrogram of the voice information;
inputting the spectrogram into the input layer of a pre-trained convolutional neural network, inputting the action information into the fully connected layer of the pre-trained convolutional neural network, and outputting the first operation intention from the output layer of the pre-trained convolutional neural network.
In a second aspect, the present application provides a psychotherapy device based on human-computer interaction, including:
the first acquisition module is used for acquiring mixed reality scene data of a patient from a server and displaying a 2D picture corresponding to the mixed reality scene data to a therapist;
the mixed reality scene data is generated by the server according to the spatial information of the real environment and the voice information of the patient describing the trauma event and is simultaneously sent to the user terminal and the mixed reality equipment;
the second acquisition module is used for acquiring comprehensive characteristic information of the patient from an acquisition device and displaying the comprehensive characteristic information to the therapist;
the comprehensive characteristic information is obtained by the collecting device from the patient while the mixed reality equipment plays the 3D picture corresponding to the mixed reality scene data to the patient;
the identification module is used for identifying a first operation intention of the therapist according to the first human-computer interaction information of the therapist;
the generating module is used for generating a first operation instruction corresponding to the first operation intention and sending the first operation instruction to mixed reality equipment, wherein the first operation instruction is used for instructing the mixed reality equipment to execute playing operation corresponding to the first operation instruction.
In a possible implementation manner of the second aspect, the first obtaining module includes:
the acquiring unit is used for acquiring mixed reality scene data;
the generating unit is used for generating a corresponding 2D picture according to the mixed reality scene data;
the identification unit is used for identifying a timestamp corresponding to the 2D picture and forming a time line according to the timestamp;
the determining unit is used for determining a key timestamp in the timeline according to the change value of the comprehensive characteristic information;
and the display unit is used for displaying the 2D picture carrying the time line and the key time stamp in the time line to a therapist.
In a possible implementation manner of the second aspect, the apparatus further includes:
the second identification module is used for identifying a second operation intention of the therapist according to second human-computer interaction information of the therapist;
and the awakening module is used for generating an awakening instruction corresponding to the awakening operation intention and sending the awakening instruction to the server if the second operation intention is the awakening operation intention, wherein the awakening instruction is used for indicating the server to generate mixed reality scene data according to space information of a real environment and voice information of a patient describing a wound event and simultaneously sending the mixed reality scene data to the user terminal and the mixed reality equipment.
In a possible implementation manner of the second aspect, the first human-machine interaction information is speech information, and the first recognition module includes:
the first acquisition submodule is used for acquiring the voice information of the therapist;
a first determination submodule for determining a first operational intention of the therapist based on the voice information.
In a possible implementation manner of the second aspect, the first human-machine interaction information is motion information, and the first identification module further includes:
the second acquisition submodule is used for acquiring the action information of the therapist;
a second determination submodule for determining the first operational intention of the therapist based on the action information.
In a possible implementation manner of the second aspect, the first human-machine interaction information is speech information and motion information, and the first recognition module includes:
the third acquisition submodule is used for acquiring the voice information and the action information of the therapist;
and the third determining submodule is used for determining the first operation intention of the therapist according to the voice information and the action information of the therapist.
In one possible implementation manner of the second aspect, the third determining sub-module includes:
the recognition subunit is used for recognizing the spectrogram of the voice information;
and the output subunit is used for inputting the spectrogram into the input layer of a pre-trained convolutional neural network, inputting the action information into the fully connected layer of the pre-trained convolutional neural network, and outputting the first operation intention from the output layer of the pre-trained convolutional neural network.
In a third aspect, an embodiment of the present application provides a user terminal, including a memory storing a computer program and a processor, where the processor implements the method of any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program, when executed by a processor, implements the method of any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the advantages that:
in the embodiments of the application, the user terminal detects the therapist's human-computer interaction information in real time and sends operation instructions to the mixed reality device, and the mixed reality device then delivers imaginal exposure treatment or real exposure treatment to the patient.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a psychotherapy system based on human-computer interaction according to an embodiment of the present application;
FIG. 2 is a flow chart of a psychotherapy method based on human-computer interaction according to an embodiment of the present application;
fig. 3 is a detailed flowchart of step S201 in fig. 2 of a psychotherapy method based on human-computer interaction according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a psychotherapy method based on human-computer interaction according to an embodiment of the present application before step S201 in fig. 2;
fig. 5 is a flowchart illustrating a specific process of step S203 in fig. 2 of a psychotherapy method based on human-computer interaction according to an embodiment of the present application;
fig. 6 is another specific flowchart of step S203 in fig. 2 of a psychotherapy method based on human-computer interaction according to an embodiment of the present application;
fig. 7 is yet another specific flowchart of step S203 in fig. 2 of a psychotherapy method based on human-computer interaction according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a human-computer interaction-based psychotherapy device provided by an embodiment of the present application;
fig. 9 is a block diagram of a user terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Referring to fig. 1, which is a schematic structural diagram of a psychotherapy system 1 provided in this embodiment of the present application, the system includes a user terminal 10, a server 20 connected to the user terminal, and a mixed reality device 30 connected to the user terminal. The user terminal may be a mobile terminal such as a mobile phone or a notebook computer; the server may be a back-end server or a cloud server, for example a GPU server; and the mixed reality device may be a Microsoft HoloLens 2, or a mixed reality device of another model, for example an Action One. That is, the model of the mixed reality device is not limited in this embodiment of the application.
The user terminal is used for identifying a second operation intention of the therapist according to the second human-computer interaction information of the therapist; and if the second operation intention is the awakening operation intention, generating an awakening instruction corresponding to the awakening operation intention, and sending the awakening instruction to the server, wherein the awakening instruction is used for instructing the server to generate mixed reality scene data according to the spatial information of the real environment and the voice information of the patient describing the trauma event, and simultaneously sending the mixed reality scene data to the user terminal and the mixed reality equipment. Wherein the mixed reality scene data is 3D image data.
The server is used for generating mixed reality scene data according to the spatial information of the real environment and the voice information of the patient describing the trauma event after acquiring the awakening instruction sent by the user terminal, and simultaneously sending the mixed reality scene data to the user terminal and the mixed reality equipment.
In the embodiment of the application, the server and the mixed reality device are normally in a sleep state and begin to work only after receiving the wake-up instruction from the user terminal, rather than remaining in a working state at all times, thereby saving computer network resources.
The mixed reality equipment is used for acquiring mixed reality scene data of the patient and playing a mixed reality scene picture corresponding to the mixed reality scene data to the patient.
Preferably, the psychotherapy system further comprises a collecting device (not shown in fig. 1) connected with the user terminal, wherein the collecting device comprises a depth camera, a wearable collecting device, a microphone, and the like.
The collecting device is used for collecting the comprehensive characteristics of the patient: the depth camera collects the patient's expression characteristics, action characteristics, and the like; the wearable collecting device collects the patient's physiological characteristics, such as pulse, respiration, perspiration, heart rate, skin conductance, and body temperature; and the microphone collects the patient's voice information.
The user terminal is used for acquiring mixed reality scene data of the patient from the server, displaying a 2D picture to a therapist according to the mixed reality scene data, acquiring comprehensive characteristic information of the patient from the acquisition device, and displaying the comprehensive characteristic information to the therapist; identifying a first operation intention of a therapist according to the man-machine interaction information of the therapist; and generating a first operation instruction corresponding to the first operation intention, and sending the first operation instruction to the mixed reality device, wherein the first operation instruction is used for instructing the mixed reality device to execute the playing operation corresponding to the first operation instruction.
In the embodiment of the application, while the mixed reality device plays the 3D picture corresponding to the mixed reality scene data, the therapist can view the corresponding 2D picture and the patient's comprehensive characteristics in real time through the user terminal, and can control the playing operation of the mixed reality device in real time through the user terminal, ensuring the treatment effect.
Referring to fig. 2, a flowchart of a psychotherapy method based on human-computer interaction according to an embodiment of the present application is applied to the user terminal according to the above embodiment, where the method includes the following steps:
step S201, acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to the therapist.
The mixed reality scene data is generated by the server according to the spatial information of the real environment and the voice information of the patient describing the trauma event and is simultaneously sent to the user terminal and the mixed reality equipment. It should be noted that the mixed reality scene data is specifically three-dimensional image data, such as RGB three-dimensional image data and YUV three-dimensional image data.
Specifically, referring to fig. 3, which is a specific flowchart of step S201 in fig. 2 of a psychotherapy method based on human-computer interaction provided in an embodiment of the present application, acquiring the mixed reality scene data of the patient from the server and displaying the 2D picture corresponding to the mixed reality scene data to the therapist includes:
and S301, acquiring mixed reality scene data.
And S302, generating a corresponding 2D picture according to the mixed reality scene data.
For example, the 2D picture is obtained by converting the three-dimensional image data, i.e., the mixed reality scene data, into two-dimensional image data. A differentiable rendering framework such as DIB-R can be used for this conversion, analytically computing the gradients of all pixels while rendering the three-dimensional image data into a two-dimensional image.
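By way of example and not limitation, the conversion can be sketched as follows under a simple pinhole-camera assumption with a z-buffer; DIB-R itself is a differentiable renderer whose actual interface is not reproduced here, and all camera parameters below are illustrative.

```python
import numpy as np

def project_to_2d(points_3d, colors, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                  width=640, height=480):
    """Project colored 3D scene points onto a 2D image plane.

    A pinhole-camera sketch only; the patent names the DIB-R
    differentiable renderer for this step, which is not reproduced here.
    """
    image = np.zeros((height, width, 3), dtype=np.uint8)
    depth = np.full((height, width), np.inf)
    for (x, y, z), color in zip(points_3d, colors):
        if z <= 0:                     # point behind the camera
            continue
        u = int(fx * x / z + cx)       # perspective division
        v = int(fy * y / z + cy)
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z            # z-buffer: keep the nearest point
            image[v, u] = color
    return image
```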
And S303, identifying a time stamp corresponding to the 2D picture, and forming a time line according to the time stamp.
A timestamp is a character sequence that uniquely identifies a moment in time. It can be understood that each moment in the application corresponds to a timestamp, a timeline can be formed from these timestamps and subsequently displayed, and the therapist can control the 3D picture played to the patient according to the displayed timeline.
And step S304, determining a key timestamp in the time line according to the change value of the comprehensive characteristic information.
In specific applications, the user terminal can acquire the patient's expression characteristics, action characteristics, and the like from the depth camera, and acquire the patient's physiological characteristics, such as pulse, respiration, perspiration, heart rate, skin conductance, and body temperature, from the wearable collecting device. A key timestamp is a timestamp in the timeline that marks the occurrence of a key event.
It is understood that when the change in the comprehensive characteristic information is greater than a change threshold, the corresponding timestamp is a key timestamp.
By way of example and not limitation, the expression characteristics, action characteristics, physiological characteristics, and so on are classified by a classifier, such as a logistic regression classifier, naive Bayes, or a support vector machine, into emotion attributes: positive, negative, or neutral emotion, represented by the one-hot codes 100, 010, and 001, respectively. If, for example, the attribute of the expression characteristics has consistently been positive emotion (100) and at a certain moment becomes negative emotion (010), that moment is taken as a key timestamp.
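By way of example and not limitation, the key-timestamp logic can be sketched as follows with a logistic regression classifier; the 16-dimensional fused feature vectors and the synthetic training labels are placeholders for real expression, action, and physiological features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: rows are fused feature vectors
# (expression + action + physiological); labels 0/1/2 stand for the
# positive/negative/neutral classes encoded as 100/010/001 in the text.
X_train = np.random.rand(300, 16)
y_train = np.random.randint(0, 3, size=300)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def key_timestamps(feature_stream, timestamps):
    """Return the timestamps at which the predicted emotion class changes."""
    labels = clf.predict(feature_stream)
    return [timestamps[i] for i in range(1, len(labels))
            if labels[i] != labels[i - 1]]   # e.g. positive -> negative

# Usage with a hypothetical 1 Hz feature stream, timestamps in seconds:
print(key_timestamps(np.random.rand(60, 16), list(range(60))))
```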
And S305, displaying a 2D picture carrying the time line and the key time stamp in the time line to the therapist.
In an alternative embodiment, referring to fig. 4, which is a flowchart of the steps performed before step S201 in fig. 2 of a psychotherapy method based on human-computer interaction according to an embodiment of the present application, before acquiring the mixed reality scene data of the patient from the server and displaying the 2D picture corresponding to the mixed reality scene data to the therapist, the method further includes:
and S401, identifying a second operation intention of the therapist according to the second human-computer interaction information of the therapist.
The second operation intention refers to an intention that characterizes an operation of the therapist, for example waking up the server to work.
And S402, if the second operation intention is a wake-up operation intention, generating a wake-up instruction corresponding to the wake-up operation intention, and sending the wake-up instruction to the server, wherein the wake-up instruction is used for instructing the server to generate mixed reality scene data according to the spatial information of the real environment and the voice information of the patient describing the trauma event, and simultaneously sending the mixed reality scene data to the user terminal and the mixed reality device.
The wake-up operation intention means that the therapist's voice information contains a wake-up word.
Specifically, the acquired voice information of the therapist is preprocessed, for example by pre-emphasis, framing, and windowing; Mel-frequency cepstral features are then extracted and input into a preset voice feature model, for example a model obtained by prior training of a recurrent neural network, to obtain the therapist's text information. The text is word-segmented to obtain the segments composing the text information; the vector corresponding to each segment is then looked up in a preset word vector database, for example one obtained by training a word2vec, fastText, or GloVe model; a similarity value, for example the cosine similarity, is computed between each segment's vector and the vector corresponding to the wake-up word; and if the similarity value is greater than a similarity threshold, the therapist's speech contains the wake-up word and a wake-up intention exists.
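By way of example and not limitation, the word-vector matching step can be sketched as follows; the same routine applies to the playback-control keywords of step S203 below. The embedding lookup table, the keyword vector, and the 0.85 threshold are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contains_keyword(tokens, word_vectors, keyword_vec, threshold=0.85):
    """Check whether any segmented word is close enough to a keyword.

    tokens:       word-segmented transcript of the therapist's speech
    word_vectors: pretrained embedding lookup (e.g. trained with word2vec)
    keyword_vec:  embedding of the wake-up (or playback-control) word
    All names and the threshold value are illustrative assumptions.
    """
    return any(tok in word_vectors and
               cosine(word_vectors[tok], keyword_vec) > threshold
               for tok in tokens)
```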
It is understood that the server is normally in a sleep state; when the user terminal recognizes that the therapist's operation intention is a wake-up intention, a wake-up instruction is generated so that the server is woken up and performs the operation of generating the mixed reality scene data.
And S202, acquiring comprehensive characteristic information of the patient from the acquisition device, and displaying the comprehensive characteristic information to a therapist.
The comprehensive characteristic information is acquired from the patient by the acquisition device while the mixed reality device plays the 3D picture corresponding to the mixed reality scene data to the patient.
And step S203, identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist.
The first human-computer interaction information comprises voice information.
Illustratively, referring to fig. 5, which is a specific flowchart of step S203 in fig. 2 of a psychotherapy method based on human-computer interaction provided by an embodiment of the present application, identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
step S501, acquiring voice information of a therapist.
Step S502, determining the first operation intention of the therapist according to the voice information.
The first operation intention refers to an intention that characterizes an operation of the therapist, for example adjusting the progress or speed of the pictures that the mixed reality device plays to the patient. The first operation intention includes a forward intention, a backward intention, a fast-play intention, and a slow-play intention. It can be understood that while the therapist views the mixed reality picture, that is, the picture the patient is also viewing, together with the patient's comprehensive characteristics, the therapist can send instructions through the user terminal to control the playing process of the mixed reality device. The forward intention means that the therapist's voice information contains a forward word; the backward intention, a backward word; the fast-play intention, a fast-play word; and the slow-play intention, a slow-play word.
Specifically, the acquired voice information of the therapist is preprocessed, for example by pre-emphasis, framing, and windowing; Mel-frequency cepstral features are then extracted and input into a preset voice feature model, for example a model obtained by prior training of a recurrent neural network, to obtain the therapist's text information. The text information is word-segmented; the vector corresponding to each segment is looked up in a preset word vector database, for example one obtained by training a word2vec, fastText, or GloVe model; a similarity value, for example the cosine similarity, is computed between each segment's vector and the vector corresponding to any one or more of the forward, backward, fast-play, or slow-play words; and if the similarity value is greater than a similarity threshold, the therapist's speech contains that keyword and the corresponding intention exists.
The first human-computer interaction information is action information.
Illustratively, referring to fig. 6, which is another specific flowchart of step S203 in fig. 2 of a psychotherapy method based on human-computer interaction provided in an embodiment of the present application, identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
step S601, acquiring the action information of the therapist.
Step S602, determining a first operation intention of the therapist according to the action information.
The action information refers to video representing the therapist's actions and includes a hand swipe-up, swipe-down, swipe-left, and swipe-right action. The first operation intention includes a forward intention, a backward intention, a fast-play intention, and a slow-play intention: the swipe-up action corresponds to the fast-play intention, the swipe-down action to the slow-play intention, the swipe-left action to the forward intention, and the swipe-right action to the backward intention.
It is understood that the operation intention of the therapist may be determined according to the action of the therapist.
Specifically, videos representing the therapist's actions are acquired; features are extracted by techniques such as ROI-based representation, network segmentation, or spatio-temporal volume representation; the features are encoded with a bag of visual words; and finally the feature codes are classified to obtain the action corresponding to each video frame.
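By way of example and not limitation, the bag-of-visual-words encoding can be sketched as follows, assuming local descriptors have already been extracted from the video frames; the descriptor dimensionality, vocabulary size, and random training data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_visual_words(descriptors, codebook):
    """Encode local descriptors as a normalized visual-word histogram.

    descriptors: (n_descriptors, d) array of local features, e.g. from
    ROI extraction; codebook: fitted KMeans model whose cluster centers
    form the visual vocabulary. Only the encoding step is sketched here;
    a downstream classifier then maps histograms to actions.
    """
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Build a 128-word vocabulary from placeholder training descriptors.
codebook = KMeans(n_clusters=128, n_init=10).fit(np.random.rand(5000, 64))
feature = bag_of_visual_words(np.random.rand(200, 64), codebook)
```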
The first man-machine interaction information is voice information and action information.
Illustratively, referring to fig. 7, which is yet another specific flowchart of step S203 in fig. 2 of the psychotherapy method based on human-computer interaction provided by the embodiment of the application, identifying the first operation intention of the therapist according to the first human-computer interaction information of the therapist includes:
step S701, acquiring voice information and action information of a therapist;
step S702, determining the first operation intention of the therapist according to the voice information and the action information of the therapist.
It can be understood that recognizing the therapist's operation intention from voice information alone is prone to recognition errors. Determining the operation intention from both the therapist's voice information and action information improves the recognition accuracy during human-computer interaction, so that the therapist can better control the mixed reality device to deliver psychological treatment to the patient, thereby improving the treatment effect.
Exemplarily, how the first operation intention of the therapist is determined from the therapist's voice information and action information is described in detail below:
firstly, recognizing a spectrogram of voice information.
The voice information includes keywords, such as forward words, backward words, fast-playing words, and slow-playing words.
Specifically, the voice information is first framed into x(m, n), where n is the frame length and m the number of frames; an FFT is then applied to each frame to obtain X(m, n), and the periodogram Y(m, n) = X(m, n) · X(m, n)* is computed, where * denotes the complex conjugate; 10 · log10(Y(m, n)) is then taken, m is mapped to the time axis M and n to the frequency axis N; finally, (M, N, 10 · log10(Y(M, N))) is plotted as a two-dimensional image, i.e., the spectrogram.
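By way of example and not limitation, this computation can be written as follows for a mono signal array; the 400-sample frame length and 160-sample hop (25 ms and 10 ms at a 16 kHz sampling rate) and the Hamming window are assumed details that the text does not fix.

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160):
    """Log-power spectrogram as described above: frame the signal into
    x(m, n), FFT each frame to X(m, n), take the periodogram
    Y(m, n) = X(m, n) * conj(X(m, n)), then 10 * log10(Y)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])        # x(m, n)
    X = np.fft.rfft(frames, axis=1)                      # X(m, n)
    Y = (X * np.conj(X)).real                            # periodogram
    return 10.0 * np.log10(Y + 1e-10)  # rows: time axis M, cols: frequency N
```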
Secondly, the spectrogram is input into the input layer of the pre-trained convolutional neural network, the action information is input into the fully connected layer of the pre-trained convolutional neural network, and the first operation intention is output from the output layer of the pre-trained convolutional neural network.
The action information refers to video representing the therapist's actions and includes hand swipe-up, swipe-down, swipe-left, and swipe-right actions; the convolutional neural network comprises an input layer, two convolutional layers, two down-sampling layers, a fully connected layer, and an output layer.
It can be understood that the therapist may speak and gesture at the same time during human-computer interaction; when the operation intention corresponding to the voice is consistent with the operation intention corresponding to the action, the therapist's operation intention can be recognized more accurately.
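By way of example and not limitation, a network with the stated topology (input layer, two convolutional layers, two down-sampling layers, a fully connected layer that also receives the action information, and an output layer) can be sketched in PyTorch as follows; the channel counts, the 128×128 spectrogram size, and the 4-dimensional action vector are assumptions, since only the layer types are specified.

```python
import torch
import torch.nn as nn

class IntentNet(nn.Module):
    """Spectrogram enters at the input layer; the action information is
    concatenated into the fully connected layer; the output layer yields
    the operation intention (forward/backward/fast-play/slow-play)."""

    def __init__(self, n_intents=4, action_dim=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # down-sampling layer 1
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # down-sampling layer 2
        )
        self.fc = nn.Linear(16 * 32 * 32 + action_dim, 64)
        self.out = nn.Linear(64, n_intents)

    def forward(self, spec, action_vec):
        # spec: (batch, 1, 128, 128); action_vec: (batch, action_dim)
        h = self.conv(spec).flatten(1)
        h = torch.relu(self.fc(torch.cat([h, action_vec], dim=1)))
        return self.out(h)

# Illustrative forward pass with a one-hot action vector:
net = IntentNet()
logits = net(torch.randn(1, 1, 128, 128), torch.eye(4)[:1])
```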
And S204, generating a first operation instruction corresponding to the first operation intention, and sending the first operation instruction to the mixed reality device, wherein the first operation instruction is used for instructing the mixed reality device to execute the playing operation corresponding to the first operation instruction.
An operation instruction is an instruction or command that directs the operation of the mixed reality device. The operation instruction comprises an operation code and an operand; after receiving the operation instruction, the mixed reality device executes the corresponding playing operation according to the operation code and the operand.
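By way of example and not limitation, an operation instruction carrying an operation code and an operand could be modeled and serialized as follows; the field names, opcode vocabulary, and JSON wire format are illustrative assumptions, as no encoding is specified.

```python
import json
from dataclasses import dataclass

@dataclass
class OperationInstruction:
    """Opcode/operand pair sent from the user terminal to the mixed
    reality device; names and wire format are illustrative assumptions."""
    opcode: str     # e.g. "SEEK" or "SET_SPEED" (hypothetical codes)
    operand: float  # e.g. a seek offset in seconds, or a playback rate

def encode(instr: OperationInstruction) -> bytes:
    return json.dumps({"op": instr.opcode, "arg": instr.operand}).encode("utf-8")

# A fast-play intention might map to doubling the playback rate:
message = encode(OperationInstruction(opcode="SET_SPEED", operand=2.0))
```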
In the embodiment of the application, the user terminal detects the therapist's human-computer interaction information in real time and sends operation instructions to the mixed reality device. On the one hand, the patient receives exposure treatment through the mixed reality device, which provides an immersive experience; on the other hand, the therapist can control the mixed reality device in real time, ensuring the treatment effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 8 is a block diagram illustrating a psychotherapy device based on human-computer interaction according to an embodiment of the present application, which corresponds to the psychotherapy method based on human-computer interaction described in the foregoing embodiments; for convenience of illustration, only the portions relevant to the embodiment of the application are shown.
Referring to fig. 8, the apparatus includes:
a first obtaining module 81, configured to obtain mixed reality scene data of a patient from a server, and display a 2D picture corresponding to the mixed reality scene data to the therapist;
the mixed reality scene data is generated by the server according to the spatial information of the real environment and the voice information of the patient describing the trauma event and is simultaneously sent to the user terminal and the mixed reality equipment;
a second obtaining module 82, configured to obtain comprehensive characteristic information of the patient from a collecting device, and display the comprehensive characteristic information to the therapist;
the comprehensive characteristic information is obtained by the collecting device from the patient while the mixed reality device plays the 3D picture corresponding to the mixed reality scene data to the patient;
an identifying module 83, configured to identify a first operation intention of the therapist according to the first human-computer interaction information of the therapist;
a generating module 84, configured to generate a first operation instruction corresponding to the first operation intention, and send the first operation instruction to a mixed reality device, where the first operation instruction is used to instruct the mixed reality device to perform a playing operation corresponding to the first operation instruction.
In a possible implementation manner, the first obtaining module includes:
the acquiring unit is used for acquiring mixed reality scene data;
the generating unit is used for generating a corresponding 2D picture according to the mixed reality scene data;
the identification unit is used for identifying a timestamp corresponding to the 2D picture and forming a time line according to the timestamp;
the determining unit is used for determining a key timestamp in the timeline according to the change value of the comprehensive characteristic information;
and the display unit is used for displaying the 2D picture carrying the time line and the key time stamp in the time line to a therapist.
In one possible implementation, the apparatus further includes:
the second identification module is used for identifying a second operation intention of the therapist according to second human-computer interaction information of the therapist;
and the awakening module is used for generating an awakening instruction corresponding to the awakening operation intention and sending the awakening instruction to the server if the second operation intention is the awakening operation intention, wherein the awakening instruction is used for indicating the server to generate mixed reality scene data according to space information of a real environment and voice information of a patient describing a wound event and simultaneously sending the mixed reality scene data to the user terminal and the mixed reality equipment.
In a possible implementation manner, the first human-machine interaction information is voice information, and the first recognition module includes:
the first acquisition submodule is used for acquiring the voice information of the therapist;
a first determination submodule for determining a first operational intention of the therapist based on the voice information.
In a possible implementation manner, the first human-machine interaction information is motion information, and the first identification module further includes:
the second acquisition submodule is used for acquiring the action information of the therapist;
a second determination submodule for determining the first operational intention of the therapist based on the action information.
In one possible implementation manner, the first human-machine interaction information is voice information and motion information, and the first recognition module includes:
the third acquisition submodule is used for acquiring the voice information and the action information of the therapist;
and the third determining submodule is used for determining the first operation intention of the therapist according to the voice information and the action information of the therapist.
In one possible implementation, the third determining sub-module includes:
the recognition subunit is used for recognizing the spectrogram of the voice information;
and the output subunit is used for inputting the spectrogram into the input layer of a pre-trained convolutional neural network, inputting the action information into the fully connected layer of the pre-trained convolutional neural network, and outputting the first operation intention from the output layer of the pre-trained convolutional neural network.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 9 is a schematic structural diagram of a user terminal according to an embodiment of the present application. As shown in fig. 9, the user terminal 9 of this embodiment includes: at least one processor 90, a memory 91, and a computer program 92 stored in the memory 91 and executable on the at least one processor 90, where the processor 90 implements the steps of the above-mentioned psychotherapy method based on human-computer interaction when executing the computer program 92.
The user terminal 9 may be a desktop computer, a notebook, a palm computer, or other computing devices. The user terminal may include, but is not limited to, a processor 90, a memory 91. Those skilled in the art will appreciate that fig. 9 is only an example of the user terminal 9, and does not constitute a limitation to the user terminal 9, and may include more or less components than those shown, or combine some components, or different components, such as an input/output device, a network access device, and the like.
The processor 90 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may in some embodiments be an internal storage unit of the user terminal 9, such as a hard disk or a memory of the user terminal 9.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated as an example; in practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described again here.
The embodiments of the present application further provide a storage medium, specifically a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A psychotherapy device based on human-computer interaction, comprising:
a first acquisition module, configured to acquire mixed reality scene data of a patient from a server and display a 2D picture corresponding to the mixed reality scene data to a therapist;
wherein the mixed reality scene data is generated by the server according to spatial information of the real environment and voice information of the patient describing a trauma event, and is sent simultaneously to the user terminal and the mixed reality equipment;
a second acquisition module, configured to acquire comprehensive characteristic information of the patient from an acquisition device and display the comprehensive characteristic information to the therapist;
wherein the comprehensive characteristic information is acquired from the patient by the acquisition device while the mixed reality equipment plays a 3D picture corresponding to the mixed reality scene data to the patient;
an identification module, configured to identify a first operation intention of the therapist according to first human-computer interaction information of the therapist; and
a generating module, configured to generate a first operation instruction corresponding to the first operation intention and send the first operation instruction to the mixed reality equipment, wherein the first operation instruction instructs the mixed reality equipment to execute a playing operation corresponding to the first operation instruction.
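For illustration only (the application specifies no programming framework), the module arrangement recited in claim 1 can be sketched in Python roughly as follows; every class, method, and field name here is hypothetical:

```python
# Minimal sketch of the claimed module arrangement; all names are
# hypothetical, and the server/collector/display objects are stand-ins.
from dataclasses import dataclass


@dataclass
class OperationInstruction:
    action: str          # e.g. "play", "pause", "rewind"
    timestamp_ms: int    # position within the mixed reality scene


class PsychotherapyDevice:
    def __init__(self, server, collector, mr_device, display):
        self.server = server        # source of mixed reality scene data
        self.collector = collector  # acquisition device for patient features
        self.mr_device = mr_device  # patient-side mixed reality equipment
        self.display = display      # therapist-side 2D display

    def show_scene(self, patient_id):
        # First acquisition module: fetch scene data, render a 2D picture.
        scene = self.server.get_scene_data(patient_id)
        self.display.render_2d(scene)
        return scene

    def show_features(self, patient_id):
        # Second acquisition module: fetch comprehensive characteristic
        # information (e.g. heart rate, expression) and show it.
        features = self.collector.get_features(patient_id)
        self.display.render_features(features)
        return features

    def handle_interaction(self, interaction):
        # Identification module: map voice/motion info to an operation
        # intention; generating module: build and send the instruction.
        intention = self.identify_intention(interaction)
        self.mr_device.execute(OperationInstruction(intention, 0))

    def identify_intention(self, interaction):
        # Placeholder; claims 4-7 recite voice/motion-based recognition.
        return "play" if "play" in interaction.get("speech", "") else "pause"
```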
2. The human-computer interaction based psychotherapy device of claim 1, wherein the first acquisition module comprises:
an acquiring unit, configured to acquire the mixed reality scene data;
a generating unit, configured to generate a corresponding 2D picture from the mixed reality scene data;
an identifying unit, configured to identify timestamps corresponding to the 2D picture and form a timeline from the timestamps;
a determining unit, configured to determine key timestamps in the timeline according to change values of the comprehensive characteristic information; and
a displaying unit, configured to display, to the therapist, the 2D picture carrying the timeline and the key timestamps in the timeline.
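The timeline logic of claim 2 admits a short sketch: timestamps of the 2D frames form the timeline, and a timestamp is marked as key when the change value of the comprehensive characteristic information between adjacent frames exceeds a threshold. The function names and the threshold below are assumptions for illustration:

```python
# Hypothetical timeline construction: each frame pairs a timestamp with
# one comprehensive-characteristic value (here, a heart-rate-like number).
def build_timeline(frames):
    """frames: ordered list of (timestamp_ms, feature_value) pairs."""
    return [t for t, _ in frames]


def key_timestamps(frames, threshold=10.0):
    """Flag timestamps where the feature's change value exceeds threshold."""
    return [t_cur
            for (_, v_prev), (t_cur, v_cur) in zip(frames, frames[1:])
            if abs(v_cur - v_prev) > threshold]


frames = [(0, 72.0), (1000, 74.0), (2000, 96.0), (3000, 95.0)]
print(build_timeline(frames))   # [0, 1000, 2000, 3000]
print(key_timestamps(frames))   # [2000] -- the value jumped by 22
```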
3. The human-computer interaction based psychotherapy device of claim 1, further comprising:
a second identification module, configured to identify a second operation intention of the therapist according to second human-computer interaction information of the therapist; and
a wake-up module, configured to, if the second operation intention is a wake-up operation intention, generate a wake-up instruction corresponding to the wake-up operation intention and send the wake-up instruction to the server, wherein the wake-up instruction instructs the server to generate mixed reality scene data according to spatial information of the real environment and voice information of the patient describing the trauma event, and to send the mixed reality scene data simultaneously to the user terminal and the mixed reality equipment.
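A minimal sketch of this wake-up flow, under the assumption of a JSON message (the application specifies no wire format; all field names are invented):

```python
# Hypothetical wake-up instruction; the field names and JSON envelope
# are assumptions, not taken from the application.
import json


def make_wake_instruction(patient_id, spatial_info, speech_ref):
    return json.dumps({
        "type": "wake",
        "patient_id": patient_id,
        "spatial_info": spatial_info,  # scan of the real environment
        "speech_ref": speech_ref,      # patient's trauma-event description
        "targets": ["user_terminal", "mr_device"],
    })


def on_second_intention(intention, send_to_server, patient_id,
                        spatial_info, speech_ref):
    # Only the wake-up intention triggers scene generation on the server.
    if intention == "wake":
        send_to_server(
            make_wake_instruction(patient_id, spatial_info, speech_ref))


print(make_wake_instruction("p001", {"room": "mesh-0042"}, "audio-0042"))
```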
4. The human-computer interaction based psychotherapy device according to any one of claims 1 to 3, wherein the first human-computer interaction information is voice information, and the first identification module comprises:
a first acquisition submodule, configured to acquire the voice information of the therapist; and
a first determination submodule, configured to determine the first operation intention of the therapist according to the voice information.
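For this voice-only variant, intention determination can be pictured as keyword matching over a speech-recognition transcript; a toy sketch whose command vocabulary is invented:

```python
# Toy voice-to-intention lookup; the vocabulary below is invented.
INTENT_KEYWORDS = {
    "play": ("play", "start", "continue"),
    "pause": ("pause", "stop", "hold"),
    "rewind": ("rewind", "replay", "back"),
}


def determine_intention(transcript):
    words = transcript.lower().split()
    for intent, keys in INTENT_KEYWORDS.items():
        if any(k in words for k in keys):  # whole-word match
            return intent
    return None


print(determine_intention("Please pause the scene"))  # pause
```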
5. The human-computer interaction based psychotherapy device according to any one of claims 1 to 3, wherein the first human-computer interaction information is motion information, and the first identification module comprises:
a second acquisition submodule, configured to acquire the motion information of the therapist; and
a second determination submodule, configured to determine the first operation intention of the therapist according to the motion information.
6. The human-computer interaction based psychotherapy device according to any one of claims 1 to 3, wherein the first human-computer interaction information is voice information and motion information, and the first identification module comprises:
a third acquisition submodule, configured to acquire the voice information and the motion information of the therapist; and
a third determination submodule, configured to determine the first operation intention of the therapist according to the voice information and the motion information of the therapist.
7. The human-computer interaction based psychotherapy device according to any one of claims 1 to 3, wherein the third determination submodule comprises:
a recognition subunit, configured to recognize a spectrogram of the voice information; and
an output subunit, configured to input the spectrogram into an input layer of a pre-trained convolutional neural network, input the motion information into a fully connected layer of the pre-trained convolutional neural network, and output the first operation intention from an output layer of the pre-trained convolutional neural network.
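One plausible realization of this fusion network, sketched in PyTorch (an assumption; the application names no framework, and the layer sizes and number of intention classes are invented): the spectrogram enters the convolutional input layer, the flattened convolutional features are concatenated with the motion-information vector at the fully connected layer, and the output layer yields the operation intention.

```python
# Hypothetical multimodal intent network; sizes and class count invented.
import torch
import torch.nn as nn


class IntentNet(nn.Module):
    def __init__(self, n_intents=4, motion_dim=16):
        super().__init__()
        self.conv = nn.Sequential(            # input layer + conv stack
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully connected layer receives conv features + motion info.
        self.fc = nn.Linear(16 * 4 * 4 + motion_dim, 64)
        self.out = nn.Linear(64, n_intents)   # output layer: intention class

    def forward(self, spectrogram, motion):
        # spectrogram: (batch, 1, freq, time); motion: (batch, motion_dim)
        x = self.conv(spectrogram).flatten(1)
        x = torch.relu(self.fc(torch.cat([x, motion], dim=1)))
        return self.out(x)


net = IntentNet()
spec = torch.randn(2, 1, 64, 128)   # batch of 2 spectrograms
motion = torch.randn(2, 16)         # batch of 2 motion-feature vectors
print(net(spec, motion).shape)      # torch.Size([2, 4])
```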
8. A user terminal, comprising a processor, wherein the processor, when executing a computer program, implements the steps of:
acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to a therapist;
wherein the mixed reality scene data is generated by the server according to spatial information of the real environment and voice information of the patient describing a trauma event, and is sent simultaneously to the user terminal and mixed reality equipment;
acquiring comprehensive characteristic information of the patient from an acquisition device, and displaying the comprehensive characteristic information to the therapist;
wherein the comprehensive characteristic information is acquired from the patient by the acquisition device while the mixed reality equipment plays a 3D picture corresponding to the mixed reality scene data to the patient;
identifying a first operation intention of the therapist according to first human-computer interaction information of the therapist; and
generating a first operation instruction corresponding to the first operation intention, and sending the first operation instruction to the mixed reality equipment, wherein the first operation instruction instructs the mixed reality equipment to execute a playing operation corresponding to the first operation instruction.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of:
acquiring mixed reality scene data of a patient from a server, and displaying a 2D picture corresponding to the mixed reality scene data to a therapist;
wherein the mixed reality scene data is generated by the server according to spatial information of the real environment and voice information of the patient describing a trauma event, and is sent simultaneously to the user terminal and mixed reality equipment;
acquiring comprehensive characteristic information of the patient from an acquisition device, and displaying the comprehensive characteristic information to the therapist;
wherein the comprehensive characteristic information is acquired from the patient by the acquisition device while the mixed reality equipment plays a 3D picture corresponding to the mixed reality scene data to the patient;
identifying a first operation intention of the therapist according to first human-computer interaction information of the therapist; and
generating a first operation instruction corresponding to the first operation intention, and sending the first operation instruction to the mixed reality equipment, wherein the first operation instruction instructs the mixed reality equipment to execute a playing operation corresponding to the first operation instruction.
CN202010855162.9A 2020-08-24 2020-08-24 Psychological treatment device and user terminal based on human-computer interaction Active CN111986781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010855162.9A CN111986781B (en) 2020-08-24 2020-08-24 Psychological treatment device and user terminal based on human-computer interaction

Publications (2)

Publication Number Publication Date
CN111986781A CN111986781A (en) 2020-11-24
CN111986781B (en) 2021-08-06

Family

ID=73443937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010855162.9A Active CN111986781B (en) 2020-08-24 2020-08-24 Psychological treatment device and user terminal based on human-computer interaction

Country Status (1)

Country Link
CN (1) CN111986781B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365956A * 2020-12-13 2021-02-12 Longma Zhixin (Zhuhai Hengqin) Technology Co., Ltd. Psychological treatment method, psychological treatment device, psychological treatment server and psychological treatment storage medium based on virtual reality
CN112365957A * 2020-12-13 2021-02-12 Longma Zhixin (Zhuhai Hengqin) Technology Co., Ltd. Psychological treatment system based on virtual reality
CN113284582A * 2021-05-14 2021-08-20 Xuzhou Rehabilitation Hospital Headset type music treatment device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10517521B2 (en) * 2010-06-07 2019-12-31 Affectiva, Inc. Mental state mood analysis using heart rate collection based on video imagery
CN109192310A * 2018-07-25 2019-01-11 Tongji University A big data based method for designing schemes for unusual fluctuations in undergraduate psychological behavior
CN111145865A * 2019-12-26 2020-05-12 Hefei Institutes of Physical Science, Chinese Academy of Sciences Vision-based hand fine motion training guidance system and method
CN111415759A * 2020-03-03 2020-07-14 Beijing Zhongrui Funing Holding Group Co., Ltd. Human-computer interaction method and system of traditional Chinese medicine pre-diagnosis robot based on inquiry
CN111415724A * 2020-03-03 2020-07-14 Beijing Zhongrui Funing Holding Group Co., Ltd. Human-computer interaction method and system of traditional Chinese medicine pre-diagnosis robot based on palpation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10456084B1 (en) * 2018-08-02 2019-10-29 Yung Hsiang Information Management, Co. Ltd Intelligent hospital bed
CN109885277A * 2019-02-26 2019-06-14 Baidu Online Network Technology (Beijing) Co., Ltd. Human-computer interaction device, methods, systems and devices
CN110931111A * 2019-11-27 2020-03-27 Duke Kunshan University Autism auxiliary intervention system and method based on virtual reality and multi-mode information
CN111354440A * 2020-03-02 2020-06-30 Zhejiang Lianxin Technology Co., Ltd. Fire fighter psychological intervention method and device based on human-computer interaction and electronic equipment
CN111652155A * 2020-06-04 2020-09-11 Beihang University Human body movement intention identification method and system
CN111724914A * 2020-07-28 2020-09-29 Chongqing Police College Police psychological crisis intervention treatment vehicle and mobile psychological crisis intervention treatment system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases";Valeska 等;《Neural Regeneration Research》;20140430;第9卷(第8期);第888-896页 *
杨益平 等." 基于计算机视觉的手势识别人机交互技术".《电子技术与软件工程》.2018,(第12期), *

Also Published As

Publication number Publication date
CN111986781A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111986781B (en) Psychological treatment device and user terminal based on human-computer interaction
US8903176B2 (en) Systems and methods using observed emotional data
US11205408B2 (en) Method and system for musical communication
US11636859B2 (en) Transcription summary presentation
CN113508369A (en) Communication support system, communication support method, communication support program, and image control program
CN107393529A (en) Audio recognition method, device, terminal and computer-readable recording medium
CN112007255B (en) Psychological treatment method, device and system based on mixed reality and server
CN111414506B (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
US11615572B2 (en) Systems and methods for automated real-time generation of an interactive attuned discrete avatar
CN112365957A (en) Psychological treatment system based on virtual reality
CN107463684A (en) Voice replying method and device, computer installation and computer-readable recording medium
CN113556603B (en) Method and device for adjusting video playing effect and electronic equipment
CN112149599B (en) Expression tracking method and device, storage medium and electronic equipment
EP4053792A1 (en) Information processing device, information processing method, and artificial intelligence model manufacturing method
CN112365956A (en) Psychological treatment method, psychological treatment device, psychological treatment server and psychological treatment storage medium based on virtual reality
US11183189B2 (en) Information processing apparatus and information processing method for controlling display of a user interface to indicate a state of recognition
US20200125788A1 (en) Information processing device and information processing method
CN113763932B (en) Speech processing method, device, computer equipment and storage medium
CN114125149A (en) Video playing method, device, system, electronic equipment and storage medium
CN117591660B (en) Material generation method, equipment and medium based on digital person
TWI824348B (en) Training system, training management method and training apparatus
CN115641938A (en) Virtual reality method and system based on vestibular migraine rehabilitation training
CN116958328A (en) Method, device, equipment and storage medium for synthesizing mouth shape
CN117201706A (en) Digital person synthesis method, system, equipment and medium based on control strategy
CN117951261A (en) Insurance digital person dialogue generation method, apparatus, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 519031 office 1316, No. 1, lianao Road, Hengqin new area, Zhuhai, Guangdong

Patentee after: LONGMA ZHIXIN (ZHUHAI HENGQIN) TECHNOLOGY Co.,Ltd.

Address before: Room 417.418.419, building 20, creative Valley, 1889 Huandao East Road, Hengqin New District, Zhuhai City, Guangdong Province

Patentee before: LONGMA ZHIXIN (ZHUHAI HENGQIN) TECHNOLOGY Co.,Ltd.