CN109979569B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN109979569B
Authority
CN
China
Prior art keywords
user
cartoon
virtual
event
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910248046.8A
Other languages
Chinese (zh)
Other versions
CN109979569A (en)
Inventor
贾艳滨
苏国辉
单炎炎
钟舒明
沈拾亦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Comzea Medical Science & Technology Co ltd
Guangzhou Reyzar Intelligent Technology Co ltd
Jinan University
Original Assignee
Guangzhou Comzea Medical Science & Technology Co ltd
Guangzhou Reyzar Intelligent Technology Co ltd
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Comzea Medical Science & Technology Co ltd, Guangzhou Reyzar Intelligent Technology Co ltd, Jinan University
Priority to CN201910248046.8A
Publication of CN109979569A
Application granted
Publication of CN109979569B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application provide a data processing method and apparatus. The method includes the following steps: recording an event, described by the user, that causes a low mood; after the user finishes the description, switching the scene presented in the user's field of view, where the new scene includes a virtual character that re-enacts the recorded event; and after the virtual character finishes the re-enactment, outputting opinion-posting guidance information to guide the user to comment on the virtual character's account. In this way, the device can record the event described by the user, replay the recording through the virtual character, and intervene in the user's emotion by having the machine guide the user to comment on the replayed account.

Description

Data processing method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data processing method and apparatus.
Background
In the prior art, a person's emotions are often treated with the Eye Movement Desensitization and Reprocessing (EMDR) technique, which relieves negative emotions such as anxiety and restlessness; EMDR has been shown to restore the balance of the brain's neural networks for cognition and emotion processing.
Conventional EMDR typically takes place between an operator and a user: the operator intervenes in the user's emotions by manually operating a device capable of performing the EMDR technique. This generally requires the operator to carry out tedious, repetitive, and physically demanding work, so labor costs are high.
Disclosure of Invention
In view of this, embodiments of the present application provide a data processing method and apparatus to solve the technical problem that the conventional EMDR technique requires an operator to perform tedious, repetitive, and physically demanding work at high labor cost.
In a first aspect, an embodiment of the present application provides a data processing method. The method includes: recording an event, described by the user, that causes a low mood; after the user finishes the description, switching the scene presented in the user's field of view, where the new scene includes a virtual character that re-enacts the recorded event; and after the virtual character finishes the re-enactment, outputting opinion-posting guidance information to guide the user to comment on the virtual character's account.
With this method, the device can record the event described by the user, replay the recording through the virtual character, and intervene in the user's emotion by having the machine guide the user to comment on the replayed account.
In one possible design, before recording the user-described event that causes a low mood, the method further includes: outputting description-event guidance information to guide the user to describe the event.
The device can output the description-event guidance information before recording, guiding the user to describe the event that causes the low mood. Because the user may be unsure of when to start describing the event, an initial prompt tells the user when to begin, which improves the user experience.
In one possible design, before outputting the description-event guidance information, the method further includes: outputting emotion-scoring guidance information to guide the user to score the current emotion; and determining that the user's score for the current emotion falls within a preset range.
Before guiding the user to describe the event, the device can first guide the user to score the current emotion and check whether the score falls within the preset range. If it does, the user is in a low mood that warrants describing the event; in other words, psychological intervention can be performed on the user.
In one possible design, the method further includes: monitoring operation instructions input by the user and determining that the user has, through an operation device, executed an instruction to pick up a virtual telescope presented in the scene; switching the scene presented in the user's field of view to the scene inside the virtual telescope, which contains a cartoon figure moving within it; and guiding the user's eyes to follow the movement of the cartoon figure.
The data processing method provided in this embodiment can also watch the user's input for an instruction to pick up the virtual telescope. When such an instruction is received, the scene in the user's field of view is switched, the cartoon figure is presented, and the user's eyes are guided to follow its movement, so that the eye movement relieves the user's negative emotion.
In one possible design, before monitoring the operation instructions input by the user, the method further includes: outputting recall-event guidance information to guide the user to recall the event that causes the low mood.
Before monitoring the user's input, the device can guide the user to recall the event that causes the low mood; by guiding the user step by step whenever the flow is unclear, the device helps improve the user's mood.
In one possible design, after outputting the recall-event guidance information and before determining that the user has picked up the virtual telescope presented in the scene, the method further includes: receiving the user's score for the recalled event that causes the low mood.
To gauge how distressed the user is by the event, the user may first be guided to score it; the score is then used to assess the user's level of distress.
In one possible design, after guiding the user's eyes to follow the movement of the cartoon figure, the method further includes: receiving the user's score for the recalled event once the eye-movement exercise is complete.
The method can thus obtain the user's post-exercise score for the event that causes the low mood, making it possible to evaluate whether the eye movement has relieved it.
In one possible design, guiding the user's eyes to follow the movement of the cartoon figure includes: tracking the user's eye-movement trajectory and judging whether the eyes follow the cartoon figure; if not, outputting prompt information to remind the user.
During the exercise, a dedicated tool can track the user's eye-movement trajectory, ensuring that the user stays with the flow throughout and does not stop following the cartoon figure because of, for example, a lapse in attention.
In a second aspect, an embodiment of the present application provides a data processing apparatus. The apparatus includes: an event recording module for recording the event, described by the user, that causes a low mood; a scene switching module for switching the scene presented in the user's field of view after the description ends, where the new scene includes a virtual character that re-enacts the recorded event; and a guidance information output module for outputting opinion-posting guidance information after the virtual character finishes the re-enactment, so as to guide the user to comment on the virtual character's account.
With this apparatus, the device can record the event described by the user, replay the recording through the virtual character, and intervene in the user's emotion by having the machine guide the user to comment on the replayed account.
In one possible design, the apparatus further includes an event guidance output module for outputting the description-event guidance information to guide the user to describe the event that causes the low mood. Because the user may be unsure of when to start describing the event, a starting prompt tells the user when to begin, which improves the user experience.
In a third aspect, the present application provides an electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method of the first aspect or any of the alternative implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect or any of the alternative implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the first aspect or any possible implementation manner of the first aspect.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
For a clearer explanation of the embodiments of the present application or of the technical solutions in the prior art, the drawings needed to describe them are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a data processing method according to a first embodiment of the present application;
FIG. 2 is a partial schematic flow chart of a data processing method according to a first embodiment of the present application;
FIG. 3 is a partial flow chart of a data processing method according to a first embodiment of the present application;
fig. 4 is a detailed flowchart of step S240 shown in fig. 3;
fig. 5 is a block diagram of a data processing apparatus according to a second embodiment of the present application.
Detailed Description
First embodiment
Referring to fig. 1, fig. 1 shows a schematic flow chart of a data processing method according to a first embodiment of the present application, which specifically includes the following steps:
In step S110, the event described by the user that causes a low mood is recorded.
Alternatively, the event described by the user may be recorded by a Virtual Reality (VR) device, or by another device with a display screen, such as a computer or a television. The following description takes a VR device recording the user's event as an example.
The event causing the low mood may be something the user experienced, something the user witnessed happening to others, or something the user learned of through a terminal device from news, video, and the like; the specific scope of the event should not be construed as limiting the application. The event may be recorded by capturing the user's spoken description or by receiving text that the user types on a keyboard.
Step S120, after the user finishes the description, switching the scene presented in the user's field of view, where the new scene includes a virtual character that re-enacts the recorded event causing the low mood.
Whether the user has finished the description can be judged in several ways: for example, by detecting that a certain period has elapsed after the user stops speaking, or by having the user click a button or issue a control command once done. The VR device then switches the scene in the user's field of view; the switched scene may include at least one virtual character, which can restate, for example by speech, the recorded event that causes the user's low mood.
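As a sketch of the two completion checks just described, the following fragment (class and method names are hypothetical, not from the patent) treats the description as finished either when a silence timeout elapses after the user last spoke, or when the user explicitly signals completion:

```python
class DescriptionRecorder:
    """Illustrative sketch of the description-completion check (hypothetical names)."""

    def __init__(self, silence_timeout_s=5.0):
        self.silence_timeout_s = silence_timeout_s  # period of silence that ends the description
        self.last_speech_time = None
        self.done_signaled = False

    def on_speech(self, now):
        # Called whenever the microphone detects the user speaking.
        self.last_speech_time = now

    def on_done_signal(self):
        # Called when the user clicks a button or sends a control command.
        self.done_signaled = True

    def is_description_finished(self, now):
        if self.done_signaled:
            return True
        if self.last_speech_time is None:
            return False  # the user has not started describing yet
        return (now - self.last_speech_time) >= self.silence_timeout_s
```

Either signal alone would be enough to move on to the scene switch of step S120.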
Step S130, after the virtual character finishes the re-enactment, outputting opinion-posting guidance information to guide the user to comment on the virtual character's account.
The VR device may output the opinion-posting guidance information to guide the user to comment on the account the virtual character has just given. The guidance may be a voice prompt, e.g., outputting the speech: "Please comfort this gentleman/lady with words of guidance"; it may also be a text prompt with the same content displayed in the scene presented in the user's field of view.
The switched scene may include a single virtual character that restates the user's event; it may also include two virtual characters, e.g., a virtual teenager who restates the event and a virtual adult whose role the user takes on in order to counsel the virtual teenager.
In this way, the device can record the event described by the user, replay the recording through the virtual character, and intervene in the user's emotion by having the machine guide the user to comment on the replayed account.
Referring to fig. 2, in an embodiment, before step S110, the following steps may be further included:
and step S101, outputting emotion scoring guidance information to guide the user to score the current emotion.
The emotion scoring guidance information can also be given by the VR device through voice, such as VR device voice prompt: please rate today's mood: excellent, good, general, poor, and extremely poor; the VR device may also emerge from the five options in a scene presented in the user's field of view, awaiting the user's selection.
Step S102, determining that the user's score for the current emotion falls within a preset range.
The user may state the score aloud, type it as text, or select the current mood by clicking an option presented in the field of view. The preset range is the range that reflects a low mood; optionally, if the user selects fair, poor, or very poor, the score is determined to fall within the preset range.
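A minimal sketch of the preset-range check, assuming the five translated option labels used above (the labels and function name are illustrative, not fixed by the patent):

```python
# Options whose selection reflects a low mood, per the preset range above.
LOW_MOOD_OPTIONS = {"fair", "poor", "very poor"}

def score_in_preset_range(selected_option):
    """Return True when the user's mood selection falls within the preset range."""
    return selected_option.strip().lower() in LOW_MOOD_OPTIONS
```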
Before guiding the user to describe the event that causes the low mood, the device can thus guide the user to score the current emotion and check whether the score falls within the preset range. If it does, the user is in a low mood that warrants describing the event; in other words, psychological intervention can be performed on the user.
In step S103, description-event guidance information is output to guide the user to describe the event causing the low mood.
The description-event guidance information may be given by the VR device by voice, e.g.: "Now, if you can, please recall and talk about the thing that troubles you and gets you down, and try to finish within the next five minutes"; the VR device may also present the guidance as text, displaying the same message in the scene of the user's field of view.
Because the user may be unsure of when to start describing the event, this starting prompt tells the user when to begin, which improves the user experience.
Referring to fig. 3, in an embodiment of the present application, the method may further include the following steps:
and step S210, outputting recall event guide information to guide the user to recall events causing low emotions.
Recall that the event guidance information can be voice information, such as VR device can voice guide the user to recall and select an event that causes a mood dip, depression, anxiety, or can text the user to recall an event that causes a mood dip.
After step S210, the following steps may be further included: a user score for a recalled event causing a low mood is received.
The VR device may also guide the user to recall the pain caused by the event causing the mood depression to the user, and guide the user to give subjective score for the feeling of pain, for example, the VR device may output the following speech: if level 0 represents no pain and level 10 represents extreme pain, please select a number from 0-10 to indicate the degree of pain that this matter brings to your. The VR device may then receive a distress score for the user for the recalled event that caused the mood dip.
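The 0-to-10 distress rating can be validated with a small helper such as the one below; the function name is a hypothetical illustration, while the 0-10 scale matches the voice prompt above:

```python
def parse_distress_score(raw):
    """Parse the user's 0-10 distress rating; return None when the input is invalid."""
    try:
        value = int(str(raw).strip())
    except ValueError:
        return None
    # level 0 means no distress, level 10 extreme distress
    return value if 0 <= value <= 10 else None
```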
Before step S210, the method further includes: the VR device guides the user by voice through abdominal breathing and muscle relaxation, accompanied by soothing background music.
Step S220, monitoring the operation instructions input by the user, and determining that the user has, through an operation device, executed an instruction to pick up the virtual telescope presented in the scene.
After the user completes the scoring, the VR device may present a virtual telescope in the scene shown in the user's field of view and give a prompt: "Please pick up the telescope to look at the small animals in the distance."
After the prompt is given, the device monitors the operation instructions input by the user and judges whether the user has executed the instruction to pick up the virtual telescope.
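The monitoring step can be sketched as a scan over the stream of user operation instructions; the instruction identifier "pick_up_telescope" is a hypothetical name for the pick-up operation:

```python
def wait_for_pickup(instructions):
    """Return True as soon as the telescope pick-up instruction appears in the stream."""
    for instr in instructions:
        if instr == "pick_up_telescope":
            return True  # trigger the scene switch of step S230
    return False
```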
Step S230, switching the scene presented in the user's field of view to the scene inside the virtual telescope, where the scene contains a cartoon figure that moves within it.
After detecting that the user has executed the instruction to pick up the virtual telescope, the device switches the scene in the user's field of view to the scene inside the virtual telescope, e.g., one in which a cartoon figure moves along regular paths.
The scenes and the cartoon figures in them may be presented in the following order:
The first scene is a meadow, and the cartoon figure is a cartoon tortoise or hedgehog. To relatively soothing background music, the tortoise or hedgehog crawls across the meadow; its path may be horizontal, vertical, from upper left to lower right, from upper right to lower left, or circular. The user's eyes follow the figure's movement.
After this scene ends, the VR device can guide the user to score the current intensity of negative emotion; once the user finishes scoring, the device jumps to the next scene.
The second scene is a forest, and the cartoon figure is a cartoon fawn or monkey. To slightly brisker background music, the fawn or monkey runs through the forest along the same set of paths: horizontal, vertical, upper left to lower right, upper right to lower left, or circular. The user's eyes follow the figure's movement.
After this scene ends, the VR device can again guide the user to score the current intensity of negative emotion and then jump to the next scene.
The third scene is the sky, and the cartoon figure is a cartoon bird. To lively background music, the bird flies across the sky along the same paths: horizontal, vertical, upper left to lower right, upper right to lower left, or circular. The user's eyes follow the figure's movement.
After this scene ends, the VR device may guide the user to score the current intensity of negative emotion.
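The three scenes above can be written down as a small data table. The scene names, figures, and music descriptions come from the text; the structure itself and the score-collection callback are an illustrative sketch:

```python
# Movement paths shared by every scene, as listed above.
MOVEMENT_PATHS = ["horizontal", "vertical", "upper-left to lower-right",
                  "upper-right to lower-left", "circular"]

# The telescope scenes in presentation order.
SCENES = [
    {"scene": "meadow", "figures": ["tortoise", "hedgehog"], "music": "soothing"},
    {"scene": "forest", "figures": ["fawn", "monkey"], "music": "slightly brisk"},
    {"scene": "sky", "figures": ["bird"], "music": "lively"},
]

def run_scene_sequence(get_negative_emotion_score):
    """Play each scene in turn and collect a negative-emotion score after each one."""
    scores = []
    for scene in SCENES:
        # ...play `scene` with its cartoon figure moving along MOVEMENT_PATHS...
        scores.append(get_negative_emotion_score(scene["scene"]))
    return scores
```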
Step S240, guiding the user's eyes to follow the movement of the cartoon figure.
Alternatively, referring to fig. 4, step S240 may include the following steps:
step S241, tracking the eye movement trajectory of the user, determining whether the eye of the user moves along with the movement of the cartoon image, and if not, executing step S242.
And step S242, outputting prompt information to remind the user.
In the process of moving the eyeballs of the user, a special tool can be used for tracking the moving track of the eyeballs of the user, so that the user is ensured to follow the flow all the time, and the condition that the eyeballs of the user do not move along with the movement of the cartoon images due to reasons such as vague nerves is avoided.
If the VR device detects that the user's eye moves for 5 seconds following the cartoon image, the bonus image may be displayed in a scene presented in the user's field of view. Thereby encouraging the user's eye to follow the cartoon image.
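Steps S241-S242 and the 5-second reward can be sketched as follows; the tolerance value and function names are illustrative assumptions, since the patent does not specify how closeness of gaze to the figure is measured:

```python
def gaze_follows(gaze_xy, target_xy, tolerance=0.1):
    """True when the tracked gaze point lies within `tolerance` of the cartoon figure."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance

def update_follow_state(followed_s, dt, following, reward_after_s=5.0):
    """Advance the continuous-follow timer; return (new_time, show_reward, remind_user).

    Losing the figure resets the timer and triggers the reminder of step S242;
    5 continuous seconds of following triggers the reward image.
    """
    if not following:
        return 0.0, False, True
    followed_s += dt
    return followed_s, followed_s >= reward_after_s, False
```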
After step S240, the method may further include: receiving the user's score for the recalled event once the user completes the eye-movement exercise.
The method can thus obtain the user's post-exercise score for the event that causes the low mood, making it possible to evaluate whether the eye movement has relieved it.
In the data processing method provided in this embodiment, the VR device may further carry out the following emotion-relief procedure:
The user scores the negative emotion caused by a specific event or scene under the VR device's guidance. After the scoring, the scene presented in the user's field of view is switched to a withered tree, and a voice guides the user to close the eyes for relaxation training. As the user gradually relaxes, the tree slowly grows branches, leaves, and flowers from its withered state and attracts birds to perch; the actual degree of growth can be determined by the difference between the user's scores before and after the eye-movement training shown in fig. 3. The user is then guided to open the eyes and view the now-lush tree presented in the field of view.
Second embodiment
Referring to fig. 5, fig. 5 shows a data processing apparatus according to a second embodiment of the present application, where the apparatus 500 includes:
an event recording module 510 for recording events that cause a low mood described by the user.
And a scene switching module 520, configured to switch a scene presented in the field of view of the user after the user description is finished, where the scene includes the event causing the low emotion recorded by the virtual character reproduction.
The guiding information output module 530 is configured to output opinion posting guiding information after the virtual character completes reproduction, so as to guide a user to post an opinion regarding the description of the virtual character.
The device further comprises: and the event guide output module is used for outputting the description event guide information so as to guide the user to describe the event causing the low emotion.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method and is not repeated here.
The present application further provides an electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the method of the first embodiment.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first embodiment.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the first embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method and is not repeated here.
The system implements intelligent virtual-reality EMDR, so that a user can experience emotional intervention scenarios at home without going out. It makes full use of the psychological principles of EMDR and eye-tracking technology, simulating the entire emotional intervention process in detail by means of the VR device, interrupting the flashbacks of unpleasant memories in the user's mind and producing a relaxed, calm mood. The user's comforting words to the virtual character are recorded through the light-controlled mechanical operation and digital sound-channel simulation of the VR device, played back repeatedly in three-dimensional surround stereo, and combined with relaxing background music, sound, and light for desensitization and relaxation; eye-tracking technology ensures the accuracy of the eye movements during eye movement training. The system consists of eye movement training software and hardware, the hardware including the VR device, professional earphones, a computer, and input/output devices.
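The eye-tracking accuracy check described above can be sketched as a gaze-versus-target comparison. This is a hypothetical sketch, not the patent's algorithm; the normalized coordinate system, the distance threshold `max_error`, and the `min_ratio` of on-target samples are all assumptions.

```python
# Hypothetical sketch: verify that the user's gaze stays close to the
# moving cartoon target during eye movement training, so the system can
# output a reminder when the eyes stop following. Thresholds are assumptions.
import math

def gaze_follows_target(gaze_points, target_points,
                        max_error: float = 0.1, min_ratio: float = 0.8) -> bool:
    """True if the gaze tracked the target closely for most samples.

    gaze_points / target_points: parallel lists of (x, y) positions in
    normalized screen coordinates, sampled at the same timestamps.
    """
    if not gaze_points or len(gaze_points) != len(target_points):
        return False
    on_target = sum(
        1 for (gx, gy), (tx, ty) in zip(gaze_points, target_points)
        if math.hypot(gx - tx, gy - ty) <= max_error
    )
    return on_target / len(gaze_points) >= min_ratio
```

A training loop could call such a check per animation segment (tortoise, fawn, bird) and show a prompt whenever it returns False.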
Embodiments of the present application provide a data processing method and apparatus. The method includes: recording an event causing a low mood described by a user; after the description is finished, switching the scene presented in the user's field of view, the scene including a virtual character re-enacting the recorded event causing the low mood; and after the virtual character finishes the re-enactment, outputting opinion-expression guidance information to guide the user to express an opinion on the virtual character's account. In this way, the event described by the user can be recorded and re-enacted by the virtual character, and the user's emotion can be intervened upon by machine-guiding the user to comment on the virtual character's re-enactment.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A method of data processing, the method comprising:
recording, by a VR device, an event causing a low mood described by a user;
after the user's description is finished, switching, by the VR device, a scene presented in the user's field of view, the scene including a virtual character re-enacting the recorded event causing the low mood;
after the virtual character finishes the re-enactment, outputting, by the VR device, opinion-expression guidance information to guide the user to express an opinion on the virtual character's re-enacted account, wherein the re-enacting virtual character is a virtual teenager, and the user, in the identity of a virtual adult, guides and comforts the virtual teenager regarding the description;
receiving the user's score for a recalled event causing the low mood;
monitoring, by the VR device, an operation instruction input by the user, and determining that the user has executed, through an operating device, an operation instruction for picking up a virtual telescope presented in the scene; switching the scene presented in the user's field of view to a scene in the virtual telescope, the scene in the virtual telescope including a cartoon image that moves within it; and guiding the user's eyes to move with the movement of the cartoon image, wherein the cartoon image includes a cartoon tortoise or a cartoon hedgehog, a cartoon fawn or a cartoon monkey, and a cartoon bird, and guiding the user's eyes to move with the movement of the cartoon image comprises: guiding the user's eyes to move, in sequence, with the crawling of the cartoon tortoise or the cartoon hedgehog, with the running of the cartoon fawn or the cartoon monkey, and with the flight of the cartoon bird;
if the VR device detects that the user's eyes have followed the cartoon image for 5 seconds, presenting a reward image in the scene in the user's field of view;
after the user finishes moving the eyes with the movement of the cartoon image, receiving the user's score for the recalled event causing the low mood; and
obtaining a difference between the user's scores before and after the eyes moved with the cartoon image, and presenting, in the scene in the user's field of view, a tree recovering from a withered state, wherein the degree of growth is determined by the score difference before and after the eye movement training.
2. The method of claim 1, wherein before the recording of the event causing the low mood described by the user, the method further comprises:
outputting event description guidance information to guide the user to describe the event causing the low mood.
3. The method of claim 2, wherein before the outputting of the event description guidance information, the method further comprises:
outputting emotion scoring guidance information to guide the user to score their current emotion; and
determining that the user's score for the current emotion falls within a preset range.
4. The method of claim 1, wherein before the monitoring of the operation instruction input by the user, the method further comprises:
outputting recall-event guidance information to guide the user to recall the event causing the low mood.
5. The method of claim 1, wherein guiding the user's eyes to move with the movement of the cartoon image comprises:
tracking the movement track of the user's eyes and judging whether the eyes move with the movement of the cartoon image; and
if not, outputting prompt information to remind the user.
6. A data processing apparatus, characterized in that the apparatus comprises:
an event recording module, configured to record, by a VR device, an event causing a low mood described by a user;
a scene switching module, configured to switch, by the VR device, a scene presented in the user's field of view after the user's description is finished, the scene including a virtual character re-enacting the recorded event causing the low mood; and
a guidance information output module, configured to output, by the VR device, opinion-expression guidance information after the virtual character finishes the re-enactment, so as to guide the user to express an opinion on the virtual character's account, wherein the re-enacting virtual character is a virtual teenager, and the user, in the identity of a virtual adult, guides and comforts the virtual teenager regarding the description;
wherein the data processing apparatus is further configured to: receive the user's score for a recalled event causing the low mood;
monitor, by the VR device, an operation instruction input by the user, and determine that the user has executed, through an operating device, an operation instruction for picking up a virtual telescope presented in the scene; switch the scene presented in the user's field of view to a scene in the virtual telescope, the scene in the virtual telescope including a cartoon image that moves within it; and guide the user's eyes to move with the movement of the cartoon image, wherein the cartoon image includes a cartoon tortoise or a cartoon hedgehog, a cartoon fawn or a cartoon monkey, and a cartoon bird, and guiding the user's eyes to move with the movement of the cartoon image comprises: guiding the user's eyes to move, in sequence, with the crawling of the cartoon tortoise or the cartoon hedgehog, with the running of the cartoon fawn or the cartoon monkey, and with the flight of the cartoon bird;
if the VR device detects that the user's eyes have followed the cartoon image for 5 seconds, present a reward image in the scene in the user's field of view;
after the user finishes moving the eyes with the movement of the cartoon image, receive the user's score for the recalled event causing the low mood; and
obtain a difference between the user's scores before and after the eyes moved with the cartoon image, and present, in the user's field of view, a tree recovering from a withered state, wherein the degree of growth is positively correlated with the score difference.
7. The apparatus of claim 6, further comprising:
an event guidance output module, configured to output event description guidance information to guide the user to describe the event causing the low mood.
CN201910248046.8A 2019-03-29 2019-03-29 Data processing method and device Active CN109979569B (en)

Publications (2)

Publication Number Publication Date
CN109979569A CN109979569A (en) 2019-07-05
CN109979569B true CN109979569B (en) 2020-07-28

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951062A (en) * 2014-03-24 2015-09-30 联想(新加坡)私人有限公司 Device and method of directing voice input based on eye tracking
CN106373172A (en) * 2016-08-31 2017-02-01 南京意斯伽生态科技有限公司 Psychotherapy simulation system based on virtual reality technology
CN107341333A (en) * 2017-04-05 2017-11-10 天使智心(北京)科技有限公司 A kind of VR apparatus and method for aiding in psychological consultation
CN108378860A (en) * 2018-03-07 2018-08-10 华南理工大学 Psychological pressure monitor system and method based on wearable device and android terminal
CN109498036A (en) * 2018-11-14 2019-03-22 苏州中科先进技术研究院有限公司 A kind of eye movement desensitization reprocessing interfering system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120151319A1 (en) * 2010-12-14 2012-06-14 Clayton Stress Institute, Inc. Systems and methods for self directed stress assistance
KR102355455B1 (en) * 2016-06-20 2022-01-24 매직 립, 인코포레이티드 Augmented Reality Display System for Evaluation and Modification of Neurological Conditions Including Visual Processing and Perceptual States
CH712799A1 (en) * 2016-08-10 2018-02-15 Derungs Louis Virtual reality method and system implementing such method.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant