CN111933277A - Method, device, equipment and storage medium for detecting 3D vertigo - Google Patents
- Publication number
- CN111933277A CN111933277A CN202010754321.6A CN202010754321A CN111933277A CN 111933277 A CN111933277 A CN 111933277A CN 202010754321 A CN202010754321 A CN 202010754321A CN 111933277 A CN111933277 A CN 111933277A
- Authority
- CN
- China
- Prior art keywords
- vertigo
- user
- detected
- equipment
- posture information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
The embodiments of the invention disclose a method, a device, equipment and a storage medium for detecting 3D vertigo. The method for detecting 3D vertigo comprises: acquiring posture information and eyeball movement data of a user to be detected; and determining a vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data, based on a pre-trained vertigo detection network model. The embodiments of the invention acquire the posture information and eyeball movement data of a user wearing VR equipment, feed both as input to the pre-trained vertigo detection network model, and take the model output as the vertigo state detection result of the user. Because the posture information and eye movement data can be collected in real time by sensors already present in the VR equipment, and the information is therefore convenient to acquire, the 3D vertigo a user experiences while using the VR equipment can be detected in real time, improving the user's experience with the equipment.
Description
Technical Field
The embodiments of the invention relate to the technical field of data processing, and in particular to a method, a device, equipment and a storage medium for detecting 3D vertigo.
Background
Virtual Reality (VR) technology has been widely adopted over the last decade, especially in the last few years. However, 3D vertigo remains a major obstacle to the further spread of VR technology.
For detecting 3D vertigo, the Simulator Sickness Questionnaire (SSQ) can be used, but it is a subjective measure and cannot support real-time vertigo detection in scenarios such as VR games. Alternatively, data such as electroencephalogram, heart rate, blood pressure and galvanic skin response can be used to detect 3D vertigo in real time; however, the sensors that collect such data are difficult to fit into VR equipment, are redundant to its normal operation, and increase its cost.
Disclosure of Invention
The embodiments of the invention provide a method, a device, equipment and a storage medium for detecting 3D vertigo, so as to enable real-time detection of 3D vertigo.
In a first aspect, an embodiment of the present invention provides a method for detecting 3D vertigo, including:
acquiring posture information and eyeball movement data of a user to be detected;
and determining the vertigo state detection result of the user to be detected according to the posture information and the eyeball motion data based on a pre-trained vertigo detection network model.
Optionally, acquiring the posture information of the user to be detected includes:
acquiring real posture information of a user to be detected through a posture detection device in the VR equipment, and/or,
and acquiring virtual attitude information of an avatar of a user to be detected in a virtual environment through a virtual sensor in the VR equipment.
Optionally, the obtaining of the eye movement data of the user to be detected includes:
and acquiring eyeball movement data of the user to be detected through an eyeball tracking sensor in the VR equipment.
Optionally, the method further includes:
adjusting a vertigo stimulus source in a virtual environment according to the vertigo state detection result; wherein the vertigo stimulus source comprises at least one of: the brightness of the virtual environment and the moving speed of the virtual image of the user to be detected in the virtual environment.
Optionally, the training process of the vertigo detection network model includes:
determining a training sample set based on the acquired sample posture information and sample eye movement data;
and training, according to the training sample set and a predetermined label vertigo state matched with each training sample, to obtain the vertigo detection network model.
Optionally, training according to the training sample set and the predetermined label vertigo state matched with each training sample to obtain the vertigo detection network model includes:
training, based on a long short-term memory (LSTM) network, according to the training sample set and the predetermined label vertigo state matched with each training sample, to obtain the vertigo detection network model.
In a second aspect, an embodiment of the present invention further provides a device for detecting 3D vertigo, including:
the data acquisition module is used for acquiring the posture information and the eyeball movement data of the user to be detected;
and the vertigo detection module is used for determining the vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data based on a pre-trained vertigo detection network model.
Optionally, the data obtaining module includes a posture information obtaining unit, and is specifically configured to:
acquiring real posture information of a user to be detected through a posture detection device in the VR equipment, and/or,
and acquiring virtual attitude information of an avatar of a user to be detected in a virtual environment through a virtual sensor in the VR equipment.
Optionally, the data acquiring module includes an eye movement data acquiring unit, and is specifically configured to:
and acquiring eyeball movement data of the user to be detected through an eyeball tracking sensor in the VR equipment.
Optionally, the apparatus further comprises a stimulus source adjustment module, configured to:
adjusting a vertigo stimulus source in a virtual environment according to the vertigo state detection result; wherein the vertigo stimulus source comprises at least one of: the brightness of the virtual environment and the moving speed of the virtual image of the user to be detected in the virtual environment.
Optionally, the apparatus further includes a network model training module, configured to train the vertigo detecting network model, where the network model training module includes:
the sample set determining unit is used for determining a training sample set based on the acquired sample posture information and the sample eye movement data;
and the training unit is used for training, according to the training sample set and a predetermined label vertigo state matched with each training sample, to obtain the vertigo detection network model.
Optionally, the training unit is specifically configured to:
training, based on a long short-term memory (LSTM) network, according to the training sample set and the predetermined label vertigo state matched with each training sample, to obtain the vertigo detection network model.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method for detecting 3D vertigo as in any embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for detecting 3D vertigo according to any embodiment of the present invention.
The embodiments of the invention acquire the posture information and eyeball movement data of a user to be detected who is wearing VR equipment, feed both as input to a pre-trained vertigo detection network model, and take the model output as the vertigo state detection result of the user. Because the posture information and eye movement data can be collected in real time by sensors already present in the VR equipment, and the information is therefore convenient to acquire, the 3D vertigo a user experiences while using the VR equipment can be detected in real time, improving the user's experience with the equipment.
Drawings
Fig. 1 is a flowchart of a method for detecting 3D vertigo in a first embodiment of the invention;
FIG. 2 is a flowchart of a method for detecting 3D vertigo in a second embodiment of the invention;
fig. 3 is a schematic structural diagram of a 3D vertigo detecting device according to a third embodiment of the invention;
fig. 4 is a schematic structural diagram of an apparatus in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for detecting 3D vertigo in the first embodiment of the present invention. This embodiment is applicable to detecting 3D vertigo in real time while a user uses VR equipment. The method may be performed by a 3D vertigo detection apparatus, which may be implemented in software and/or hardware and configured in a device with communication and computing capabilities, such as a back-end server. As shown in fig. 1, the method specifically includes:
Step 101, acquiring posture information and eyeball movement data of a user to be detected.
The user to be detected is a user who is currently using VR equipment, for example to watch 3D images or play 3D games, during which 3D vertigo may arise. 3D vertigo is mostly triggered by watching a 3D movie or playing a 3D game for too long, with considerable individual variation in how long each person can tolerate. It is rooted in human physiology: people perceive the external environment through hearing, vision and touch, and through long-term evolution the various human sensory organs have become highly coordinated. For example, an organ in the inner ear called the vestibular system senses the balance of the body, such as its direction of motion and acceleration, and a specialized optic nerve pathway from the eye senses movement. These motion-sensing organs are very sensitive: they relay perceived motion to the nerve center, which directs the body to respond appropriately. When the sensed amplitude or frequency of motion exceeds a safe level, the nerve center induces discomfort such as dizziness and nausea, which objectively discourages the motion from continuing. This is in fact a self-protective mechanism, since large-amplitude, high-frequency motion can injure the body. Therefore, 3D vertigo detection is needed for users of VR equipment, so that adverse reactions of the user to be detected can be responded to in time and the user's 3D experience improved.
The posture information refers to the actions the user to be detected performs while using the VR equipment, including head movements and/or limb movements. Illustratively, it may be the user's head turning while viewing a 3D image, or the user's body movements while playing a 3D game together with the body movements of the avatar in the game. The motion state and motion amplitude of the user to be detected can be determined from the posture information. The eyeball movement data refers to measurements of eye movement obtained by extracting eyeball feature data, from which the gaze direction or fixation point of the user to be detected, and the amplitude of eyeball movement, can be determined.
Illustratively, when the user is experiencing a 3D game, the user's real-time motion posture and the eyeball movement data at the same moment are acquired, where the real-time motion posture covers both the user's real limb posture and the motion posture of the user within the virtual game.
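As an illustrative sketch (the class name, field choices and dimensions are assumptions for illustration, not specified by the patent), the per-timestep model input can be assembled by concatenating real pose, virtual pose and eye-movement features:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VertigoSample:
    # Real head/limb pose from the VR headset and trackers (6DoF: x, y, z, roll, pitch, yaw)
    real_pose: List[float]
    # Pose of the user's avatar in the virtual environment
    virtual_pose: List[float]
    # Eye-tracking features, e.g. gaze direction (2 angles) plus movement amplitude
    eye_movement: List[float]

    def to_feature_vector(self) -> List[float]:
        # Concatenate all modalities into one flat vector for the network input
        return self.real_pose + self.virtual_pose + self.eye_movement

s = VertigoSample(real_pose=[0.0] * 6,
                  virtual_pose=[0.1] * 6,
                  eye_movement=[0.02, -0.01, 0.5])
```

A vector assembled this way would be fed to the vertigo detection network model at each timestep.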
And 102, determining a vertigo state detection result of the user to be detected according to the posture information and the eyeball motion data based on a pre-trained vertigo detection network model.
The acquired posture information and eyeball movement data are used as the input of the pre-trained vertigo detection network model, and the model output is the vertigo state detection result of the user to be detected. For example, the model output may be binary: an output of 0 indicates the user to be detected is not in a vertigo state, and 1 indicates the user is. Furthermore, the degree of vertigo of the user to be detected can be determined from the model output, for example as a value from 0 to 10, where a larger value indicates a greater degree of vertigo.
Specifically, the posture information and eyeball movement data acquired in real time while the user plays a 3D game are input into the vertigo detection network model to obtain a real-time vertigo state detection result for the user in the game, which helps assess the user's game experience.
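A minimal sketch of the two output schemes described above — a binary vertigo state and a 0–10 vertigo degree. The 0.5 threshold and the assumption that the raw model score lies in [0, 1] are illustrative, not stated by the patent:

```python
def vertigo_state(score: float) -> int:
    """Binary vertigo state: 1 = in a vertigo state, 0 = not."""
    return 1 if score >= 0.5 else 0

def vertigo_degree(score: float) -> int:
    """Map a raw model score in [0, 1] to a 0-10 vertigo degree."""
    score = min(max(score, 0.0), 1.0)  # clamp to the valid range
    return round(score * 10)
```

With this mapping, a score of 0.73 would report degree 7, and anything at or above 0.5 reports the binary vertigo state.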
In an optional embodiment, the method further comprises:
adjusting a vertigo stimulus source in the virtual environment according to the vertigo state detection result; wherein the vertigo stimulus source comprises at least one of: the brightness of the virtual environment and the moving speed of the avatar of the user to be detected in the virtual environment.
When the user is in a vertigo state, the vertigo stimulus sources in the virtual environment are adjusted. For example, if the vertigo state detection result shows that a user watching 3D video is in a vertigo state at any moment, the stimulus is reduced by lowering the brightness of the 3D video. If a user in a 3D game experience is detected to be in a vertigo state at any moment, the stimulus is reduced by lowering the moving speed of the user's game avatar.
Further, when the vertigo state detection result includes a vertigo degree, the vertigo stimulus sources in the virtual environment are adjusted in stages according to that degree. Illustratively, if a user viewing 3D images is detected to be in a vertigo state at any moment and the vertigo degree keeps deepening, the brightness of the 3D image is reduced in stages until the image is turned off or the user is reminded to rest; for example, a mapping between vertigo degree and brightness is established in advance and the brightness adjusted according to it. Likewise, if a user in a 3D game is detected to be in a vertigo state and the vertigo degree keeps deepening, the moving speed of the user's game avatar is continuously reduced until the game is stopped or the user is reminded to rest.
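The staged adjustment above can be sketched as a pre-built mapping from vertigo degree to stimulus parameters. The linear scale factor and the cutoff at degree 10 are assumptions chosen for illustration:

```python
def adjust_stimuli(degree: int, base_brightness: float, base_speed: float):
    """Scale brightness and avatar movement speed down as the vertigo degree rises.

    degree: 0 (no vertigo) .. 10 (severe); at 10 the content is stopped.
    Returns (brightness, speed, stop), where stop signals ending the session
    (turning off the image / stopping the game, or reminding the user to rest).
    """
    if degree >= 10:
        return 0.0, 0.0, True        # turn off the image / stop the game
    factor = 1.0 - degree / 10.0     # linear mapping: degree -> scale factor
    return base_brightness * factor, base_speed * factor, False
```

Calling this each time a new vertigo degree arrives yields the continuous, staged reduction the embodiment describes.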
By adjusting the vertigo stimulus sources in the virtual environment according to the vertigo state detection result, the user's experience with the VR equipment can be effectively improved, and harm to the user's health from 3D vertigo avoided.
In an alternative embodiment, the training process of the vertigo detection network model comprises:
determining a training sample set based on the obtained sample posture information and the sample eye movement data;
and training according to the training sample set and a predetermined label vertigo state matched with each training sample to obtain the vertigo detection network model.
Sample posture information and sample eye movement data of sample users are acquired, and the vertigo state each sample user is in while producing that sample posture information and sample eye movement data is determined; this vertigo state serves as the label vertigo state matched with the training sample formed from the sample posture information and sample eye movement data. For example, the vertigo state of a sample user may be determined by the SSQ, or from data such as electroencephalogram, heart rate, blood pressure and galvanic skin response, without limitation here.
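One plausible way (an assumption for illustration, not specified by the patent) to build the training sample set is to cut the recorded time series into fixed-length windows and pair each window with the label vertigo state observed at its end, e.g. from an offline SSQ session or physiological ground truth:

```python
def make_training_set(features, labels, window=30):
    """Cut synchronized per-timestep features and vertigo labels into
    fixed-length windows suitable for a sequence model.

    features: list of per-timestep feature vectors (pose + eye data)
    labels:   per-timestep label vertigo states for the same timesteps
    """
    samples = []
    for end in range(window, len(features) + 1):
        x = features[end - window:end]   # one window of pose + eye data
        y = labels[end - 1]              # label at the window's last step
        samples.append((x, y))
    return samples
```

The window length (30 here) is a free parameter; longer windows give the sequence model more context at the cost of detection latency.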
In an alternative embodiment, training according to the training sample set and the predetermined label vertigo state matched with each training sample to obtain the vertigo detection network model comprises:
training, based on a long short-term memory (LSTM) network, according to the training sample set and the predetermined label vertigo state matched with each training sample, to obtain the vertigo detection network model.
A long short-term memory (LSTM) network is well suited to processing and predicting events separated by long intervals and delays in a time series. Vertigo builds up over time in the posture and eye-movement features, so basing the model on an LSTM improves the accuracy of learning vertigo features, and in turn the accuracy of the vertigo detection result.
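For reference, one LSTM cell step in pure Python — a didactic sketch with scalar input and state (the dict-based weight layout is an illustrative assumption; a real implementation would use a deep-learning framework). The forget gate is what lets the cell carry information across long intervals:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM time step for scalar input/state (didactic sketch).

    W holds 8 weights and b 4 biases, for the input (i), forget (f)
    and output (o) gates and the candidate cell value (g).
    """
    i = sigmoid(W["xi"] * x + W["hi"] * h + b["i"])     # input gate
    f = sigmoid(W["xf"] * x + W["hf"] * h + b["f"])     # forget gate
    o = sigmoid(W["xo"] * x + W["ho"] * h + b["o"])     # output gate
    g = math.tanh(W["xg"] * x + W["hg"] * h + b["g"])   # candidate value
    c_new = f * c + i * g            # cell state: gated memory update
    h_new = o * math.tanh(c_new)     # hidden state emitted to the next layer
    return h_new, c_new
```

Unrolled over the windows of posture and eye-movement features, the final hidden state would feed a classifier head that outputs the vertigo state.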
The embodiments of the invention acquire the posture information and eyeball movement data of a user to be detected who is wearing VR equipment, feed both as input to a pre-trained vertigo detection network model, and take the model output as the vertigo state detection result of the user. Because the posture information and eye movement data can be collected in real time by sensors already present in the VR equipment, and the information is therefore convenient to acquire, the 3D vertigo a user experiences while using the VR equipment can be detected in real time, improving the user's experience with the equipment.
Example two
Fig. 2 is a flowchart of a method for detecting 3D vertigo in the second embodiment of the present invention; this embodiment is further optimized on the basis of the first embodiment. As shown in fig. 2, the method includes:
Step 201, acquiring real posture information of the user to be detected through a posture detection device in the VR equipment, and/or acquiring virtual posture information of the avatar of the user to be detected in the virtual environment through a virtual sensor in the VR equipment.
The real posture information is the posture information of the user to be detected in reality; the virtual posture information is the posture information of the user to be detected in virtual reality.
For example, if the user to be detected is watching 3D images through the VR equipment without projecting an avatar of themselves into the 3D images, the virtual posture information of the avatar of the user to be detected in the virtual environment is acquired only through the virtual sensor in the VR equipment. If the user to be detected is experiencing a 3D game or the like through the VR equipment and an avatar of the user is projected into the virtual environment, the real posture information of the user is acquired through the posture detection device in the VR equipment, and the virtual posture information of the avatar in the virtual environment is acquired through the virtual sensor. VR equipment that can generate an avatar in a virtual environment is already provided with a posture detection device and a virtual sensor capable of detecting the user's real and virtual posture information; since the embodiment of the invention acquires posture information with detection devices that already exist in the VR equipment, it does not increase the cost of vertigo detection and is generally applicable.
Further, acquiring the real posture information of the user includes acquiring real head posture information through the head-mounted display of VR equipment with 6DoF positioning capability, and/or detecting real hand and leg posture information through posture detection apparatuses worn on the hands or legs.
Step 202, acquiring eyeball movement data of the user to be detected through an eyeball tracking sensor in the VR equipment.
Because users' expectations of the 3D experience keep rising, VR equipment is commonly equipped with an eyeball tracking sensor that collects the user's eyeball movement data, allowing more comprehensive services to be provided on that basis. For VR equipment that already has an eyeball tracking sensor, the eyeball movement data it collects is used directly as the input of the vertigo detection network model, which makes acquiring the model input convenient and simple, and avoids increasing the cost of the VR equipment with additional sensors that are ill-suited to it.
To improve the detection efficiency, step 201 and step 202 may be performed simultaneously.
And step 203, determining a vertigo state detection result of the user to be detected according to the posture information and the eyeball motion data based on the vertigo detection network model trained in advance.
In the embodiment of the invention, the vertigo state of the user to be detected is detected using data from the detection devices and sensors that already exist in the VR equipment, which improves the efficiency of vertigo state detection and reduces cost. Moreover, since the data can be acquired in real time, real-time vertigo state detection is achieved, and the user's vertigo state can be responded to in time.
The embodiments of the invention acquire the posture information and eyeball movement data of a user to be detected who is wearing VR equipment, feed both as input to a pre-trained vertigo detection network model, and take the model output as the vertigo state detection result of the user. Because the posture information and eye movement data can be collected in real time by sensors already present in the VR equipment, and the information is therefore convenient to acquire, the 3D vertigo a user experiences while using the VR equipment can be detected in real time, improving the user's experience with the equipment.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a 3D vertigo detection device according to the third embodiment of the present invention; it is applicable to detecting 3D vertigo in real time while a user uses VR equipment. As shown in fig. 3, the device includes:
the data acquisition module 310 is configured to acquire gesture information and eye movement data of a user to be detected;
and the vertigo detection module 320 is configured to determine an vertigo state detection result of the user to be detected according to the posture information and the eye movement data based on a vertigo detection network model trained in advance.
The embodiments of the invention acquire the posture information and eyeball movement data of a user to be detected who is wearing VR equipment, feed both as input to a pre-trained vertigo detection network model, and take the model output as the vertigo state detection result of the user. Because the posture information and eye movement data can be collected in real time by sensors already present in the VR equipment, and the information is therefore convenient to acquire, the 3D vertigo a user experiences while using the VR equipment can be detected in real time, improving the user's experience with the equipment.
Optionally, the data acquisition module includes a posture information acquisition unit specifically configured to:
acquire real posture information of the user to be detected through a posture detection device in the VR equipment, and/or
acquire virtual posture information of an avatar of the user to be detected in a virtual environment through a virtual sensor in the VR equipment.
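The patent leaves open how the real and virtual posture streams are combined. A common motion-sickness heuristic, used here purely as an assumed derived feature and not as the patent's stated method, is the mismatch between the two orientations:

```python
def posture_mismatch(real, virtual):
    """Euclidean mismatch between the user's real head orientation and the
    avatar's virtual orientation, each given as (pitch, yaw, roll) in degrees.

    Sensory conflict between real and perceived motion is a classic
    motion-sickness correlate; feeding this mismatch to the detection
    model is an illustrative assumption only.
    """
    return sum((r - v) ** 2 for r, v in zip(real, virtual)) ** 0.5
```

A persistent, growing mismatch would then be one plausible input feature alongside the raw posture samples.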
Optionally, the data acquisition module includes an eye movement data acquisition unit specifically configured to:
acquire eyeball movement data of the user to be detected through an eye-tracking sensor in the VR equipment.
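The patent does not say what form the eyeball movement data takes. One simple derived quantity, shown here as an assumption, is gaze speed between consecutive eye-tracker samples, which distinguishes smooth pursuit from rapid saccades:

```python
def gaze_velocities(gaze_points, dt):
    """Angular gaze speed between consecutive eye-tracker samples.

    gaze_points : list of (x, y) gaze angles in degrees
    dt          : sampling interval in seconds (assumed constant)
    Returns one speed in degrees/second per consecutive pair of samples.
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(gaze_points, gaze_points[1:]):
        # straight-line angular distance divided by the sample interval
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return speeds
```

Sequences of such speeds (or the raw gaze points) could then form the eye-movement half of the model input.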
Optionally, the apparatus further includes a stimulus source adjustment module, configured to:
adjust a vertigo stimulus source in the virtual environment according to the vertigo state detection result; where the vertigo stimulus source includes at least one of: the brightness of the virtual environment and the movement speed of the avatar of the user to be detected in the virtual environment.
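The adjustment step can be sketched as a small feedback rule over the two stimulus sources the patent names. The reduction factors below are arbitrary assumptions; the patent does not specify how much to dim or slow:

```python
def adjust_stimuli(vertigo_detected, brightness, move_speed,
                   dim_factor=0.8, slow_factor=0.7):
    """Reduce the two stimulus sources named in the patent (environment
    brightness and avatar movement speed) when vertigo is detected,
    and leave them unchanged otherwise.

    dim_factor / slow_factor are illustrative assumed constants.
    """
    if vertigo_detected:
        return brightness * dim_factor, move_speed * slow_factor
    return brightness, move_speed
```

Calling this once per detection window closes the loop: detect vertigo, weaken its stimulus sources, re-detect on the next window.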
Optionally, the apparatus further includes a network model training module, configured to train the vertigo detection network model, where the network model training module includes:
a sample set determination unit, configured to determine a training sample set based on acquired sample posture information and sample eye movement data; and
a training unit, configured to obtain the vertigo detection network model by training on the training sample set and predetermined labeled vertigo states matched with the training samples.
Optionally, the training unit is specifically configured to:
train a long short-term memory (LSTM) network on the training sample set and the predetermined labeled vertigo states matched with the training samples to obtain the vertigo detection network model.
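The patent names a long short-term memory network as the model backbone but gives no architecture details. As a rough, non-authoritative illustration of what such a recurrent classifier computes, here is a single-feature LSTM cell in plain Python; a real implementation would use vector states, learned weights, and a trained output layer, none of which are specified in the patent:

```python
import math

def _sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One LSTM step for a scalar input and scalar hidden/cell state.
    p maps gate parameter names to scalar weights: w* (input weight),
    u* (recurrent weight), b* (bias) for the input (i), forget (f),
    output (o) gates and the candidate (g)."""
    i = _sig(p["wi"] * x + p["ui"] * h + p["bi"])        # input gate
    f = _sig(p["wf"] * x + p["uf"] * h + p["bf"])        # forget gate
    o = _sig(p["wo"] * x + p["uo"] * h + p["bo"])        # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"])   # candidate state
    c_new = f * c + i * g                                # updated cell state
    h_new = o * math.tanh(c_new)                         # updated hidden state
    return h_new, c_new

def classify_sequence(xs, p, threshold=0.5):
    """Run the LSTM over a whole feature sequence and threshold the final
    hidden state into a binary vertigo (1) / no-vertigo (0) label."""
    h = c = 0.0
    for x in xs:
        h, c = lstm_step(x, h, c, p)
    return 1 if _sig(h) > threshold else 0
```

The recurrence is what lets the model use temporal patterns in the posture and eye-movement streams, rather than judging each instant in isolation.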
The 3D vertigo detection apparatus provided by this embodiment of the invention can execute the 3D vertigo detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects for executing that method.
Embodiment Four
Fig. 4 is a schematic structural diagram of an apparatus according to the fourth embodiment of the present invention. Fig. 4 illustrates a block diagram of an exemplary device 12 suitable for implementing embodiments of the present invention. The device 12 shown in Fig. 4 is only an example and should not limit the function or scope of use of the embodiments of the present invention in any way.
As shown in FIG. 4, device 12 is in the form of a general purpose computing device. The components of device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory device 28, and a bus 18 that couples various system components including the system memory device 28 and the processing unit 16.
The system storage 28 may include computer system readable media in the form of volatile storage, such as Random Access Memory (RAM) 30 and/or cache storage 32. Device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 4, and commonly referred to as a "hard drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Storage 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in storage 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these, or some combination thereof, may include an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system storage device 28, for example implementing the 3D vertigo detection method provided by the embodiments of the invention, which includes:
acquiring posture information and eyeball movement data of a user to be detected;
determining the vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data, based on a pre-trained vertigo detection network model.
Embodiment Five
The fifth embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the 3D vertigo detection method, which includes:
acquiring posture information and eyeball movement data of a user to be detected;
determining the vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data, based on a pre-trained vertigo detection network model.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method for detecting 3D vertigo, comprising:
acquiring posture information and eyeball movement data of a user to be detected;
determining a vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data, based on a pre-trained vertigo detection network model.
2. The method according to claim 1, wherein acquiring the posture information of the user to be detected comprises:
acquiring real posture information of the user to be detected through a posture detection device in the VR equipment; and/or
acquiring virtual posture information of an avatar of the user to be detected in a virtual environment through a virtual sensor in the VR equipment.
3. The method of claim 1, wherein acquiring the eyeball movement data of the user to be detected comprises:
acquiring the eyeball movement data of the user to be detected through an eye-tracking sensor in the VR equipment.
4. The method of claim 1, further comprising:
adjusting a vertigo stimulus source in the virtual environment according to the vertigo state detection result; wherein the vertigo stimulus source comprises at least one of: the brightness of the virtual environment and the movement speed of the avatar of the user to be detected in the virtual environment.
5. The method of claim 1, wherein the training process of the vertigo detection network model comprises:
determining a training sample set based on the obtained sample posture information and the sample eye movement data;
training on the training sample set and predetermined labeled vertigo states matched with the training samples to obtain the vertigo detection network model.
6. The method of claim 5, wherein training the vertigo detection network model on the training sample set and the predetermined labeled vertigo states matched with the training samples comprises:
training a long short-term memory (LSTM) network on the training sample set and the predetermined labeled vertigo states matched with the training samples to obtain the vertigo detection network model.
7. A 3D vertigo detection device, comprising:
a data acquisition module, configured to acquire posture information and eyeball movement data of a user to be detected; and
a vertigo detection module, configured to determine a vertigo state detection result of the user to be detected according to the posture information and the eyeball movement data, based on a pre-trained vertigo detection network model.
8. The device according to claim 7, wherein the data acquisition module comprises a posture information acquisition unit specifically configured to:
acquire real posture information of the user to be detected through a posture detection device in the VR equipment; and/or
acquire virtual posture information of an avatar of the user to be detected in a virtual environment through a virtual sensor in the VR equipment.
9. An apparatus, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method of detecting 3D vertigo according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of detecting 3D vertigo according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010754321.6A CN111933277A (en) | 2020-07-30 | 2020-07-30 | Method, device, equipment and storage medium for detecting 3D vertigo |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111933277A true CN111933277A (en) | 2020-11-13 |
Family
ID=73314427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010754321.6A Pending CN111933277A (en) | 2020-07-30 | 2020-07-30 | Method, device, equipment and storage medium for detecting 3D vertigo |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111933277A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170105905A (en) * | 2016-03-11 | 2017-09-20 | 주식회사 그루크리에이티브랩 | Method and apparatus for analyzing virtual reality content |
CN206601680U (en) * | 2016-11-15 | 2017-10-31 | 北京当红齐天国际文化发展集团有限公司 | Dizzy system is prevented based on sterically defined virtual reality |
CN108710206A (en) * | 2018-05-08 | 2018-10-26 | 苏州市启献智能科技有限公司 | A kind of method and apparatus of anti-dazzle and visual fatigue applied to VR displays |
CN109167989A (en) * | 2018-10-19 | 2019-01-08 | 广州土圭垚信息科技有限公司 | A kind of VR method for processing video frequency and system |
CN109478331A (en) * | 2016-07-06 | 2019-03-15 | 三星电子株式会社 | Display device and method for image procossing |
CN110280014A (en) * | 2019-05-21 | 2019-09-27 | 西交利物浦大学 | The method of spinning sensation is reduced under a kind of reality environment |
CN111103688A (en) * | 2019-12-11 | 2020-05-05 | 塔普翊海(上海)智能科技有限公司 | Anti-dizzy device, system and method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283612A (en) * | 2021-06-21 | 2021-08-20 | 西交利物浦大学 | Method, device and storage medium for detecting dizziness degree of user in virtual environment |
CN113283612B (en) * | 2021-06-21 | 2023-09-12 | 西交利物浦大学 | Method, device and storage medium for detecting user dizziness degree in virtual environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5517021A (en) | Apparatus and method for eye tracking interface | |
US5360971A (en) | Apparatus and method for eye tracking interface | |
US11301775B2 (en) | Data annotation method and apparatus for enhanced machine learning | |
CN110430802B (en) | Neurological disease diagnosis device and method using virtual reality | |
US10741175B2 (en) | Systems and methods for natural language understanding using sensor input | |
CN112673608A (en) | Apparatus, method and program for determining cognitive state of user of mobile device | |
CN110555426A (en) | Sight line detection method, device, equipment and storage medium | |
JP4868360B2 (en) | Interest trend information output device, interest trend information output method, and program | |
US11903712B2 (en) | Physiological stress of a user of a virtual reality environment | |
CN111933277A (en) | Method, device, equipment and storage medium for detecting 3D vertigo | |
KR20160068447A (en) | Method for determining region of interest of image and device for determining region of interest of image | |
CN115132364B (en) | Myopia risk determination method and device, storage medium and wearable device | |
JP2008046802A (en) | Interaction information output device, interaction information output method and program | |
KR20190085604A (en) | Method, apparatus and computer program for recognition of a user activity | |
US20220327956A1 (en) | Language teaching machine | |
Hosp et al. | States of confusion: Eye and head tracking reveal surgeons’ confusion during arthroscopic surgery | |
Wan et al. | A comprehensive head-mounted eye tracking review: software solutions, applications, and challenges | |
CN115762772B (en) | Method, device, equipment and storage medium for determining emotional characteristics of target object | |
Sharma et al. | Requirement analysis and sensor specifications–First version | |
Justiss | A low cost eye-tracking system for human-computer interface | |
Kocejko et al. | EMG and gaze based interaction with graphic interface of smart glasses application | |
JP2024046865A (en) | Information processing device and phobia improvement support method | |
JP2024046866A (en) | Information processing device and phobia improvement support method | |
WO2023043646A1 (en) | Providing directional awareness indicators based on context | |
JP2024046867A (en) | Information processing device and phobia improvement support method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||