CN112773357A - Image processing method for measuring virtual reality dizziness degree - Google Patents
- Publication number
- CN112773357A (publication); CN202011613502.3A (application)
- Authority
- CN
- China
- Prior art keywords
- central
- offset
- image
- virtual reality
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4005—Detecting, measuring or recording for evaluating the nervous system for evaluating the sensory system
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/70—Means for positioning the patient in relation to the detecting, measuring or recording means
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
Abstract
The invention provides an image processing method for measuring the degree of virtual reality vertigo, comprising the following steps: A. initial image processing: extract the auxiliary feature points and central feature points from the initial-state image and construct a standard feature vector; B. test image processing: extract the auxiliary feature points and central feature points from the test-state image and construct an offset feature vector; C. image fusion: fuse the standard feature vector with the offset feature vector and output an offset parameter for measuring the degree of vertigo. By splitting the body into a limb model, the scheme reduces complex graphical analysis to vector analysis, which lessens the data-processing load, lowers the hardware requirements, and improves data-processing efficiency.
Description
Technical Field
The invention belongs to the technical field of virtual reality, and in particular relates to an image processing method for measuring the degree of virtual reality vertigo.
Background
Existing virtual reality equipment cannot fully reproduce the motion trajectory perceived by the human brain. When a user sees the scene move in a virtual reality head-mounted display, the brain still signals that the body is stationary, while unnatural acceleration changes arrive frequently and irregularly, making the user feel dizzy. In other words, the head movement sensed by the user's vestibular organs is inconsistent with the movement observed by the eyes in the head-mounted display, and because the brain cannot reconcile this conflicting information, vertigo results. Vertigo is a problem the virtual reality industry cannot ignore; as its software, hardware, and related technologies continue to develop, vertigo will remain an important performance index. Quantifying the degree of virtual reality vertigo therefore helps to evaluate and grade the user experience of virtual reality application software such as games and teaching or training programs, the performance of virtual reality hardware such as head-mounted displays, and the maturity of virtual reality technologies such as rendering.
Providing an objective evaluation index and a grading index for the vertigo caused by virtual reality makes it possible to warn a user in advance of likely vertigo and its severity, so that the user can choose virtual reality software and hardware according to their vertigo rating, prepare ahead of time, and avoid discomfort or even unnecessary accidental injury. The index can also serve as an evaluation tool and grading criterion for the vertigo caused by existing or yet-to-be-developed virtual reality software, hardware, and related technologies.
Existing research starts mainly from improving the related virtual reality equipment, technologies, and methods, with the aim of alleviating virtual reality vertigo. However, because virtual reality mainly serves demands such as games, education and teaching, technical training, and special-situation experiences, its content itself easily induces vertigo. In a 'VR roller coaster', for example, the user visually experiences rapidly changing scenery while the vestibular organs sense that the head is actually stationary; the brain cannot reconcile the conflicting information, and vertigo results. In addition, a virtual reality head-mounted display uses lenses to fix the eyes at a certain distance from the screen, so the eyes' accommodation does not change when objects at different apparent distances are viewed, producing a vergence conflict and a further degree of vertigo. Problems such as real-versus-virtual parallax, picture delay and smearing, and unsynchronized depth of field also cause dizziness. Many users are, moreover, inherently prone to vertigo, so vertigo caused by virtual reality evidently cannot be avoided entirely.
It is therefore necessary to measure and quantify the bodily imbalance and inclination caused by vertigo as it occurs. Such measurement could in principle be based on images of the user, but current image processing techniques suffer from complex processing, low data-processing efficiency, and high hardware requirements, so image-based vertigo measurement has not been realized. An image processing method suited to measuring the degree of virtual reality vertigo is therefore needed.
Disclosure of Invention
The invention aims to solve the above problems by providing an image processing method for measuring the degree of virtual reality vertigo.
In order to achieve the purpose, the invention adopts the following technical scheme:
An image processing method for measuring the degree of virtual reality vertigo comprises the following steps:
A. initial image processing: extract the auxiliary feature points and central feature points from the initial-state image and construct a standard feature vector;
B. test image processing: extract the auxiliary feature points and central feature points from the test-state image and construct an offset feature vector;
C. image fusion: fuse the standard feature vector with the offset feature vector and output an offset parameter for measuring the degree of vertigo.
In the above image processing method for measuring the degree of virtual reality vertigo, in steps A and B, one auxiliary feature point and one central feature point are determined.
In the above image processing method for measuring the degree of virtual reality vertigo, in steps A and B, the central skeleton point of the left or right shoulder is extracted as the auxiliary feature point, and the midpoint between the two shoulders is extracted as the central feature point.
In the above image processing method for measuring the degree of virtual reality vertigo, in steps A and B, two auxiliary feature points and one central feature point are determined.
In the above image processing method for measuring the degree of virtual reality vertigo, in steps A and B, the central skeleton point of the left or right shoulder and the central point of the chest are extracted as the auxiliary feature points, and the midpoint between the two shoulders is extracted as the central feature point.
In the above image processing method for measuring the degree of virtual reality vertigo, in step B, if the user's standing position in the test state is displaced from the standing position in the initial state, the auxiliary feature points and the central feature point in the test-state image are corrected according to the standing-position offset before the offset feature vector is constructed.
In the above image processing method for measuring the degree of virtual reality vertigo, whether the user's standing position in the test state is displaced relative to the standing position in the initial state is judged as follows:
the standing positions of the user in the initial state and in the test state are acquired through a standing-position coordinate acquisition network to obtain the standing-position offset, and displacement is judged to exist when this offset exceeds an offset threshold.
In the above image processing method for measuring the degree of virtual reality vertigo, in step C, image fusion and output of the offset parameter are realized as follows:
C1. establish two layers containing the standard feature vector and the offset feature vector respectively;
C2. superpose the two layers to obtain the positional offset of the central feature point in the test state relative to the initial state, and the included angle between the offset feature vector and the standard feature vector;
C3. compute the product of the sine of the included angle and the magnitude of the standard feature vector, and add half of this product to the positional offset to obtain the offset parameter.
In the above image processing method for measuring the degree of virtual reality vertigo, in step A, the standard feature vector is constructed as follows:
A1. perform data preprocessing on the initial-state image, then perform image contour detection;
A2. extract the auxiliary feature points, the central feature point, and the centroid, and acquire their coordinates in the pixel coordinate system;
A3. convert the pixel coordinates into world coordinates, solve the pose from the world coordinates, and construct the standard feature vector.
In step B, the offset feature vector is constructed as follows:
B1. perform data preprocessing on the test-state image, then perform image contour detection;
B2. extract the auxiliary feature points, the central feature point, and the centroid, and acquire their coordinates in the pixel coordinate system;
B3. convert the pixel coordinates into world coordinates, solve the pose from the world coordinates, and construct the offset feature vector.
In the above image processing method for measuring the degree of virtual reality vertigo, in steps A1 and B1, the data preprocessing includes denoising and enhancement, and image contour detection is performed by a corresponding Python program.
The invention has the following advantages: by splitting the body into a limb model, with the central skeleton point of the left or right shoulder as the auxiliary feature point and the midpoint between the two shoulders as the central feature point forming the standard and offset feature vectors, complex graphical analysis is reduced to vector analysis, which lessens the data-processing load, lowers the hardware requirements, and improves data-processing efficiency. When the user's standing position has shifted, the central and auxiliary feature points of the test-state image are corrected before the offset feature vector is constructed, avoiding processing errors caused by the standing-position shift and ensuring the accuracy of the offset parameter output by the image processing.
Drawings
FIG. 1 is a first flowchart of the image processing method for measuring the degree of virtual reality vertigo according to the present invention;
FIG. 2 is a first schematic diagram of the positions of the feature points of the invention on the human body;
FIG. 3 is a second schematic diagram of the positions of the feature points of the invention on the human body;
FIG. 4 is a second flowchart of the image processing method for measuring the degree of virtual reality vertigo according to the present invention;
FIG. 5 is a schematic diagram of the distribution of the standing-position coordinate acquisition network on the test bench according to the present invention.
Reference numerals: test bench 5; sensor 6; standing-area auxiliary limiting device 7; central feature point 8; auxiliary feature point 9.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Virtual reality is a computer-generated, interactive visual simulation technology that gives the experiencer a sense of immersion through vision, hearing, touch, and other senses. When a user runs virtual reality software or hardware, the conflict between the vestibular system and the visual sense may cause vertigo of varying degree; the body's balance mechanism is disturbed and the body inclines correspondingly. By measuring and quantifying this inclination through image processing, the grade of vertigo can be evaluated.
As shown in fig. 1, this embodiment proposes an image processing method for measuring the degree of virtual reality vertigo, comprising:
A. initial image processing: extract the auxiliary feature points and central feature points from the initial-state image and construct a standard feature vector;
B. test image processing: extract the auxiliary feature points and central feature points from the test-state image and construct an offset feature vector;
C. image fusion: fuse the standard feature vector with the offset feature vector and output an offset parameter for measuring the degree of vertigo.
Specifically, as shown in fig. 2, in steps A and B, one auxiliary feature point and one central feature point are determined: the central skeleton point of the left or right shoulder is extracted as the auxiliary feature point, and the midpoint between the two shoulders is extracted as the central feature point.
Alternatively, as shown in fig. 3, two auxiliary feature points and one central feature point are determined: the central skeleton point of the left or right shoulder and the central point of the chest are extracted as the auxiliary feature points, and the midpoint between the two shoulders is extracted as the central feature point.
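As a minimal illustrative sketch (not the patented implementation — the function name and the (x, y) coordinate convention are assumptions for the example), the single-auxiliary-point configuration can be formed from two shoulder keypoints like this:

```python
def build_feature_vector(left_shoulder, right_shoulder):
    """Form the central feature point, auxiliary feature point, and
    feature vector from two shoulder keypoints (illustrative only).

    The midpoint between the shoulders serves as the central feature
    point; one shoulder's central skeleton point serves as the
    auxiliary feature point. The vector runs from the central point
    to the auxiliary point. Inputs are (x, y) pairs in one frame.
    """
    cx = (left_shoulder[0] + right_shoulder[0]) / 2.0
    cy = (left_shoulder[1] + right_shoulder[1]) / 2.0
    central = (cx, cy)          # central feature point 8
    auxiliary = left_shoulder   # auxiliary feature point 9 (left shoulder chosen here)
    vector = (auxiliary[0] - central[0], auxiliary[1] - central[1])
    return central, auxiliary, vector

# Upright posture: shoulders level, so the feature vector is horizontal.
central, _, v = build_feature_vector((-20.0, 150.0), (20.0, 150.0))
```

In an upright (initial-state) posture the vector is horizontal; body inclination tilts it, which is what the later fusion step measures.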
Further, as shown in fig. 4, in step B, if the user's standing position in the test state is displaced relative to the standing position in the initial state, the auxiliary feature points and the central feature point in the test-state image are corrected according to the standing-position offset before the offset feature vector is constructed.
Specifically, whether the user's standing position in the test state is displaced relative to the standing position in the initial state is judged as follows:
the standing positions of the user in the initial state and in the test state are acquired through the standing-position coordinate acquisition network to obtain the standing-position offset, and displacement is judged to exist when this offset exceeds the offset threshold. For example, suppose the user initially stands at the middle of the acquisition network, currently stands at its rightmost side in the test state, the distance between the two positions is 30 cm, and the offset threshold is set to 3 cm. Since 30 cm exceeds the threshold, displacement is judged to exist, and the auxiliary and central feature points are shifted 30 cm to the left as a whole to correct them.
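The correction described above can be sketched as follows; the function name, the 2-D centimetre coordinates, and the 3 cm default threshold (taken from the example) are illustrative assumptions, not the patented implementation:

```python
def correct_for_station_offset(points, initial_pos, test_pos, threshold_cm=3.0):
    """Shift feature points back by the standing-position offset.

    points: list of (x, y) feature points from the test-state image.
    initial_pos / test_pos: standing positions reported by the
    standing-position coordinate acquisition network, in centimetres.
    Returns the points unchanged if the offset is within the
    threshold; otherwise shifts them all back as a whole.
    """
    dx = test_pos[0] - initial_pos[0]
    dy = test_pos[1] - initial_pos[1]
    if (dx * dx + dy * dy) ** 0.5 <= threshold_cm:
        return points  # no significant displacement
    # Displace every feature point back as a whole, as in the 30 cm example.
    return [(x - dx, y - dy) for (x, y) in points]

# User drifted 30 cm to the right: the point is shifted 30 cm back left.
corrected = correct_for_station_offset([(30.0, 150.0)], (0.0, 0.0), (30.0, 0.0))
```

Shifting the whole point set keeps the feature vector's shape intact while removing the stance drift from the positional offset measured later.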
Specifically, as shown in fig. 5, the standing-position coordinate acquisition network comprises sensors 6 arranged in a grid on the test bench 5. The sensors 6 are connected to the control module 2 of the system, such as a computer, and the distribution of the sensors 6 on the test bench 5 is stored in the control module 2. Each sensor 6 is a temperature sensor, a pressure sensor, or a photoelectric sensor. When photoelectric sensors are used, the upper surface of the test bench directly above each sensor has an uncovered or transparent window that lets outside light reach it, or the upper cover of the test bench is made entirely transparent. With temperature sensors, for example, when a temperature change is registered the system knows which sensor 6 changed, and hence where the user is standing and the current offset from the initial position.
Further, this embodiment preferably provides a standing-position coordinate acquisition area on the test bench 5, with the sensors 6 arranged inside it and a standing-area auxiliary limiting device 7 around its circumference, to prevent the user from unconsciously leaving the area and making it impossible for the system to acquire the user's position.
Specifically, in step A, the standard feature vector is constructed as follows:
A1. perform data preprocessing, including denoising and enhancement, on the initial-state image, then perform image contour detection through a corresponding Python program;
A2. extract the auxiliary feature points, the central feature point, and the centroid, and acquire their coordinates in the pixel coordinate system;
A3. convert the pixel coordinates into world coordinates, solve the pose from the world coordinates, and construct the standard feature vector.
In step B, the offset feature vector is constructed as follows:
B1. perform data preprocessing, including denoising and enhancement, on the test-state image, then perform image contour detection through a corresponding Python program;
B2. extract the auxiliary feature points, the central feature point, and the centroid, and acquire their coordinates in the pixel coordinate system;
B3. convert the pixel coordinates into world coordinates, solve the pose from the world coordinates, and construct the offset feature vector.
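Steps A3 and B3 solve a pose from world coordinates; as a simplified, hypothetical sketch (assuming a fixed camera viewing the user's plane at a known distance, so the full pose solution reduces to a uniform scale about a reference pixel), the pixel-to-world conversion might look like this — the function names, the scale factor, and the reference pixel are assumptions:

```python
def pixel_to_world(px, py, scale_cm_per_px, origin_px):
    """Map pixel coordinates to world coordinates with a fixed camera.

    Assumes the subject stands in a plane at a known distance, so
    the mapping is a uniform scale about the pixel that images the
    world origin. The y-axis is flipped because image rows grow
    downward while world height grows upward.
    """
    wx = (px - origin_px[0]) * scale_cm_per_px
    wy = (origin_px[1] - py) * scale_cm_per_px
    return wx, wy

def feature_vector(world_points):
    """Vector from the central feature point to the auxiliary point."""
    (cx, cy), (ax, ay) = world_points
    return ax - cx, ay - cy

# 0.5 cm per pixel; the world origin is imaged at pixel (320, 480).
pts = [pixel_to_world(x, y, 0.5, (320, 480)) for (x, y) in [(320, 180), (280, 180)]]
v = feature_vector(pts)
```

A real deployment would instead solve the camera pose (e.g. a perspective-n-point solution) rather than assume a fixed scale, but the downstream vector construction is the same.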
In step C, image fusion and output of the offset parameter are realized as follows:
C1. establish two layers containing the standard feature vector and the offset feature vector respectively;
C2. superpose the two layers to obtain the positional offset of the central feature point in the test state relative to the initial state, and the included angle between the offset feature vector and the standard feature vector;
C3. compute the product of the sine of the included angle and the magnitude of the standard feature vector, and add half of this product to the positional offset to obtain the offset parameter.
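The fusion rule of steps C2 and C3 amounts to a one-line formula; in this sketch the function name and the numeric example values are assumptions for illustration:

```python
import math

def offset_parameter(position_offset, std_vector, included_angle_rad):
    """Fuse the two layers' measurements into one offset parameter.

    Per steps C2-C3: the positional offset of the central feature
    point, plus half of |standard vector| * sin(included angle),
    which accounts for the sway of the auxiliary point about the
    body's centre.
    """
    magnitude = math.hypot(*std_vector)
    return position_offset + 0.5 * magnitude * math.sin(included_angle_rad)

# 2 cm central-point drift, a 20 cm shoulder vector tilted 30 degrees:
# 2 + 0.5 * 20 * sin(30 deg) = 7 (cm).
p = offset_parameter(2.0, (20.0, 0.0), math.radians(30.0))
```

A larger offset parameter then maps to a higher vertigo grade when the measurement is interpreted.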
This scheme processes the images used in measuring the degree of virtual reality vertigo. By splitting the body into a limb model, with the central skeleton point of the left or right shoulder as the auxiliary feature point and the midpoint between the two shoulders as the central feature point forming the standard and offset feature vectors, complex graphical analysis is reduced to vector analysis, which lessens the data-processing load, lowers the hardware requirements, and improves data-processing efficiency. When the user's standing position has shifted, the central and auxiliary feature points of the test-state image are corrected before the offset feature vector is constructed, avoiding processing errors caused by the shift and ensuring the accuracy of the offset parameter output by the image processing.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Although terms such as test bench 5, sensor 6, and standing-area auxiliary limiting device 7 are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the nature of the invention more conveniently; construing them as imposing any additional limitation would be contrary to the spirit of the present invention.
Claims (10)
1. An image processing method for measuring the degree of virtual reality vertigo, characterized by comprising the following steps:
A. initial image processing: extracting the auxiliary feature points and central feature points from the initial-state image and constructing a standard feature vector;
B. test image processing: extracting the auxiliary feature points and central feature points from the test-state image and constructing an offset feature vector;
C. image fusion: fusing the standard feature vector with the offset feature vector and outputting an offset parameter for measuring the degree of vertigo.
2. The image processing method for measuring the degree of virtual reality vertigo according to claim 1, wherein in steps A and B, one auxiliary feature point and one central feature point are determined.
3. The image processing method for measuring the degree of virtual reality vertigo according to claim 2, wherein in steps A and B, the central skeleton point of the left or right shoulder is extracted as the auxiliary feature point, and the midpoint between the two shoulders is extracted as the central feature point.
4. The image processing method for measuring the degree of virtual reality vertigo according to claim 1, wherein in steps A and B, two auxiliary feature points and one central feature point are determined.
5. The image processing method for measuring the degree of virtual reality vertigo according to claim 4, wherein in steps A and B, the central skeleton point of the left or right shoulder and the central point of the chest are extracted as the auxiliary feature points, and the midpoint between the two shoulders is extracted as the central feature point.
6. The image processing method for measuring the degree of virtual reality vertigo according to any one of claims 1-5, wherein in step B, if the user's standing position in the test state is displaced relative to the standing position in the initial state, the auxiliary feature points and the central feature point in the test-state image are corrected according to the standing-position offset before the offset feature vector is constructed.
7. The image processing method for measuring the degree of virtual reality vertigo according to claim 6, wherein whether the user's standing position in the test state is displaced relative to the standing position in the initial state is judged as follows:
the standing positions of the user in the initial state and in the test state are acquired through the standing-position coordinate acquisition network to obtain the standing-position offset, and displacement is judged to exist when this offset exceeds the offset threshold.
8. The image processing method for measuring the degree of virtual reality vertigo according to claim 1, wherein in step C, image fusion and output of the offset parameter are realized as follows:
C1. establishing two layers containing the standard feature vector and the offset feature vector respectively;
C2. superposing the two layers to obtain the positional offset of the central feature point in the test state relative to the initial state, and the included angle between the offset feature vector and the standard feature vector;
C3. computing the product of the sine of the included angle and the magnitude of the standard feature vector, and adding half of this product to the positional offset to obtain the offset parameter.
9. The image processing method for virtual reality vertigo degree measurement according to claim 1, wherein in method a, a standard feature vector is constructed by:
A1. carrying out data preprocessing on the initial state image and then carrying out image contour detection;
A2. extracting the auxiliary feature points, the central feature point and the centroid, and acquiring their coordinates in the pixel coordinate system;
A3. converting the coordinates in the pixel coordinate system into coordinates in the world coordinate system, solving the pose from the world coordinates, and constructing the standard feature vector.
In method B, the offset feature vector is constructed by:
B1. performing data preprocessing on the test-state image and then performing image contour detection;
B2. extracting the auxiliary feature points, the central feature point and the centroid, and acquiring their coordinates in the pixel coordinate system;
B3. converting the coordinates in the pixel coordinate system into coordinates in the world coordinate system, solving the pose from the world coordinates, and constructing the offset feature vector.
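Steps A3/B3 hinge on a pixel-to-world conversion followed by building a feature vector. A minimal sketch under standard pinhole-camera assumptions (intrinsics K, rotation R, translation t, and a known per-point depth — the claim does not say how depth or the camera parameters are obtained):

```python
import numpy as np

def pixel_to_world(pts_px, K, R, t, depths):
    """Back-project pixel coordinates (u, v) with known depth z into
    world coordinates using x_world = R^T (z * K^-1 [u, v, 1]^T - t)."""
    K_inv = np.linalg.inv(K)
    world = []
    for (u, v), z in zip(pts_px, depths):
        cam = K_inv @ np.array([u, v, 1.0]) * z   # point in camera frame
        world.append(R.T @ (cam - t))             # point in world frame
    return np.array(world)

def feature_vector(central_pt, centroid):
    """One plausible construction: the vector from the contour centroid
    to the central feature point."""
    return np.asarray(central_pt, dtype=float) - np.asarray(centroid, dtype=float)
```

With an identity camera (K = R = I, t = 0), a pixel (2, 3) at depth 1 back-projects to world point (2, 3, 1); the feature vector then encodes the pose-dependent geometry that methods A and B compare.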
10. The method as claimed in claim 9, wherein in steps A1 and B1, the data preprocessing comprises data denoising and data enhancement, and the image contour detection is performed using a corresponding Python program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011613502.3A CN112773357A (en) | 2020-12-30 | 2020-12-30 | Image processing method for measuring virtual reality dizziness degree |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011613502.3A CN112773357A (en) | 2020-12-30 | 2020-12-30 | Image processing method for measuring virtual reality dizziness degree |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112773357A true CN112773357A (en) | 2021-05-11 |
Family
ID=75754025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011613502.3A Pending CN112773357A (en) | 2020-12-30 | 2020-12-30 | Image processing method for measuring virtual reality dizziness degree |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112773357A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104536579A (en) * | 2015-01-20 | 2015-04-22 | 刘宛平 | Interactive three-dimensional scenery and digital image high-speed fusing processing system and method |
2020-12-30: CN application CN202011613502.3A filed, published as CN112773357A; status: Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113301307A (en) * | 2021-05-25 | 2021-08-24 | 苏州昆承智能车检测科技有限公司 | Video stream fusion method and system based on radar camera |
CN113301307B (en) * | 2021-05-25 | 2022-07-12 | 苏州昆承智能车检测科技有限公司 | Video stream fusion method and system based on radar camera |
CN113283612A (en) * | 2021-06-21 | 2021-08-20 | 西交利物浦大学 | Method, device and storage medium for detecting dizziness degree of user in virtual environment |
CN113283612B (en) * | 2021-06-21 | 2023-09-12 | 西交利物浦大学 | Method, device and storage medium for detecting user dizziness degree in virtual environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107656613B (en) | Human-computer interaction system based on eye movement tracking and working method thereof | |
CN109598798B (en) | Virtual object fitting method and virtual object fitting service system | |
Swan et al. | A perceptual matching technique for depth judgments in optical, see-through augmented reality | |
CN112069933A (en) | Skeletal muscle stress estimation method based on posture recognition and human body biomechanics | |
CN111729283B (en) | Training system and method based on mixed reality technology | |
Livingston et al. | Pursuit of “X-ray vision” for augmented reality | |
CN112773357A (en) | Image processing method for measuring virtual reality dizziness degree | |
CN113366491B (en) | Eyeball tracking method, device and storage medium | |
KR20170002100A (en) | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same | |
CN110717391A (en) | Height measuring method, system, device and medium based on video image | |
CN107067456A (en) | A kind of virtual reality rendering method optimized based on depth map | |
CN113421346A (en) | Design method of AR-HUD head-up display interface for enhancing driving feeling | |
CN114333046A (en) | Dance action scoring method, device, equipment and storage medium | |
Haggag et al. | Body parts segmentation with attached props using rgb-d imaging | |
CN110348370B (en) | Augmented reality system and method for human body action recognition | |
CN112656404B (en) | System and method for measuring virtual reality dizziness degree based on image processing | |
CN112933581A (en) | Sports action scoring method and device based on virtual reality technology | |
CN116935008A (en) | Display interaction method and device based on mixed reality | |
CN113239848B (en) | Motion perception method, system, terminal equipment and storage medium | |
CN108628453A (en) | Virtual reality image display methods and terminal | |
CN110097644B (en) | Expression migration method, device and system based on mixed reality and processor | |
CN113612985A (en) | Processing method of interactive VR image | |
KR20170143223A (en) | Apparatus and method for providing 3d immersive experience contents service | |
US20170302904A1 (en) | Input/output device, input/output program, and input/output method | |
Naepflin et al. | Can movement parallax compensate lacking stereopsis in spatial explorative search tasks? |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210511 |