CN112114660A - Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range - Google Patents

Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range

Info

Publication number
CN112114660A
CN112114660A
Authority
CN
China
Prior art keywords
foot
displacement
human
small space
space range
Prior art date
Legal status
Pending
Application number
CN202010725083.6A
Other languages
Chinese (zh)
Inventor
崔浩
李杨寰
高峰
尚泰
Current Assignee
Hunan Glonavin Information Technology Co ltd
Hunan Yehaoah Intelligent Technology Co ltd
Original Assignee
Hunan Glonavin Information Technology Co ltd
Hunan Yehaoah Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Glonavin Information Technology Co ltd, Hunan Yehaoah Intelligent Technology Co ltd filed Critical Hunan Glonavin Information Technology Co ltd
Priority to CN202010725083.6A
Publication of CN112114660A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass

Abstract

The invention discloses a method for realizing large-scale movement of a virtual world character by using the motion of the human foot within a small space. While the system runs, a foot displacement sensing module solves pedestrian displacement in real time in a built-in MCU (micro controller unit) using inertial navigation technology; a stillness detection technique then identifies the stationary moments during the pedestrian's motion. At each detected stationary moment, a zero-velocity update based on Kalman filtering yields an accurate pedestrian displacement result, from which a gait recognition algorithm extracts the gait information of the single foot. The single-foot gait information is sent over Bluetooth to a central processing module, which obtains the pedestrian's gait information through a biped gait fusion recognition algorithm and finally reports it to virtual displacement display software for real-time display.

Description

Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range
Technical Field
The invention relates to the technical field of virtual world character control, in particular to a method for realizing large-scale movement of a virtual world character by utilizing the motion of human feet in a small space range.
Background
With the technological progress of recent years, virtual reality (VR) has become a research hotspot. VR uses a computer to simulate a virtual environment with which the user can interact through the sense organs, producing a "real" experience. As the technology matures, VR has been widely applied in the military, production, medical, education and entertainment fields, for example virtual battlefield simulation training and virtual competitive games. Such applications require virtualizing the human motion trajectory, that is, acquiring and processing human motion data through various devices to simulate the motion trajectory.
At present, the most widely used commercial solution is the optical motion capture system. It requires attaching marker points to the tracked object and then tracking those markers to capture human motion. Based on computer-vision principles, multiple high-speed cameras monitor and track the target feature points from different angles to complete the capture task. When the cameras shoot at a sufficiently high rate, the motion trajectory of each point can be derived from the image sequence. Usually 6 to 8 cameras are distributed around the scene, and to obtain accurate motion data the shooting rate of each camera should be no lower than 60 frames per second. Current systems are divided into active and passive types; the main difference is how the markers emit light: passive markers reflect an external light source, while active markers emit light from light-emitting diodes.
Although the optical motion capture system is currently most widely used commercially, it has many disadvantages due to the characteristics of the technology itself, mainly including the following four aspects:
1. Optical signals are easily disturbed by illumination conditions, viewing angles and shadows, and are easily blocked by obstacles. The optical motion capture system is therefore sensitive to light in the capture field of view and places high demands on the lighting and reflection conditions of the site; markers are frequently confused or occluded during use, causing loss of motion information or wrong calculation and requiring considerable later manual intervention;
2. the marking points need to be identified, tracked and the spatial coordinates need to be calculated, so that the workload of algorithm processing is large and the real-time performance is poor;
3. the equipment is expensive, the measuring distance is limited, the installation and the positioning are more complicated, and the carrying is inconvenient;
4. When the application scene is large, the range of human motion grows accordingly, so the system either cannot be used at all or requires more installed equipment, which raises cost and limits application.
Disclosure of Invention
The present invention provides a method for realizing a large-scale movement of a virtual world character by using the motion of human feet in a small space range, aiming at the above technical problems of the existing optical motion capture system.
The technical problem to be solved by the invention can be realized by the following technical scheme:
a method for realizing the large-scale movement of a virtual world character by utilizing the motion of human feet in a small space range comprises the following steps:
step 1: sensing the displacement of the human foot in a small space range and resolving the sensed displacement of the human foot in the small space range in real time to obtain IMU (inertial measurement unit) original data of the human foot in the small space range motion process;
step 2: detecting static moment data in IMU original data by a static detection method;
step 3: performing zero-velocity correction on the stationary-moment data detected in step 2 using a Kalman filtering algorithm to obtain an accurate human foot displacement result;
step 4: obtaining the gait information of a single foot through a gait recognition algorithm using the accurate foot displacement result obtained in step 3;
step 5: obtaining the gait information of the human body from the single-foot gait information obtained in step 4 through a biped gait fusion recognition algorithm;
step 6: reporting the human gait information obtained in step 5 to the virtual displacement display software for real-time display.
In a preferred embodiment of the present invention, in step 1, the displacement of the human foot in the small space range includes one of, or a combination of two or more of, forward movement, backward movement, left-side movement, right-side movement, stepping in place, jumping in place, leg kicking and motion acceleration of the human leg, preferably stepping in place.
In a preferred embodiment of the present invention, in step 1, the device for sensing the displacement of the human foot in the small space range is a foot displacement sensing module bound on the human foot.
In a preferred embodiment of the present invention, the foot displacement sensing module is a MEMS inertial device.
In a preferred embodiment of the present invention, in step 1, the steps of sensing the displacement of the human foot in the small space range and calculating the sensed displacement of the human foot in the small space range in real time to obtain the IMU raw data of the human foot in the small space range include:
step 1.1: the velocity, position and attitude of the movement of the human foot is sensed by the gyroscope and accelerometer in the MEMS inertial device, and the IMU raw data is shown in fig. 4a and 4 b.
In a preferred embodiment of the present invention, in step 1, the device for sensing the displacement of the human foot in the small space range is a foot pad provided with five sensing areas.
In a preferred embodiment of the present invention, in step 1, the device for sensing the displacement of the human foot in the small space range is an infrared sensing device or a laser sensing device.
Gait recognition presupposes that a complete gait can be detected. Analysis of the IMU raw data shows that a short stationary interval exists after each gait cycle ends, and such an interval is also the prerequisite for zero-velocity correction. Stillness detection is therefore essential. In a preferred embodiment of the present invention, the specific steps of detecting the stationary-moment data in the IMU raw data by the stillness detection method are as follows:
A detection value is computed at each moment by a stationary-interval detection method; when the detection value is smaller than a set threshold, the moment is judged stationary.
The stationary-interval detection method adopts the angular rate energy (ARE) test. Denote the angular velocity at time $k$ as $\omega_k$; with a detection window of $N$ samples, the angular-velocity energy within the window is

$$E_k = \frac{1}{N} \sum_{j=k}^{k+N-1} \lVert \omega_j \rVert^2$$

Since the angular velocity is almost zero in a stationary interval, the energy must fall below a threshold $\gamma$, so the ARE criterion is

$$\text{stationary} \iff E_k < \gamma$$
in a preferred embodiment of the present invention, in step 3, the specific steps of performing zero-speed correction on the static moment data detected in step 2 by using a kalman filter algorithm to obtain an accurate human foot displacement result are as follows:
the zero-speed correction algorithm estimates errors of speed, position and attitude through a Kalman filter by using observed quantity (namely, the speed is zero) obtained when the pedestrian is static, and feeds the estimated error parameters back to an inertial navigation system, so that the pedestrian displacement calculation precision is improved.
Kalman filtering is a recursive linear minimum-variance estimation. Its advantage is that the filter is designed in the time domain by a state-space method and the signal characteristics are described by a state-transition equation, which avoids decomposing the signal power spectrum. Given the stochastic state-space model

$$X_k = \Phi_{k/k-1} X_{k-1} + \Gamma_{k/k-1} W_{k-1}, \qquad Z_k = H_k X_k + V_k$$

where $X_k$ is the state vector, $Z_k$ the measurement vector, $\Phi_{k/k-1}$ the one-step state-transition matrix, $\Gamma_{k/k-1}$ the system-noise distribution matrix, $H_k$ the measurement matrix, $W_{k-1}$ the system-noise vector and $V_k$ the measurement-noise vector. Both noise vectors are zero-mean (normally distributed) Gaussian white sequences and are uncorrelated with each other, namely

$$E[W_k] = 0,\; E[W_k W_j^T] = Q_k \delta_{kj}; \quad E[V_k] = 0,\; E[V_k V_j^T] = R_k \delta_{kj}; \quad E[W_k V_j^T] = 0$$
The Kalman filtering process can be divided into five basic formulas, as follows:

State one-step prediction: $\hat{X}_{k/k-1} = \Phi_{k/k-1} \hat{X}_{k-1}$

State one-step prediction mean-square error: $P_{k/k-1} = \Phi_{k/k-1} P_{k-1} \Phi_{k/k-1}^T + \Gamma_{k/k-1} Q_{k-1} \Gamma_{k/k-1}^T$

Filter gain: $K_k = P_{k/k-1} H_k^T \left( H_k P_{k/k-1} H_k^T + R_k \right)^{-1}$

State estimation: $\hat{X}_k = \hat{X}_{k/k-1} + K_k \left( Z_k - H_k \hat{X}_{k/k-1} \right)$

State estimation mean-square error: $P_k = (I - K_k H_k) P_{k/k-1}$
The zero-velocity correction algorithm uses the observation available at stationary moments (namely, that the velocity is zero) to estimate the other error parameters and feeds the estimated errors back to the inertial navigation system, thereby improving the accuracy of the pedestrian displacement calculation.
In a preferred embodiment of the present invention, in step 4, the specific steps of obtaining the gait information of the human single foot by using the accurate human foot displacement result obtained in step 3 through a gait recognition algorithm are as follows:
When a single foot is stationary, its movement has either finished or not yet started; when stillness is detected, the single-foot gait information can be judged by comparing the displacement result at that moment with the displacement result at the previous stationary moment.
In a preferred embodiment of the present invention, in step 5, the specific step of obtaining the gait information of the human body from the gait information of the single foot of the human body obtained in step 4 by the bipedal gait fusion recognition algorithm is:
Using the same principle as for the single-foot gait judgment, the two pieces of single-foot gait information are compared again to finally obtain the gait information of the human body.
In a preferred embodiment of the present invention, in step 6, the step of reporting the gait information of the human body obtained in step 5 to the virtual displacement display software for real-time display specifically comprises:
The human gait information is sent to the software over Bluetooth according to a specified protocol; after receiving a data packet, the software first parses it according to the protocol to obtain the gait information of the human body and then displays it.
By adopting the above technical scheme, the invention uses MEMS-inertial-device-based technologies such as human motion gait and direction recognition to let a real-world person control a virtual-scene character to move over a large range while reducing the space required. At the same time it can capture actions such as jumping and leg kicking, and gaits such as left movement, right movement, forward movement, backward movement and stepping, technically overcoming the drawbacks of the optical approach.
The invention is different from the current mainstream optical motion capture system, and has the following main advantages:
(1) the device is not influenced by illumination, shading and other environmental factors;
(2) the device can be used indoors and outdoors and can work around the clock;
(3) the appearance is small, the carrying is easy, the use and the operation are simple, and the price is low;
(4) the required space is small, and the application scene is not restricted.
Drawings
FIG. 1 is a schematic view of a foot displacement sensor module of the present invention being worn on a human foot.
FIG. 2 is a flow chart of the method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in a small space range.
Fig. 3 is a schematic diagram of a MEMS inertial device of the present invention.
FIGS. 4a and 4b are schematic views of IMU raw data during motion of a human foot according to the present invention.
FIG. 5 is a diagram illustrating the detection result of the static region of the human foot according to the present invention.
FIG. 6 is a schematic view of virtual displacement of the human foot in the forward mode according to the present invention.
Fig. 7 is a schematic view of the virtual displacement of the human foot in the backward mode according to the present invention.
FIG. 8 is a schematic view of the virtual displacement of the human foot in the left displacement mode according to the present invention.
FIG. 9 is a schematic view of the virtual displacement of the human foot in the right displacement mode according to the present invention.
FIG. 10 is a schematic view of virtual displacement of the foot of the human body in the stepping mode in situ according to the present invention.
Detailed Description
Aiming at the defects of the existing optical motion capture system, the invention adopts MEMS inertial devices and the technologies of inertial navigation, human motion gait and direction recognition and the like, and technically overcomes the defects caused by the optical technology.
The specific embodiment of the invention uses a MEMS inertial device to let a real person control the large-range displacement of a virtual character within a small space, so that the application scene is no longer limited by the usable space; even a large application scene can have the human motion trajectory virtualized inside a small space. Other devices for sensing the small-space displacement of the human foot are not excluded, for example a foot pad provided with five sensing areas, an infrared sensing device or a laser sensing device.
The invention discloses a system for realizing the control of a virtual character to move in a large range by a real person under a small space condition by utilizing an MEMS inertial device.
Referring to fig. 1, before use the foot displacement sensing module 10 is bound to the instep of the human foot 20, one for each of the left and right feet; the central processing module is connected to a computer through a USB interface, and virtual displacement display software is installed on the computer. At the start of use, the feet are kept still and the two foot displacement sensing modules 10 are switched on. As the pedestrian moves, each foot displacement sensing module 10 solves the motion information of its foot in real time in the built-in MCU and sends it to the central processing module over Bluetooth; after obtaining the motion information of both feet, the central processing module performs fusion recognition, finally obtains the pedestrian's displacement information and reports it to the virtual displacement display software for real-time display.
Fig. 2 shows a flow diagram of the method of the invention for realizing large-scale movement of a virtual world character by using the motion of the human foot within a small space. The method mainly comprises four parts: first, inertial navigation technology; second, stillness detection technology; third, zero-velocity correction based on Kalman filtering; and fourth, a movement gait and direction recognition algorithm.
The method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range is described in detail by the following specific steps:
step 1: sensing the displacement of the human foot 20 in a small space range by adopting the foot displacement sensing module 10, and calculating the sensed displacement of the human foot 20 in the small space range in real time to obtain IMU (inertial measurement unit) original data of the human foot 20 in the small space range motion process (see fig. 4a and 4 b);
the method comprises the following steps: in the operation process of the system, the foot displacement sensing module 10 utilizes an inertial navigation method to perform real-time calculation of the displacement of the human foot 20 in a built-in MCU. Referring to fig. 3, the inertial navigation system of the present invention adopts the existing strapdown inertial navigation system, which is an autonomous navigation method for measuring speed, positioning and attitude of a carrier by using the measurements of a gyroscope 30 and an accelerometer 40.
When the human foot steps in place, the acceleration and angular velocity measured by the gyroscope 30 and the accelerometer 40 are first error-compensated using the coefficients built into the error compensation module 50. The compensated data are fed to the direction cosine matrix module 60 and the acceleration coordinate conversion module 70, which transform them from the carrier coordinate system to the navigation coordinate system. The results are then passed to the attitude calculation module 80 and the navigation calculation module 90 for attitude and navigation calculation, which output attitude, velocity and position data; the IMU data composed of these are output through the navigation output module 100 (see figs. 4a and 4b).
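As a concrete illustration of the mechanization chain above (error compensation, coordinate conversion, attitude and navigation calculation), the following is a minimal 2-D strapdown propagation step: it integrates the gyro yaw rate to update heading, rotates body-frame acceleration into the navigation frame, and integrates to velocity and position. It is a didactic reduction with assumed sample values, not the patent's actual MCU code.

```python
import math

# Minimal 2-D strapdown propagation step (didactic reduction of the full
# attitude/velocity/position mechanization; all numeric values are assumed).

def strapdown_step(state, gyro_z, accel_body, dt):
    yaw, vx, vy, px, py = state
    yaw += gyro_z * dt                          # attitude update (yaw integration)
    c, s = math.cos(yaw), math.sin(yaw)
    ax = c * accel_body[0] - s * accel_body[1]  # body-to-navigation rotation
    ay = s * accel_body[0] + c * accel_body[1]
    vx += ax * dt; vy += ay * dt                # velocity update
    px += vx * dt; py += vy * dt                # position update
    return (yaw, vx, vy, px, py)

state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                            # 1 s at 100 Hz, 1 m/s^2 forward
    state = strapdown_step(state, 0.0, (1.0, 0.0), 0.01)
print(round(state[1], 2))                       # forward velocity after 1 s: 1.0
```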
Step 2: detecting the stationary-moment data in the IMU raw data by a stillness detection method, that is, judging the stationary moments of the pedestrian during motion with a stillness detection technique;
the gait recognition is based on the premise that the complete gait can be detected, and analysis of IMU original data shows that a small section of static interval exists after each gait is finished, and the static interval is required on the premise of zero-speed correction. The stationary detection technique is therefore very important.
The specific steps of detecting the static moment data in the IMU original data by the static detection method are as follows:
A detection value is computed at each moment by a stationary-interval detection method; when the detection value is smaller than a set threshold, the moment is judged stationary.
The stationary-interval detection method adopts the angular rate energy (ARE) test. Denote the angular velocity at time $k$ as $\omega_k$; with a detection window of $N$ samples, the angular-velocity energy within the window is

$$E_k = \frac{1}{N} \sum_{j=k}^{k+N-1} \lVert \omega_j \rVert^2$$

Since the angular velocity is almost zero in a stationary interval, the energy must fall below a threshold $\gamma$, so the ARE criterion is

$$\text{stationary} \iff E_k < \gamma$$
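The ARE test above can be sketched as follows; the window contents and the threshold value are illustrative assumptions, not figures from the patent.

```python
# Sketch of the angular-rate-energy (ARE) stillness test; window contents
# and the threshold are illustrative assumptions.

def angular_rate_energy(gyro_window):
    """Mean squared angular-velocity norm over an N-sample window."""
    n = len(gyro_window)
    return sum(wx * wx + wy * wy + wz * wz for (wx, wy, wz) in gyro_window) / n

def is_stationary(gyro_window, threshold=0.02):  # rad^2/s^2, assumed value
    return angular_rate_energy(gyro_window) < threshold

still     = [(0.001, -0.002, 0.001)] * 20   # near-zero rates: foot at rest
swinging  = [(1.2, -0.4, 0.6)] * 20         # large rates: foot in swing phase
print(is_stationary(still), is_stationary(swinging))  # True False
```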
and step 3: performing zero-speed correction on the static moment data detected in the step 2 by using a Kalman filtering algorithm to obtain an accurate human foot displacement result; the method comprises the following specific steps:
the zero-speed correction algorithm estimates errors of speed, position and attitude through a Kalman filter by using observed quantity (namely, the speed is zero) obtained when the pedestrian is static, and feeds the estimated error parameters back to an inertial navigation system, so that the pedestrian displacement calculation precision is improved.
Kalman filtering is a recursive linear minimum-variance estimation. Its advantage is that the filter is designed in the time domain by a state-space method and the signal characteristics are described by a state-transition equation, which avoids decomposing the signal power spectrum. Given the stochastic state-space model

$$X_k = \Phi_{k/k-1} X_{k-1} + \Gamma_{k/k-1} W_{k-1}, \qquad Z_k = H_k X_k + V_k$$

where $X_k$ is the state vector, $Z_k$ the measurement vector, $\Phi_{k/k-1}$ the one-step state-transition matrix, $\Gamma_{k/k-1}$ the system-noise distribution matrix, $H_k$ the measurement matrix, $W_{k-1}$ the system-noise vector and $V_k$ the measurement-noise vector. Both noise vectors are zero-mean (normally distributed) Gaussian white sequences and are uncorrelated with each other, namely

$$E[W_k] = 0,\; E[W_k W_j^T] = Q_k \delta_{kj}; \quad E[V_k] = 0,\; E[V_k V_j^T] = R_k \delta_{kj}; \quad E[W_k V_j^T] = 0$$
The Kalman filtering process can be divided into five basic formulas, as follows:

State one-step prediction: $\hat{X}_{k/k-1} = \Phi_{k/k-1} \hat{X}_{k-1}$

State one-step prediction mean-square error: $P_{k/k-1} = \Phi_{k/k-1} P_{k-1} \Phi_{k/k-1}^T + \Gamma_{k/k-1} Q_{k-1} \Gamma_{k/k-1}^T$

Filter gain: $K_k = P_{k/k-1} H_k^T \left( H_k P_{k/k-1} H_k^T + R_k \right)^{-1}$

State estimation: $\hat{X}_k = \hat{X}_{k/k-1} + K_k \left( Z_k - H_k \hat{X}_{k/k-1} \right)$

State estimation mean-square error: $P_k = (I - K_k H_k) P_{k/k-1}$
The zero-velocity correction algorithm uses the observation available at stationary moments (namely, that the velocity is zero) to estimate the other error parameters and feeds the estimated errors back to the inertial navigation system, thereby improving the accuracy of the pedestrian displacement calculation.
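A minimal sketch of the five Kalman equations applied to a zero-velocity update, with the state taken as a 3-D velocity error and the velocity read out during stillness as the measurement. The Φ, H, Q and R matrices here are illustrative assumptions; a full pedestrian navigation filter would also carry position and attitude error states.

```python
import numpy as np

# Toy ZUPT: state = 3-D velocity error; measurement = velocity observed
# while the foot is known to be still (true value zero). All matrices are
# illustrative assumptions, not taken from the patent.

Phi = np.eye(3)                     # velocity error roughly constant over one step
H   = np.eye(3)                     # the velocity error is observed directly
Q   = 1e-4 * np.eye(3)              # assumed process-noise covariance
R   = 1e-2 * np.eye(3)              # assumed measurement-noise covariance

x = np.zeros(3)                     # error estimate
P = np.eye(3)                       # initial uncertainty
z = np.array([0.05, -0.03, 0.01])   # velocity read out during stillness

# one prediction/update cycle (the five basic formulas)
x_pred = Phi @ x                                          # state one-step prediction
P_pred = Phi @ P @ Phi.T + Q                              # prediction mean-square error
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)    # filter gain
x = x_pred + K @ (z - H @ x_pred)                         # state estimation
P = (np.eye(3) - K @ H) @ P_pred                          # estimation mean-square error

print(np.round(x, 3))               # estimated velocity error, fed back to the INS
```

Note how the corrected estimate moves almost all the way to the observed velocity error, while the covariance shrinks sharply; this shrinking uncertainty is what makes the feedback to the inertial solution effective.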
The currently recognizable pedestrian movement modes comprise eight modes: forward movement, backward movement, left-side movement, right-side movement, stepping, in-place jumping, leg kicking and motion acceleration. The eight modes fall into two classes: motion modes, comprising forward, backward, left-side and right-side movement; and special modes, comprising stepping, in-place jumping, leg kicking and motion acceleration. Since the stepping mode can be recognized, a motion-mode keeping strategy is added to the gait recognition algorithm: during use, the user can keep the current motion mode by stepping in place. This strategy greatly reduces the space required for the simulated scene, so the application scene is no longer limited by it, and control of a virtual character's large-range movement becomes possible within a small space.
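The motion-mode keeping strategy can be sketched as a tiny state machine. The rule that stepping preserves the current mode comes from the text; the exact handling of the special modes below is an assumption.

```python
# Sketch of the motion-mode keeping strategy: stepping in place keeps the
# active motion mode, a directional gait switches to it. Handling of the
# other special modes is an assumption.

MOTION_MODES = {"forward", "backward", "left", "right"}

def next_mode(current_mode, detected_gait):
    if detected_gait == "step_in_place":
        return current_mode            # keep the existing motion mode
    if detected_gait in MOTION_MODES:
        return detected_gait           # switch to the new directional mode
    return None                        # special modes (jump, kick, ...) handled elsewhere

mode = "forward"
for gait in ["step_in_place", "step_in_place", "left", "step_in_place"]:
    mode = next_mode(mode, gait) or mode
print(mode)  # left
```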
Step 4: obtaining the gait information of a single foot through a gait recognition algorithm using the accurate human foot displacement result obtained in step 3. The specific steps are as follows:
When a single foot is stationary, its movement has either finished or not yet started; when stillness is detected, the single-foot gait information can be judged by comparing the displacement result at that moment with the displacement result at the previous stationary moment.
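A hedged sketch of that comparison: classify the single-foot gait from the displacement between two consecutive stationary moments. The axis convention and the 0.1 m dead zone are illustrative assumptions, not values from the patent.

```python
# Sketch of single-foot gait judgment from the displacement between two
# consecutive stationary moments; axes and dead zone are assumptions.

def single_foot_gait(prev_still_pos, curr_still_pos, dead_zone=0.1):
    dx = curr_still_pos[0] - prev_still_pos[0]   # assumed forward axis
    dy = curr_still_pos[1] - prev_still_pos[1]   # assumed left axis
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "step_in_place"                   # no net displacement
    if abs(dx) >= abs(dy):
        return "forward" if dx > 0 else "backward"
    return "left" if dy > 0 else "right"

print(single_foot_gait((0.0, 0.0), (0.45, 0.02)))   # forward
print(single_foot_gait((0.0, 0.0), (0.03, -0.02)))  # step_in_place
```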
Step 5: obtaining the gait information of the human body from the single-foot gait information obtained in step 4 through a biped gait fusion recognition algorithm. The specific steps are as follows:
Using the same principle as for the single-foot gait judgment, the two pieces of single-foot gait information are compared again to finally obtain the gait information of the human body.
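One way such a fusion could look; the patent only states that the two single-foot results are compared again, so the agreement rules below are assumptions.

```python
# Sketch of biped gait fusion: emit a whole-body gait only when the two
# single-foot results can be reconciled. Agreement rules are assumptions.

def fuse_biped(left_gait, right_gait):
    if left_gait == right_gait:
        return left_gait                     # both feet agree
    if "step_in_place" in (left_gait, right_gait):
        # one foot stepping in place: take the other foot's direction
        return left_gait if right_gait == "step_in_place" else right_gait
    return "undetermined"                    # conflicting feet: wait for next gait

print(fuse_biped("forward", "step_in_place"))  # forward
```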
Step 6: the specific steps of reporting the human body gait information obtained in the step 5 to virtual displacement display software for real-time display are as follows:
The human gait information is sent to the software over Bluetooth according to a specified protocol; after receiving a data packet, the software first parses it according to the protocol to obtain the gait information of the human body and then displays it.
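A sketch of this report path: pack the gait result into a small binary packet, send it over Bluetooth, and parse it on the software side according to the protocol. The packet layout (header byte, gait code, two float32 displacements) is entirely a hypothetical protocol for illustration; the patent does not disclose its actual format.

```python
import struct

# Hypothetical gait-report protocol: 1-byte header, 1-byte gait code,
# two little-endian float32 displacements. Layout is assumed, not from
# the patent.

GAIT_CODES = {0: "step_in_place", 1: "forward", 2: "backward", 3: "left", 4: "right"}

def pack_gait(code, dx, dy):
    return struct.pack("<BBff", 0xA5, code, dx, dy)    # 0xA5: assumed header byte

def parse_gait(packet):
    header, code, dx, dy = struct.unpack("<BBff", packet)
    if header != 0xA5:
        raise ValueError("bad header")
    return GAIT_CODES[code], dx, dy

gait, dx, dy = parse_gait(pack_gait(1, 0.45, 0.02))
print(gait)  # forward
```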

Claims (12)

1. A method for realizing the large-scale movement of a virtual world character by utilizing the motion of human feet in a small space range comprises the following steps:
step 1: sensing the displacement of the human foot in a small space range and resolving the sensed displacement of the human foot in the small space range in real time to obtain IMU (inertial measurement unit) original data of the human foot in the small space range motion process;
step 2: detecting static moment data in IMU original data by a static detection method;
step 3: performing zero-velocity correction on the stationary-moment data detected in step 2 using a Kalman filtering algorithm to obtain an accurate human foot displacement result;
step 4: obtaining the gait information of a single foot through a gait recognition algorithm using the accurate foot displacement result obtained in step 3;
step 5: obtaining the gait information of the human body from the single-foot gait information obtained in step 4 through a biped gait fusion recognition algorithm;
step 6: reporting the human gait information obtained in step 5 to the virtual displacement display software for real-time display.
2. The method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range according to claim 1, wherein in step 1 the displacement of the human foot in the small space range comprises one of, or a combination of two or more of, forward movement, backward movement, left-side movement, right-side movement, stepping in place, jumping in place, leg kicking and motion acceleration of the human leg, preferably stepping in place.
3. The method as claimed in claim 1, wherein the means for sensing the displacement of the human foot in the small space range in step 1 is a foot displacement sensing module bound on the human foot.
4. The method as claimed in claim 3, wherein the foot displacement sensor module is a MEMS inertial device.
5. The method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range as claimed in claim 1, wherein the specific steps in step 1 of sensing the displacement of the human foot in the small space range and resolving the sensed displacement in real time to obtain IMU raw data of the foot's motion are as follows:
Step 1.1: sensing the moving velocity, position and attitude of the human foot with the gyroscope and accelerometer in the MEMS inertial device;
The IMU raw data are as shown in figs. 4a and 4b.
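The sensing of step 1.1 can be illustrated with a minimal Python sketch. This is not the patent's implementation: it assumes gravity has already been removed from the accelerometer samples and ignores attitude rotation, integrating acceleration twice in a fixed frame; the drift such naive integration accumulates is what the zero-velocity correction of step 3 suppresses.

```python
import numpy as np

def integrate_imu(accel, dt):
    """Naive dead reckoning: integrate (gravity-free) acceleration to
    velocity, then velocity to position, at a fixed sample period dt.
    Sensor bias and noise make the position error grow rapidly."""
    vel = np.cumsum(accel * dt, axis=0)   # v[k] = sum of a[0..k] * dt
    pos = np.cumsum(vel * dt, axis=0)     # p[k] = sum of v[0..k] * dt
    return vel, pos
```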
6. The method as claimed in claim 1, wherein the means for sensing the displacement of the human foot in the small space range in step 1 is a foot pad having five sensing areas.
7. The method for achieving the large-scale movement of the virtual world character through the small-space range motion of the human foot according to claim 1, wherein in the step 1, the device for sensing the small-space range displacement of the human foot is an infrared sensing device or a laser sensing device.
8. The method for realizing the large-scale movement of the character in the virtual world by utilizing the motion of the human foot in the small space range as claimed in claim 1, wherein the step 2 of detecting the static moment data in the IMU raw data by the static detection method comprises the following specific steps:
A detection value is calculated at each moment by a stationary-interval detection method; when the detection value is smaller than a set threshold, the moment is determined to be stationary.
The stationary-interval detection method adopts the Angular Rate Energy (ARE) discrimination method. Denote the angular velocity at time $k$ as $\omega_k$ and the length of the detection window as $N$; then the sum of angular velocity energy in the window starting at time $n$ is:

$$E_n = \sum_{k=n}^{n+N-1} \left\| \omega_k \right\|^2$$

Since the angular velocity is almost zero in a stationary interval, the energy sum must fall below a threshold $\gamma$, so the ARE criterion is:

$$E_n < \gamma \;\Rightarrow\; \text{stationary}$$
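The ARE test can be sketched in Python as follows (an illustrative sketch; the window length and threshold below are hypothetical values, not taken from the patent):

```python
import numpy as np

def detect_stationary(gyro, window=5, threshold=0.3):
    """Angular Rate Energy (ARE) test: a window of gyroscope samples is
    declared stationary when the sum of squared angular-rate norms in
    the window falls below a set threshold."""
    stationary = np.zeros(len(gyro), dtype=bool)
    for n in range(len(gyro) - window + 1):
        energy = np.sum(np.linalg.norm(gyro[n:n + window], axis=1) ** 2)
        if energy < threshold:
            stationary[n:n + window] = True
    return stationary
```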
9. The method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range according to claim 1, wherein in step 3, the specific steps of performing zero-velocity correction on the stationary-moment data detected in step 2 by a Kalman filtering algorithm to obtain an accurate human foot displacement result are as follows:
The zero-velocity correction algorithm uses the observation available when the pedestrian is stationary (namely, that the velocity is zero) to estimate the velocity, position and attitude errors with a Kalman filter, and feeds the estimated error parameters back to the inertial navigation system, thereby improving the pedestrian displacement calculation accuracy.
Kalman filtering is a recursive linear minimum-variance estimation. Its advantage is that the filter is designed in the time domain by the state-space method and the signal characteristics are described by a state transition equation, so that decomposition of the signal power spectrum is avoided. Given the stochastic-system state-space model:

$$X_k = \Phi_{k/k-1} X_{k-1} + \Gamma_{k/k-1} W_{k-1}, \qquad Z_k = H_k X_k + V_k$$

where $X_k$ is the state vector, $Z_k$ is the measurement vector, $\Phi_{k/k-1}$ is the one-step state transition matrix, $\Gamma_{k/k-1}$ is the system noise distribution matrix, $H_k$ is the measurement matrix, $W_{k-1}$ is the system noise vector, and $V_k$ is the measurement noise vector. The system noise and measurement noise are both zero-mean (normally distributed) Gaussian white noise sequences and are mutually uncorrelated, i.e. they satisfy:

$$E[W_k] = 0,\ E[W_k W_j^{\mathrm T}] = Q_k \delta_{kj}; \qquad E[V_k] = 0,\ E[V_k V_j^{\mathrm T}] = R_k \delta_{kj}; \qquad E[W_k V_j^{\mathrm T}] = 0$$
The Kalman filtering process can be divided into five basic formulas, as follows:

State one-step prediction:

$$\hat X_{k/k-1} = \Phi_{k/k-1} \hat X_{k-1}$$

State one-step prediction mean square error:

$$P_{k/k-1} = \Phi_{k/k-1} P_{k-1} \Phi_{k/k-1}^{\mathrm T} + \Gamma_{k/k-1} Q_{k-1} \Gamma_{k/k-1}^{\mathrm T}$$

Filter gain:

$$K_k = P_{k/k-1} H_k^{\mathrm T} \left( H_k P_{k/k-1} H_k^{\mathrm T} + R_k \right)^{-1}$$

State estimation:

$$\hat X_k = \hat X_{k/k-1} + K_k \left( Z_k - H_k \hat X_{k/k-1} \right)$$

State estimation mean square error:

$$P_k = (I - K_k H_k) P_{k/k-1}$$
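The five formulas can be sketched as one predict–update cycle in Python (an illustrative sketch with generic matrices, not the patent's specific error-state filter):

```python
import numpy as np

def kalman_step(x, P, z, Phi, Gamma, H, Q, R):
    """One Kalman cycle: the five basic formulas in order."""
    x_pred = Phi @ x                                         # state one-step prediction
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T           # prediction mean square error
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # filter gain
    x_new = x_pred + K @ (z - H @ x_pred)                    # state estimation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred                # estimation mean square error
    return x_new, P_new
```

In a zero-velocity update, `z` would be the velocity observation at a stationary moment (zero minus the mechanized velocity), and the estimated errors are fed back to the inertial navigation solution.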
The zero-velocity correction algorithm uses the observation obtained at stationary moments (namely, that the velocity is zero) to estimate the other error parameters and feeds the estimated error parameters back to the inertial navigation system, thereby improving the pedestrian displacement calculation accuracy.
10. The method for achieving the large-scale movement of the virtual world character through the motion of the human foot in the small space range according to the claim 1, wherein in the step 4, the specific steps of obtaining the gait information of the human single foot through the gait recognition algorithm by using the accurate human foot displacement result obtained in the step 3 are as follows:
When a single foot is stationary, its movement has either just ended or not yet started; therefore, when a single foot is detected to be stationary, its gait information can be determined by comparing the displacement result at that moment with the displacement result at the previous stationary moment.
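A minimal sketch of this comparison (illustrative only; the patent does not fix the coordinate convention or the exact content of the gait information):

```python
import numpy as np

def step_vector(prev_still_pos, curr_still_pos):
    """Gait info for one step of a single foot: the difference between
    the foot positions at two consecutive stationary moments gives the
    horizontal step length and its heading."""
    delta = np.asarray(curr_still_pos) - np.asarray(prev_still_pos)
    length = float(np.linalg.norm(delta[:2]))                    # horizontal step length
    heading = float(np.degrees(np.arctan2(delta[1], delta[0])))  # step direction in degrees
    return length, heading
```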
11. The method for achieving the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range according to claim 1, wherein in the step 5, the step of obtaining the gait information of the human body by the gait fusion recognition algorithm of the biped through the gait information of the single foot of the human body obtained in the step 4 comprises the following specific steps:
Following the same principle as the single-foot gait determination, the two pieces of single-foot gait information are compared again to finally obtain the human body gait information.
12. The method for realizing the large-scale movement of the virtual world character by utilizing the motion of the human foot in the small space range as claimed in claim 1, wherein in step 6, the specific steps of reporting the human body gait information obtained in step 5 to the virtual displacement display software for real-time display are as follows:
The human body gait information is sent to the software via Bluetooth according to a specified protocol; after receiving a data packet, the software first parses it according to the protocol to obtain the human body gait information, and then displays it.
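The "specified protocol" is not disclosed in the patent, so the sketch below uses an entirely hypothetical packet layout (one header byte, a gait code, step length in millimetres, heading in tenths of a degree) purely to illustrate packing on the sensor side and parsing on the software side:

```python
import struct

# Hypothetical layout: header 0xA5, gait code (uint8),
# step length in mm (uint16 LE), heading in 0.1 deg (int16 LE).
FMT = "<BBHh"

def pack_gait(gait_code, step_mm, heading_tenths):
    """Sensor side: build a gait data packet."""
    return struct.pack(FMT, 0xA5, gait_code, step_mm, heading_tenths)

def parse_gait(packet):
    """Software side: check the header and recover the gait fields."""
    header, gait_code, step_mm, heading_tenths = struct.unpack(FMT, packet)
    if header != 0xA5:
        raise ValueError("bad packet header")
    return gait_code, step_mm, heading_tenths
```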
CN202010725083.6A 2020-07-24 2020-07-24 Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range Pending CN112114660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010725083.6A CN112114660A (en) 2020-07-24 2020-07-24 Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range

Publications (1)

Publication Number Publication Date
CN112114660A true CN112114660A (en) 2020-12-22

Family

ID=73799696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010725083.6A Pending CN112114660A (en) 2020-07-24 2020-07-24 Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range

Country Status (1)

Country Link
CN (1) CN112114660A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112857394A (en) * 2021-01-05 2021-05-28 广州偶游网络科技有限公司 Intelligent shoe and action recognition method, device and storage medium thereof

Similar Documents

Publication Publication Date Title
US11481024B2 (en) Six degree of freedom tracking with scale recovery and obstacle avoidance
US9401025B2 (en) Visual and physical motion sensing for three-dimensional motion capture
Tian et al. Accurate human navigation using wearable monocular visual and inertial sensors
AU2009240847B2 (en) Three-dimensional motion capture
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
US20100194879A1 (en) Object motion capturing system and method
Yuan et al. 3-D localization of human based on an inertial capture system
CN111353355B (en) Motion tracking system and method
US10347001B2 (en) Localizing and mapping platform
Zheng et al. Pedalvatar: An IMU-based real-time body motion capture system using foot rooted kinematic model
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
CN109284006B (en) Human motion capturing device and method
CN111194122A (en) Somatosensory interactive light control system
CN109242887A (en) A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
Oskiper et al. Stable vision-aided navigation for large-area augmented reality
CN108544494A (en) A kind of positioning device, method and robot based on inertia and visual signature
CN112179373A (en) Measuring method of visual odometer and visual odometer
KR20120059824A (en) A method and system for acquiring real-time motion information using a complex sensor
GB2466714A (en) Hybrid visual and physical object tracking for virtual (VR) system
US20170004631A1 (en) Method and system for visual pedometry
CN112114660A (en) Method for realizing large-scale movement of virtual world character by utilizing motion of human foot in small space range
Li et al. Visual-Inertial Fusion-Based Human Pose Estimation: A Review
CN112556681B (en) Vision-based navigation and positioning method for orchard machine
Mohareri et al. Autonomous humanoid robot navigation using augmented reality technique
Irmisch et al. Robust visual-inertial odometry in dynamic environments using semantic segmentation for feature selection

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201222